Instruction: 18F-FDG PET for the lymph node staging of non-small cell lung cancer in a tuberculosis-endemic country: is dual time point imaging worth the effort?

Abstracts:

abstract_id: PUBMED:18283453 18F-FDG PET for the lymph node staging of non-small cell lung cancer in a tuberculosis-endemic country: is dual time point imaging worth the effort? Purpose: This study compared (18)F-FDG positron emission tomography (PET) with thoracic contrast-enhanced CT (CECT) in their ability to stage lymph nodes (LN) in non-small cell lung cancer (NSCLC) in a tuberculosis-prevalent country. The usefulness of dual time point PET imaging (DTPI) in NSCLC nodal staging was also evaluated. Methods: We reviewed 96 NSCLC patients (mean age, 65.3+/-11.7 years) who had received PET studies before their surgery. DTPI was performed on 37 patients (mean age, 64.8+/-12.2 years), who received an additional scan of the thorax 3 h after tracer injection. The accuracy of nodal staging by CECT and by PET was evaluated against the final histopathology of the hilar and mediastinal LN resected at surgery. Results: The accuracy of nodal staging by CECT was 65.6% and that by PET was 82.3% (p<0.05). Six patients were over-staged and 11 were under-staged by PET. Tuberculosis (n=3, 50%) was the leading cause of false-positive findings, while small tumor foci (n=7, 63.6%) accounted for most false-negatives. For the 37 patients with DTPI, the 45 min standardized uptake values (SUV) and 3 h SUV of negative LNs were significantly lower than those of positive LNs (p<0.0001). Nevertheless, the retention index (RI) showed no significant difference between these two groups. Conclusions: Our study demonstrates that PET is more accurate than CECT for LN staging of NSCLC patients in Taiwan, where TB is still prevalent. The semi-quantitative SUV method and DTPI with RI do not result in better diagnostic accuracy than visual analysis of PET images.

abstract_id: PUBMED:19430783 18F-FDG PET/CT in mediastinal lymph node staging of non-small-cell lung cancer in a tuberculosis-endemic country: consideration of lymph node calcification and distribution pattern to improve specificity. Purpose: The aim of the study was to assess the accuracy of (18)F-fluorodeoxyglucose (FDG) PET/CT in mediastinal lymph node staging of patients with non-small-cell lung cancer (NSCLC) in a region with a high prevalence of granulomatous disease. Methods: Between March 2004 and February 2006, all patients with NSCLC underwent FDG PET/CT and contrast-enhanced thoracic CT, and subsequent surgical resection. PET/CT and contrast-enhanced CT images of 182 patients (126 men and 56 women; mean age 60.7 years) with NSCLC were acquired. Mediastinal node staging was determined using the American Joint Committee on Cancer (AJCC) staging system. Surgical and histological findings served as the reference standard. Results: A total of 182 patients with 778 mediastinal node stations were evaluated. Sensitivity and specificity of contrast-enhanced CT were 36% and 80% on a per-patient basis and 23% and 92% on a per-node-station basis. Sensitivity and specificity of PET/CT were 81% and 73% on a per-patient basis and 75% and 85% on a per-node-station basis. After lymph nodes with calcification and bilateral hilar distribution were considered benign, sensitivity and specificity of PET/CT were 75% and 89% on a per-patient basis and 66% and 96% on a per-node-station basis. Conclusion: This prospective study suggests that FDG PET/CT can more accurately stage mediastinal lymph nodes than CT. Considering lymph node calcification and distribution pattern could improve specificity at the cost of a decrease in sensitivity.

abstract_id: PUBMED:28920249 FDG PET/CT for the preoperative nodal staging of non-small cell lung cancer in a tuberculosis-endemic country: Are maximum standardized uptake values useful? Background: The aim of this study was to determine an optimum standardized uptake value threshold for identifying nodal metastasis in non-small cell lung cancer (NSCLC) patients using Fluorine-18 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography/computed tomography (PET/CT) in Taiwan, a tuberculosis-endemic country. The variation in standardized uptake values of nodal metastases among different NSCLC histological subtypes was also evaluated. Methods: We retrospectively reviewed 75 NSCLC patients who had received FDG PET/CT before surgery. The diagnostic accuracy of FDG PET/CT for preoperative nodal staging was evaluated against histopathologic findings. Results: A total of 316 nodal stations were evaluated. The sensitivity and specificity of FDG PET/CT for nodal staging were 58.6% and 81.8%, respectively, using an SUV cut-off of 2.6. With regard to the levels of mean SUVmax in the true-positive and false-positive groups, there was no significant difference among the histological subtypes. Conclusion: The present study demonstrated that FDG PET/CT for preoperative nodal staging using SUVmax > 2.6 is a useful tool (with a higher specificity and a higher negative predictive value) to rule out the possibility of metastatic lymphadenopathy in operable patients with NSCLC.

abstract_id: PUBMED:22436434 Meta-analysis study of lymph node staging by 18F-FDG PET/CT scan in non-small cell lung cancer: comparison of TB and non-TB endemic regions. Lymph node staging in non-small cell lung cancer (NSCLC) is challenging and important for deciding treatment policy. The role of PET/CT scans in lymph node staging of NSCLC remains controversial when comparing TB and non-TB endemic regions. This study systematically reviews the literature regarding the diagnostic performance of PET/CT in lymph node staging of patients with NSCLC, and determines its pooled sensitivity and specificity. Methods: The PubMed, Medline, and Cochrane Library databases were searched for relevant studies. Two reviewers independently assessed the methodological quality of each study. A meta-analysis of the reported sensitivity and specificity of each study was performed. Results: Seven of 86 studies were included. These studies had moderate to good methodological quality. Pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio for patient-based analyses (five studies) were 66%, 92.7%, 5.86, and 0.41, respectively, and those for lesion-based analyses (six studies) were 59.4%, 96.5%, 9.37, and 0.31, respectively. Subanalysis of tuberculosis (TB)-endemic regions showed that these regions had lower sensitivity and similar specificity compared with non-TB endemic regions. Conclusion: PET/CT showed high specificity in the lymph node staging of NSCLC and lower sensitivity in TB-endemic regions.

abstract_id: PUBMED:25655120 Update on nodal staging in non-small cell lung cancer with integrated positron emission tomography/computed tomography: a meta-analysis. Objectives: The number of primary studies on fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) has been increasing rapidly. We therefore updated the meta-analysis of the test performance of FDG PET/CT for nodal staging in non-small cell lung cancer (NSCLC) to include the most recent studies. Methods: We performed a systematic search of MEDLINE and EMBASE for English publications using the keywords "positron emission tomography", "lung cancer", and "lymph node". All searches were limited to human studies. Inclusion criteria were studies of the initial nodal staging of NSCLC with PET/CT. Exclusion criteria were (1) studies using PET alone, (2) previous therapy before PET/CT, (3) nodal staging not confirmed by histology, and (4) reviews, abstracts, and editorial materials. In total, 786 articles were identified through the database search. Results: 28 studies including 3,255 patients and 11,887 lymph nodes (LN) were eligible for this study. The pooled sensitivity was 0.62 (95% CI 0.54-0.70), ranging widely from 0.13 to 0.98. The specificity ranged between 0.72 and 0.98, with an overall estimated specificity of 0.92 (0.88-0.95) for node-based data. The pooled sensitivity, specificity, and positive and negative likelihood ratios were 0.67 (0.54-0.79), 0.87 (0.82-0.91), 5.20 (3.59-7.54), and 0.37 (0.25-0.55) for patient-based data. Studies from tuberculosis (TB)-endemic countries showed lower sensitivity (0.56 vs 0.68, p = 0.03) for node-based data and lower specificity (0.83 vs 0.89, p < 0.01) for patient-based data. Conclusions: PET/CT has a high specificity, but low sensitivity, for detecting LN metastasis in patients with NSCLC. TB might be one of the main reasons for the lower sensitivity of PET/CT in several countries. Clinicians treating lung cancer should be aware of the possibility of hidden metastatic LNs when there is bilateral FDG uptake in mediastinal and hilar LNs, especially in TB-endemic countries.
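The likelihood ratios quoted in the two meta-analyses above follow from sensitivity and specificity; note that bivariate pooling estimates the ratios jointly, so the published pooled values need not exactly equal the ratio of the pooled sensitivity and specificity. A minimal sketch of the arithmetic in Python; the function names and the 27% pre-test prevalence (borrowed from the large surgical series below, PUBMED:17311309) are our own illustrative choices:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sensitivity / (1.0 - specificity)   # how much a positive scan raises the odds
    lr_neg = (1.0 - sensitivity) / specificity   # how much a negative scan lowers the odds
    return lr_pos, lr_neg

def post_test_probability(pretest_prob, lr):
    """Convert pre-test probability to post-test probability via odds."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# Illustrative values in the range reported above (sens 66%, spec 92.7%)
lr_pos, lr_neg = likelihood_ratios(0.66, 0.927)
print(lr_pos, lr_neg)                       # ~9.0 and ~0.37
print(post_test_probability(0.27, lr_pos))  # positive scan at 27% pre-test prevalence -> ~0.77
```

At that prevalence, a negative scan would lower the probability of nodal metastasis to roughly 12%, which is why several of these studies emphasize the negative predictive value of PET/CT in TB-endemic settings.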
abstract_id: PUBMED:30606144 PET-CT has low specificity for mediastinal staging of non-small-cell lung cancer in an endemic area for tuberculosis: a diagnostic test study (LACOG 0114). Background: The present study aimed to assess the performance of 18F-FDG PET-CT for mediastinal staging of non-small cell lung cancer (NSCLC) in a location with endemic granulomatous infectious disease. Methods: Diagnostic test study including patients aged 18 years or older with operable stage I-III NSCLC and an indication for mediastinal lymph node biopsy. All patients underwent an 18F-FDG PET scan before invasive mediastinal staging, performed either through mediastinoscopy or thoracotomy, which was considered the gold standard. Surgeons and pathologists were blinded to the scan results. The primary endpoint was to evaluate the sensitivity, specificity, and positive and negative predictive values of PET-CT with images acquired in the first hour of the exam protocol, using predefined cut-offs of maximal SUV, on a per-patient basis. Results: Overall, 85 patients with operable NSCLC underwent PET-CT followed by invasive mediastinal staging. Mean age was 65 years; 49 patients were male and 68 were white. One patient presented with active tuberculosis and none had HIV infection. Using any SUVmax > 0 as the qualitative criterion for positivity, sensitivity and specificity were 0.87 and 0.45, respectively. Even when the highest SUV cut-off was used (SUVmax ≥5), specificity remained low (0.79), with an estimated positive predictive value of 54%. Conclusions: Our findings are in line with the most recent publications and guidelines, which recommend that PET-CT must not be used on its own for mediastinal staging, even in a region with a high burden of tuberculosis. Trial Registration: The LACOG 0114 study was registered at ClinicalTrials.gov, before study initiation, under identifier NCT02664792.

abstract_id: PUBMED:26661845 (18)F-FDG and (18)F-FLT PET/CT imaging in the characterization of mediastinal lymph nodes. Purpose: There is currently no single modality for accurate characterization of enlarged mediastinal lymph nodes as benign or malignant. Recently, (18)F-fluorothymidine (FLT) has been used as a proliferation marker. In this prospective study, we examined the role of (18)F-fluorodeoxyglucose ((18)F-FDG) positron emission tomography/computed tomography (PET/CT) and (18)F-FLT PET/CT in categorizing mediastinal lymph nodes as benign or malignant. Materials And Methods: A total of 70 consecutive patients with mediastinal lymphadenopathy detected on computed tomography (CT) or chest radiograph underwent whole-body (18)F-FLT PET/CT and (18)F-FDG PET/CT (within 1 week of each other). Lymph nodal tracer uptake was determined by calculation of the standardized uptake value (SUV) for both tracers. Results of PET/CT were compared with the histopathology of the lymph nodes. Results: Histopathology showed thirty-seven patients with sarcoidosis, seven with tuberculosis, nine with non-small cell lung cancer, five with Hodgkin's lymphoma, and twelve with non-Hodgkin's lymphoma. The mean FDG SUVmax of sarcoidosis, tuberculosis, Hodgkin's, and non-Hodgkin's lymphoma was 12.7, 13.4, 8.2, and 8.8, respectively, and the mean FLT SUVmax was 6.0, 5.4, 4.4, and 3.8, respectively. It was not possible to characterize mediastinal lymphadenopathy as benign or malignant solely on the basis of FDG SUVmax values (p > 0.05) or FLT SUVmax values (p > 0.05). There was no significant difference in FDG uptake (p > 0.9) or FLT uptake (p > 0.9) between sarcoidosis and tuberculosis. In lung cancer patients, the FDG SUVmax and FLT SUVmax of lymph nodes with tumor infiltration on biopsy were 6.7 and 3.9, respectively, and of those without nodal infiltration were 6.4 and 3.7, respectively; neither tracer was able to characterize nodal status as malignant or benign (p > 0.05). Conclusion: Though (18)F-FLT PET/CT and (18)F-FDG PET/CT reflect different aspects of biology, i.e., proliferation and metabolism, respectively, neither tracer could provide satisfactory categorization of benign and malignant lymph nodes. The results of this study clearly suggest that differentiation of mediastinal nodes into benign and malignant solely on the basis of SUVmax values cannot be relied upon, especially in settings where tuberculosis and sarcoidosis are common.

abstract_id: PUBMED:17311309 Mediastinal nodal staging of nonsmall cell lung cancer using integrated 18F-FDG PET/CT in a tuberculosis-endemic country: diagnostic efficacy in 674 patients. Background: Integrated (18)fluorine fluorodeoxyglucose ((18)F-FDG) positron emission tomography/computed tomography (PET/CT) has shown somewhat variable sensitivity and specificity for mediastinal nodal staging in granulomatous-disease-endemic areas. The purpose of the study was to prospectively evaluate the efficacy of PET/CT for mediastinal nodal staging in nonsmall cell lung cancer (NSCLC) patients in a tuberculosis-endemic country. Methods: Prospective assessment of the diagnostic efficacy of integrated PET/CT for detecting mediastinal nodal metastasis was performed in 674 patients (M:F ratio = 502:172; mean age, 61 years) with NSCLC. Patients underwent an integrated PET/CT examination and subsequent surgical nodal staging (by mediastinoscopy only in 121 patients and by thoracotomy in 553). Nodes showing greater (18)F-FDG uptake than the mediastinum at PET, without benign calcification or high attenuation (>70 Hounsfield units, HU) at unenhanced CT, were regarded as positive for malignancy. The histologic nodal assessment results were used as the reference standard. Results: Of 2477 mediastinal nodal stations evaluated in 674 patients, 275 (11%) stations in 180 (27%) patients proved to be malignant. On a per-person basis, the overall sensitivity, specificity, and accuracy of PET/CT for mediastinal nodal staging were 61% (110 of 180), 96% (473 of 494), and 86% (583 of 674), respectively. On a per-nodal-station basis, they were 46% (126 of 275), 98% (2154 of 2202), and 92% (2280 of 2477). Conclusions: Integrated PET/CT provides high specificity and reasonably high accuracy, but somewhat low sensitivity, for mediastinal nodal staging of NSCLC. In a tuberculosis-endemic region, the high specificity is achieved at the expense of sensitivity by interpreting calcified nodes, or nodes with high attenuation at CT, as benign even when FDG uptake at PET is high.
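The reading rule used in PUBMED:17311309 is a compound criterion that is easy to misread in prose. Expressed as code it becomes unambiguous; this is a sketch with our own field and function names, not the authors' software:

```python
from dataclasses import dataclass

@dataclass
class NodeFindings:
    suv_node: float          # FDG uptake of the node
    suv_mediastinum: float   # mediastinal blood-pool reference uptake
    calcified: bool          # benign-pattern calcification on CT
    attenuation_hu: float    # attenuation on unenhanced CT, in Hounsfield units

def node_positive(n: NodeFindings, hu_cutoff: float = 70.0) -> bool:
    """Rule described in PUBMED:17311309: call a node malignant only if its
    uptake exceeds the mediastinum AND it shows neither benign calcification
    nor high attenuation (>70 HU) on unenhanced CT."""
    if n.suv_node <= n.suv_mediastinum:
        return False
    if n.calcified or n.attenuation_hu > hu_cutoff:
        return False  # treated as benign despite FDG uptake
    return True

# Hypothetical example: FDG-avid, non-calcified, low-attenuation node -> positive
n = NodeFindings(suv_node=4.2, suv_mediastinum=2.0, calcified=False, attenuation_hu=35.0)
print(node_positive(n))  # True
```

Overriding FDG avidity for calcified or high-attenuation nodes is exactly the mechanism by which the study trades sensitivity for specificity in a TB-endemic population.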
abstract_id: PUBMED:26242535 Integrated positron emission tomography/computed tomography for evaluation of mediastinal lymph node staging of non-small-cell lung cancer in a tuberculosis-endemic area: A 5-year prospective observational study. Background: Integrated positron emission tomography/computed tomography (PET-CT) is a well-validated modality for assessing mediastinal lymph node metastasis in non-small-cell lung cancer (NSCLC), which determines management and predicts survival. Tuberculosis (TB) is known to lead to false-positive PET-CT findings. Objectives: To assess the diagnostic accuracy of PET-CT in identifying mediastinal lymph node involvement of NSCLC in a high TB-endemic area. Methods: Patients who underwent both PET-CT and lymph node tissue sampling for the investigation of suspected NSCLC were prospectively included in this observational study. Results were analysed per patient and per lymph node stage. A post-hoc analysis was performed to test the validity of a maximum standardised uptake value (SUVmax) cut-off for lymph node positivity. Results: PET-CT had a sensitivity of 92.6%, specificity of 48.6%, positive predictive value of 56.8%, and negative predictive value (NPV) of 90.0% in the per-patient analysis. Diagnostic accuracy was 67.2%. Similar values were obtained in the per-lymph-node-stage analysis. TB was responsible for 21.1% of false-positive results. A SUVmax cut-off of 4.5 yielded an improvement in diagnostic accuracy from 64.0% to 84.7% compared with a cut-off of 2.5, but at the cost of decreasing the NPV from 90.6% to 83.5%. Conclusion: In a high TB-endemic area, PET-CT remains a valuable method for excluding mediastinal lymph node involvement in NSCLC. Patients with a negative PET-CT may proceed to definitive management without further invasive procedures. However, PET-CT-positive lymph nodes require pathological confirmation, and the possibility of TB must be considered.
abstract_id: PUBMED:20124294 Positron-emission tomography for lung cancer in a tuberculosis-endemic region. A potential limitation of integrated positron-emission tomography and computed tomography in non-small-cell lung cancer may be false-positive results due to granulomatous disease. This retrospective study examined the accuracy of this imaging modality for mediastinal nodal staging of non-small-cell lung cancer in Hong Kong, where tuberculosis remains endemic. There were 249 lymph node stations evaluated in 107 patients, of whom 38 (36%) had active tuberculosis or evidence of previous tuberculosis. Imaging results were compared with histological findings. The sensitivity, specificity, and accuracy of integrated imaging for mediastinal nodal staging were 52%, 86%, and 80%, respectively; the overall positive predictive value for mediastinal nodal metastasis was 46%, and the overall negative predictive value was 89%. The positive predictive value for mediastinal nodal metastasis was 39% in patients with tuberculosis and 50% in controls; the negative predictive value was high in both groups (92% and 87%). The likelihood ratio for true positives was 6.47 in patients with tuberculosis vs. 10.97 in controls. This suggests that the reliability of positron-emission/computed tomography may be substantially poorer in patients with tuberculosis. Histological confirmation should be considered mandatory in patients with suspected metastasis on integrated imaging.

Answer: The use of 18F-FDG PET for lymph node staging of non-small cell lung cancer (NSCLC) in tuberculosis-endemic countries has been evaluated in several studies, with particular interest in the potential benefits of dual time point imaging (DTPI).

One study (PUBMED:18283453) directly addressed the question of whether DTPI is worth the effort. The study compared the accuracy of PET with thoracic contrast-enhanced CT (CECT) for nodal staging in NSCLC patients in Taiwan, where tuberculosis (TB) is prevalent. The results showed that PET was more accurate than CECT, with an accuracy of 82.3% for PET compared to 65.6% for CECT. However, when evaluating the usefulness of DTPI, the study found that the retention index (RI) showed no significant difference between negative and positive lymph nodes, suggesting that semi-quantitative SUV methods or DTPI with RI did not result in better diagnostic accuracy than visual analysis of PET images.

Other studies have also explored the efficacy of PET/CT in mediastinal lymph node staging in TB-endemic regions. For example, one study (PUBMED:19430783) found that considering lymph node calcification and distribution pattern could improve the specificity of PET/CT. Another study (PUBMED:28920249) suggested that using a standardized uptake value (SUV) cut-off of 2.6 could be a useful tool for ruling out metastatic lymphadenopathy with higher specificity and negative predictive value.

A meta-analysis (PUBMED:22436434) indicated that PET/CT has high specificity but lower sensitivity in TB-endemic regions compared to non-TB-endemic regions. Similarly, another meta-analysis (PUBMED:25655120) confirmed that PET/CT has high specificity but low sensitivity for detecting lymph node metastasis in NSCLC, with TB being a possible reason for lower sensitivity in some countries. In a diagnostic test study (PUBMED:30606144), it was found that even with the highest SUV cut-off, specificity remained low in a region with a high burden of tuberculosis, suggesting that PET-CT should not be used on its own for mediastinal staging.

Taken together, these studies indicate that although PET outperforms CT for nodal staging in TB-endemic settings, dual time point imaging does not appear to add diagnostic value beyond visual analysis, and FDG-avid nodes in these regions still warrant pathological confirmation.
Instruction: Microvascular invasion in hepatocellular carcinoma: is it predictable with a new, preoperative application of diffusion-weighted imaging?

Abstracts:

abstract_id: PUBMED:27840260 Microvascular invasion in hepatocellular carcinoma: is it predictable with a new, preoperative application of diffusion-weighted imaging? Purpose: The study aimed to explore the use of MRI in predicting the microvascular invasion (MVI) of hepatocellular carcinoma (HCC). Materials And Methods: The preoperative MRI scans and resected tissues of HCC patients were collected. The imaging characteristics that have previously been suggested, together with the mismatch between diffusion-weighted imaging (DWI) and T2-weighted imaging of regions, which the authors called DWI/T2 mismatch, were analyzed and compared with histopathological references. Results: A multivariate logistic regression analysis showed that DWI/T2 mismatch was an independent predictor of MVI. Conclusion: The DWI/T2 mismatch can serve as a preoperative predictor of MVI for HCC.

abstract_id: PUBMED:35646649 The Roles of Diffusion Kurtosis Imaging and Intravoxel Incoherent Motion Diffusion-Weighted Imaging Parameters in Preoperative Evaluation of Pathological Grades and Microvascular Invasion in Hepatocellular Carcinoma. Background: Currently, there are disputes about the parameters of diffusion kurtosis imaging (DKI), intravoxel incoherent motion (IVIM), and diffusion-weighted imaging (DWI) in predicting pathological grades and microvascular invasion (MVI) in hepatocellular carcinoma (HCC). The aim of our study was to investigate and compare the predictive power of DKI and IVIM-DWI parameters for the preoperative evaluation of pathological grades and MVI in HCC. Methods: The PubMed, Web of Science, and Embase databases were searched for relevant studies published from inception to October 2021. Review Manager 5.3 was used to summarize the standardized mean differences (SMDs) of mean kurtosis (MK), mean diffusivity (MD), tissue diffusivity (D), pseudo diffusivity (D*), perfusion fraction (f), mean apparent diffusion coefficient (ADCmean), and minimum apparent diffusion coefficient (ADCmin). Stata 12.0 was used to pool the sensitivity, specificity, and area under the curve (AUC). Overall, 42 eligible studies with 3,807 cases of HCC were included in the meta-analysis. Results: The SMDs of ADCmean, ADCmin, and D values, but not those of D* and f values, significantly differed between well, moderately, and poorly differentiated HCC (P < 0.01). The sensitivity, specificity, and AUC of MK, D, ADCmean, and ADCmin for the preoperative prediction of poorly differentiated HCC were 69%/94%/0.89, 87%/80%/0.89, 82%/75%/0.86, and 83%/64%/0.81, respectively. In addition, the sensitivity, specificity, and AUC of D and ADCmean for the preoperative prediction of well-differentiated HCC were 87%/83%/0.92 and 82%/88%/0.90, respectively. The SMDs of ADCmean, ADCmin, D, MD, and MK values, but not f values, showed significant differences (P < 0.01) between MVI-positive (MVI+) and MVI-negative (MVI-) HCC. The sensitivity and specificity of D and ADCmean for the preoperative prediction of MVI+ were 80%/80% and 74%/71%, respectively; the AUC of D (0.87) was significantly higher than that of ADCmean (0.78) (Z = -2.208, P = 0.027). Sensitivity analysis showed that the results of the above parameters were stable and reliable, and subgroup analysis confirmed a good prediction effect. Conclusion: DKI parameters (MD and MK) and IVIM-DWI parameters (D value, ADCmean, and ADCmin) can be used as a noninvasive and simple preoperative examination method to predict the grade and MVI in HCC. Compared with ADCmean and ADCmin, MD and D values have higher diagnostic efficacy in predicting the grade of HCC, and the D value has superior diagnostic efficacy to ADCmean in predicting MVI+ in HCC. However, the f value cannot predict the grade or MVI in HCC.

abstract_id: PUBMED:24033853 Microvascular invasion in small hepatocellular carcinoma: is it predictable with preoperative diffusion-weighted imaging? Background And Aim: The presence of microvascular invasion (MVI) is an independent risk factor affecting recurrence-free survival following surgical treatment for small hepatocellular carcinoma (HCC). Our aim in this study was to investigate whether diffusion-weighted imaging (DWI) could be useful in predicting MVI for small HCC. Methods: Breath-hold DWI (b-values 0 and 500 s/mm(2)) and gadopentetate dimeglumine-enhanced dynamic imaging from the preoperative magnetic resonance imaging of 109 surgically proven small HCCs from 92 patients were retrospectively analyzed. The signal intensity ratio on DWI and the apparent diffusion coefficients (ADCs) of the lesions were quantitatively measured. The signal intensity ratio and ADC on DWI, tumor size, tumor shape, tumor capsule, peritumoral enhancement on arterial phase images, and dynamic enhancement pattern were analyzed as radiological parameters reflecting MVI and were compared with histopathological references. The chi-square test, Fisher's exact test, Mann-Whitney U test, and the independent t-test were used for univariate analysis. To identify the independent predictors of MVI among these radiological parameters and to evaluate their diagnostic performance, multivariate logistic regression analysis and receiver operating characteristic curve analysis were performed, respectively. Results: Univariate analysis showed that a lower ADC value (P = 0.005) and irregular circumferential enhancement (P = 0.020) had statistically significant associations with MVI. Multiple logistic regression analysis showed that the ADC value and irregular circumferential enhancement were independent predictors of MVI. With a cut-off of 1.227 × 10(-3) mm(2)/s, the ADC value provided a sensitivity of 66.7% and a specificity of 78.6% in the prediction of MVI, with an odds ratio of 7.63 (P < 0.01). Conclusions: Lower ADC values (< 1.227 × 10(-3) mm(2)/s) on DWI with b-values of 0 and 500 s/mm(2) can be a useful preoperative predictor of MVI for small HCCs.
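Several of these studies report the sensitivity and specificity of a fixed ADC cut-off (for example, 1.227 × 10(-3) mm(2)/s in PUBMED:24033853, where MVI-positive tumors show the lower ADC). A minimal sketch of how such a cut-off is evaluated against histopathology; the per-lesion data here are hypothetical:

```python
import numpy as np

def adc_cutoff_performance(adc_mm2_per_s, mvi_positive, cutoff=1.227e-3):
    """Sensitivity/specificity of calling MVI+ when ADC falls below a cut-off
    (MVI-positive tumors show more restricted diffusion, i.e., lower ADC)."""
    adc = np.asarray(adc_mm2_per_s, dtype=float)
    truth = np.asarray(mvi_positive, dtype=bool)
    predicted = adc < cutoff                       # positive call = low ADC
    sensitivity = (predicted & truth).sum() / truth.sum()
    specificity = (~predicted & ~truth).sum() / (~truth).sum()
    return sensitivity, specificity

# Hypothetical per-lesion ADC values (in mm^2/s) and histopathological MVI labels
adc = np.array([0.95, 1.10, 1.30, 1.45, 1.05, 1.60]) * 1e-3
mvi = np.array([True, True, False, False, True, False])
print(adc_cutoff_performance(adc, mvi))  # (1.0, 1.0) on this toy data
```

Sweeping the cut-off over its range and plotting sensitivity against 1 - specificity is exactly the receiver operating characteristic analysis these papers use to report AUC.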
abstract_id: PUBMED:33708636 Diffusion-Weighted Imaging Reflects Tumor Grading and Microvascular Invasion in Hepatocellular Carcinoma. Background: To date, there are inconsistent data about the relationships between diffusion-weighted imaging (DWI) and tumor grading/microvascular invasion (MVI) in hepatocellular carcinoma (HCC). Our purpose was to systematize the reported results regarding the role of DWI in the prediction of tumor grading/MVI in HCC. Method: The MEDLINE, Scopus, and Embase databases were screened up to December 2019. Overall, 29 studies with 2,715 tumors were included in the analysis. There were 20 studies regarding DWI and tumor grading, 8 studies about DWI and MVI, and 1 study that investigated DWI, tumor grading, and MVI in HCC. Results: In 21 studies (1,799 tumors), mean apparent diffusion coefficient (ADC) values (ADCmean) were used for distinguishing HCCs. ADCmean values of G1-3 lesions overlapped significantly. In 4 studies (461 lesions), minimum ADC (ADCmin) was used. ADCmin values in G1/2 lesions were over 0.80 × 10-3 mm2/s and in G3 tumors below 0.80 × 10-3 mm2/s. In 4 studies (241 tumors), true diffusion (D) was reported. A significant overlap of D values between the G1, G2, and G3 groups was found. ADCmean and MVI were analyzed in 9 studies (1,059 HCCs). ADCmean values of MVI+/MVI- lesions overlapped significantly. ADCmin was used in 4 studies (672 lesions). ADCmin values of MVI+ tumors fell below 1.00 × 10-3 mm2/s. In 3 studies (227 tumors), D was used. The D values of MVI+ lesions were also predominantly below 1.00 × 10-3 mm2/s. Conclusion: ADCmin reflects tumor grading, and ADCmin and D predict MVI in HCC. Therefore, these DWI parameters should be estimated for every HCC lesion for pretreatment tumor stratification. ADCmean cannot predict tumor grading/MVI in HCC.

abstract_id: PUBMED:22767394 Preoperative prediction of the microvascular invasion of hepatocellular carcinoma with diffusion-weighted imaging. The aim of this study was to investigate whether diffusion-weighted imaging (DWI) could be useful in predicting the microvascular invasion (MVI) of hepatocellular carcinoma (HCC). Sixty-seven surgically proven HCCs from 65 patients (54 men and 11 women with an age range of 35-75 years and a mean age of 56.0 years) were retrospectively analyzed. The signal intensities (SIs) of the lesions on preoperative diffusion-weighted (DW) images were visually categorized as isointense or hyperintense in comparison with the adjacent liver. We also quantitatively measured SIs and apparent diffusion coefficients (ADCs) by drawing regions of interest in the HCCs and livers on DW images with b-values of 50, 400, and 800 s/mm(2) and on ADC map images. Logistic regression analyses were performed to identify independent predictors of MVI among laboratory and DWI findings. Univariate analysis showed that a histological grade of 3, a larger tumor size, a higher SI ratio on DW images, and a lower ADC value were significantly associated with MVI. Multiple logistic regression analysis showed that a histological grade of 3 and the ADC value were independent predictors of MVI. With a cutoff of 1.11 × 10(-3) mm(2)/s, the ADC value provided a sensitivity of 93.5% and a specificity of 72.2% for the prediction of MVI, with an odds ratio of 24.5 (95% confidence interval = 4.14-144.8, P < 0.001). In conclusion, lower ADC values (1.11 × 10(-3) mm(2)/s or less) can be a useful predictor of MVI during the preoperative evaluation of HCC.

abstract_id: PUBMED:33303346 Prediction of Microvascular Invasion of Hepatocellular Carcinoma Based on Preoperative Diffusion-Weighted MR Using Deep Learning. Rationale And Objectives: To investigate the value of diffusion-weighted magnetic resonance imaging for the prediction of microvascular invasion (MVI) of hepatocellular carcinoma (HCC) using convolutional neural networks (CNN). Material And Methods: This study was approved by the local institutional review board and the patients' informed consent was waived. Ninety-seven consecutive subjects with 100 HCCs, treated with surgical resection from July 2012 to October 2018, were retrieved. All subjects underwent diffusion-weighted imaging (DWI) performed with single-shot echo-planar imaging in a breath-hold routine. The DWI parameters were three b-values of 0, 100, and 600 s/mm2. First, apparent diffusion coefficient (ADC) images were computed by mono-exponentially fitting the three b-value points. Then, multiple 2D axial patches (28 × 28) of HCCs from the b0, b100, b600, and ADC images were extracted to increase the dataset size for training the CNN model. Finally, the fusion of deep features derived from the three b-value images and the ADC image was conducted based on the CNN model for MVI prediction. The dataset was split into a training set (60 HCCs) and an independent test set (40 HCCs). The output probability of the deep learning model in the MVI prediction of HCCs was assessed by the independent Student's t-test for data following a normal distribution and the Mann-Whitney U test for data violating the normal distribution. The receiver operating characteristic curve and area under the curve (AUC) were also used to assess the performance of MVI prediction of HCCs in the fixed test set. Results: Deep features from the b600 images yielded better performance (AUC = 0.74, p = 0.004) for MVI prediction than those from the b0 (AUC = 0.69, p = 0.023) and b100 (AUC = 0.734, p = 0.011) images. Comparatively, deep features from the ADC map obtained lower performance (AUC = 0.71, p = 0.012) than those from the higher b-value images (b600) for MVI prediction. Furthermore, the fusion of deep features from the b0, b100, b600, and ADC images yielded the best results (AUC = 0.79, p = 0.002) for MVI prediction. Conclusion: Fusion of deep features derived from the three b-value DWI images and the ADC image yields better performance for MVI prediction.
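PUBMED:33303346 computes its ADC maps by mono-exponentially fitting the three b-value images, i.e., fitting S(b) = S0·exp(-b·ADC). A minimal sketch of that fit, written by us for illustration rather than taken from the authors' code:

```python
import numpy as np

def fit_adc(signals, b_values=(0.0, 100.0, 600.0)):
    """Voxel-wise mono-exponential fit S(b) = S0 * exp(-b * ADC).
    Taking logs gives ln S = ln S0 - b * ADC, a straight line in b, so
    ADC is minus the slope of a least-squares fit of ln S on b.
    `signals` has shape (..., len(b_values)); returns ADC with the same
    spatial shape, in mm^2/s when b is given in s/mm^2."""
    b = np.asarray(b_values, dtype=float)
    s = np.clip(np.asarray(signals, dtype=float), 1e-6, None)  # avoid log(0)
    log_s = np.log(s)
    b_mean = b.mean()
    slope = ((b - b_mean) * (log_s - log_s.mean(axis=-1, keepdims=True))).sum(axis=-1) \
            / ((b - b_mean) ** 2).sum()
    return -slope

# Sanity check on a synthetic voxel with ADC = 1.1e-3 mm^2/s
s0, adc_true = 1000.0, 1.1e-3
sig = s0 * np.exp(-np.array([0.0, 100.0, 600.0]) * adc_true)
print(fit_adc(sig))  # ~1.1e-3
```

The same function applies unchanged to a whole volume of shape (z, y, x, 3), since the fit broadcasts over the leading axes.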
abstract_id: PUBMED:37853175 Preoperative prediction of microvascular invasion in hepatocellular carcinoma using diffusion-weighted imaging-based habitat imaging. Objectives: Habitat imaging allows for the quantification and visualization of various subregions within the tumor. We aim to develop an approach using diffusion-weighted imaging (DWI)-based habitat imaging for preoperatively predicting the microvascular invasion (MVI) of hepatocellular carcinoma (HCC). Methods: Sixty-five patients were prospectively included and underwent multi-b-value DWI examinations. Based on the true diffusion coefficient (Dt), perfusion fraction (f), and mean kurtosis coefficient (MK), which respectively characterize cellular density, perfusion, and heterogeneity, the HCCs were divided into four habitats. The volume fraction of each habitat was quantified. Logistic regression was used to explore the risk factors among the habitat fractions and clinical variables. Clinical, habitat, and nomogram models were constructed using the risk factors identified from the clinical characteristics, the habitat fractions, and their combination, respectively. Diagnostic accuracy was evaluated using the area under the receiver operating characteristic curve (AUC). Results: MVI-positive HCC exhibited a significantly higher fraction of habitat 4 (f4) and a significantly lower fraction of habitat 2 (f2) (p < 0.001), which were selected as risk factors. Additionally, tumor size and elevated alpha-fetoprotein (AFP) were also included as risk factors for MVI. The nomogram model demonstrated the highest diagnostic performance (AUC = 0.807), followed by the habitat model (AUC = 0.777) and the clinical model (AUC = 0.708). Decision curve analysis indicated that the nomogram model offered more net benefit in identifying MVI compared to the clinical model. Conclusions: DWI-based habitat imaging shows clinical potential for noninvasively and preoperatively determining the MVI of HCC with high accuracy. Clinical Relevance Statement: The proposed strategy, diffusion-weighted imaging-based habitat imaging, can be applied for preoperatively and noninvasively identifying microvascular invasion in hepatocellular carcinoma, which offers potential benefits in terms of prognostic prediction and clinical management. Key Points:
• This study proposed a strategy of DWI-based habitat imaging for hepatocellular carcinoma.
• The habitat imaging-derived metrics can serve as diagnostic markers for identifying microvascular invasion.
• Integrating the habitat-based metric and clinical variables, a predictive nomogram was constructed and displayed high accuracy for predicting microvascular invasion.
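The habitat analysis above partitions tumor voxels into four subregions from the Dt, f, and MK maps and then uses the per-habitat volume fractions as features. The abstract does not state how the partition is defined, so the sketch below assumes a simple k-means clustering over the three parameter maps purely for illustration; the function name and the z-scoring step are also our own choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def habitat_fractions(dt, f, mk, tumor_mask, n_habitats=4, seed=0):
    """Cluster tumor voxels into habitats from (Dt, f, MK) maps and return
    the volume fraction of each habitat. K-means is an assumption here;
    PUBMED:37853175 does not specify the partitioning rule in the abstract."""
    # Gather per-voxel features inside the tumor: shape (n_voxels, 3)
    feats = np.stack([m[tumor_mask] for m in (dt, f, mk)], axis=1)
    # Z-score each parameter so no single map dominates the distance metric
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    labels = KMeans(n_clusters=n_habitats, n_init=10, random_state=seed).fit_predict(feats)
    return np.bincount(labels, minlength=n_habitats) / labels.size
```

The resulting fractions (summing to 1 over the tumor) would then enter a logistic regression alongside tumor size and AFP, mirroring the nomogram construction the study describes.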
abstract_id: PUBMED:38311671 The value of varying diffusion curvature MRI for assessing the microvascular invasion of hepatocellular carcinoma. Purpose: Varying diffusion curvature (VDC) MRI is an emerging diffusion-weighted imaging (DWI) technique that can capture non-Gaussian diffusion behavior and reflect tissue heterogeneity. However, its clinical utility has hardly been evaluated. We aimed to investigate the value of the VDC technique in noninvasively assessing microvascular invasion (MVI) in hepatocellular carcinoma (HCC). Methods: 74 patients with HCCs, including 39 MVI-positive and 35 MVI-negative HCCs, were included in this prospective study. Quantitative metrics between subgroups, clinical risk factors, and diagnostic performance were evaluated. A power analysis was also carried out to determine the statistical power. Results: MVI-positive HCCs exhibited a significantly higher VDC-derived structural heterogeneity measure, D1 (0.680 ± 0.100 × 10-3 vs 0.572 ± 0.148 × 10-3 mm2/s, p = 0.001), and a lower apparent diffusion coefficient (ADC) (1.350 ± 0.166 × 10-3 vs 1.471 ± 0.322 × 10-3 mm2/s, p = 0.0495) compared to MVI-negative HCCs. No statistical significance was observed for the VDC-derived diffusion coefficient, D0, between the subgroups (p = 0.562). Tumor size (odds ratio (OR) = 1.242) and alpha-fetoprotein (AFP) (OR = 2.527) were identified as risk factors for MVI. A predictive nomogram was constructed based on D1, ADC, tumor size, and AFP, which exhibited the highest diagnostic accuracy (AUC = 0.817), followed by D1 (AUC = 0.753) and ADC (AUC = 0.647). The diagnostic performance of the nomogram-based model was also validated by the calibration curve and decision curve. Conclusion: VDC can aid in the noninvasive and preoperative diagnosis of HCC with MVI, which may result in clinical benefit in terms of prognostic prediction and clinical decision-making.

abstract_id: PUBMED:36062283 Diagnostic Accuracy of the Apparent Diffusion Coefficient for Microvascular Invasion in Hepatocellular Carcinoma: A Meta-analysis. Background And Aims: Microvascular invasion (MVI) is a major risk factor for the early recurrence of hepatocellular carcinoma (HCC), and it seriously worsens the prognosis. Accurate preoperative evaluation of the presence of MVI could greatly benefit the treatment management and prognosis prediction of HCC patients. The study aim was to evaluate the diagnostic performance of the apparent diffusion coefficient (ADC), a quantitative parameter, for the preoperative diagnosis of MVI in HCC patients. Methods: Original articles about diffusion-weighted imaging (DWI) and/or intravoxel incoherent motion (IVIM) conducted on a 3.0 or 1.5 Tesla magnetic resonance imaging (MRI) system, indexed through January 17, 2021, were collected from MEDLINE/PubMed, Web of Science, EMBASE, and the Cochrane Library. Methodological quality was evaluated using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. The pooled sensitivity, specificity, and summary area under the receiver operating characteristic curve (AUROC) were calculated, and meta-regression analysis was performed using a bivariate random-effects model through a meta-analysis. Results: Nine original articles with a total of 988 HCCs were included. Most studies had a low risk of bias and minimal applicability concerns. The pooled sensitivity, specificity, and AUROC of the ADC value were 73%, 70%, and 0.78, respectively. The time interval between the index test and the reference standard was identified as a possible source of heterogeneity by subgroup meta-regression analysis. Conclusions: The meta-analysis showed that the ADC value had moderate accuracy for predicting MVI in HCC. The time interval accounted for the heterogeneity.

abstract_id: PUBMED:33752311 Value of gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid enhanced magnetic resonance imaging and diffusion-weighted MR imaging in predicting microvascular invasion in hepatocellular carcinoma, and the prognostic significance. Objective: To investigate the combined value of gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA) enhanced magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) in preoperatively predicting pathological microvascular invasion (pMVI), and to determine the relationship between the prediction results and prognosis in hepatocellular carcinoma (HCC) patients. Methods: A total of 181 newly diagnosed HCC patients were enrolled in this study. Imaging characteristics and the apparent diffusion coefficient (ADC) value on DWI were analyzed. The differences in imaging characteristics and ADC values between the pMVI groups were analyzed. Multivariate logistic regression and receiver operating characteristic (ROC) curve analysis were used to assess the value of the significant parameters for pMVI prediction. Patients were grouped based on MRI-predicted MVI (mrMVI), and the relationship between mrMVI and recurrence-free survival (RFS) was analyzed. Results: Fifty-one patients were pMVI-positive and 130 patients were pMVI-negative. The ADC value in the pMVI-positive group was (1.10±0.17)×10(-3) mm(2)/s, significantly lower than the (1.27±0.22)×10(-3) mm(2)/s of the pMVI-negative group (P<0.001). The incidence rates of incomplete enhancing "capsule", non-smooth tumor margin, arterial peritumoral enhancement, mosaic architecture, and peritumoral hypointensity on hepatobiliary phase (HBP) images in the pMVI-positive group were significantly higher than those in the negative group (all P<0.05). Multivariate logistic regression analysis showed that tumor margin, arterial peritumoral enhancement, peritumoral hypointensity on HBP, and ADC value were independently associated with pMVI. ROC analysis showed that the area under the curve, sensitivity, and specificity of pMVI prediction by the combined parameters were 0.830, 76.5%, and 81.5%, respectively. The median RFS of the mrMVI-positive group was 23.6 months, significantly shorter than the 38.2 months of the mrMVI-negative group (P=0.004). Conclusion: Tumor margin, arterial peritumoral enhancement, peritumoral hypointensity on HBP, and ADC value are independent predictors of pMVI in HCC, and mrMVI is associated with RFS.
Answer: Microvascular invasion (MVI) in hepatocellular carcinoma (HCC) is an important prognostic factor that can affect recurrence-free survival following surgical treatment. Several studies have investigated the predictability of MVI using preoperative diffusion-weighted imaging (DWI) and have provided evidence that certain DWI parameters can be useful predictors of MVI in HCC.

One study found that a mismatch between DWI and T2-weighted imaging, termed DWI/T2 mismatch, was an independent predictor of MVI (PUBMED:27840260). Another study reported that diffusion kurtosis imaging (DKI) parameters, such as mean kurtosis (MK) and mean diffusivity (MD), as well as intravoxel incoherent motion (IVIM)-DWI parameters like tissue diffusivity (D) and apparent diffusion coefficients (ADCmean and ADCmin), can be used preoperatively to predict the grade and MVI in HCC (PUBMED:35646649). Specifically, the D value was found to have superior diagnostic efficacy compared to ADCmean in predicting MVI-positive HCC.

Further research indicated that lower ADC values on DWI could be useful preoperative predictors of MVI for small HCCs (PUBMED:24033853). Similarly, another study concluded that minimum ADC (ADCmin) reflects tumor grading, and both ADCmin and true diffusion (D) predict MVI in HCC, suggesting that these DWI parameters should be estimated for pretreatment tumor stratification (PUBMED:33708636).

Moreover, a study using deep learning techniques found that the fusion of deep features derived from the three b-value images and the ADC image yields better performance for MVI prediction (PUBMED:33303346). Additionally, DWI-based habitat imaging was proposed as a strategy for preoperatively and noninvasively identifying MVI in HCC with high accuracy (PUBMED:37853175). The emerging VDC MRI technique, which captures non-Gaussian diffusion behavior and reflects tissue heterogeneity, was also shown to aid in the noninvasive and preoperative diagnosis of HCC with MVI (PUBMED:38311671). Lastly, a meta-analysis confirmed that the ADC value had moderate accuracy for predicting MVI in HCC (PUBMED:36062283).

In conclusion, these studies collectively suggest that MVI in HCC can be predicted preoperatively using various applications of DWI, including traditional DWI parameters, advanced techniques like DKI and IVIM, and innovative approaches involving deep learning and habitat imaging.
Instruction: Does Disclosure of Terminal Prognosis Mean Losing Hope?

Abstracts:

abstract_id: PUBMED:26288098 Does Disclosure of Terminal Prognosis Mean Losing Hope? Insights from Exploring Patient Perspectives on Their Experience of Palliative Care Consultations. Background: A primary barrier to physician disclosure of terminal prognosis is concern that patients will lose hope. Inpatient palliative care (IPC) teams are especially poised to mediate this barrier, but little is known about patient perceptions and experience of IPC. Objective: This study aimed to elicit seriously ill patients' perspective and experience of an IPC consultation, and to explore patient attitudes toward information derived from the consultation. Methods: An exploratory, qualitative study was conducted at a large nonprofit community hospital in the Los Angeles area. An established IPC team conducted individualized consults with patients and families within 24 hours of referral. Eligible participants were English-speaking adults, aged 18 or over, who had received an IPC consultation within the previous week during their hospitalization. Purposive recruitment of patients was conducted by the IPC social worker. Interviews were conducted at the bedside using a semistructured interview protocol employing open-ended questions. Results: Twelve seriously ill patients were interviewed. Four themes were identified from the interview transcripts: (1) holistic care approach, (2) knowledge/information gained, (3) hope and enlightenment, and (4) patient readiness. Conclusions: Results suggest that disclosure of a terminal prognosis does not mean loss of patient hope. Instead, hope was redefined around a goal other than cure. Presenting patients with information and increasing their knowledge about care options and resources may help patients identify meaningful goals that are better aligned with their prognosis.

abstract_id: PUBMED:25124599 Iranian cancer patient perceptions of prognosis and the relationship to hope. Background: The aim of this study was to investigate Iranian cancer patients' perceptions of their prognosis, the factors that influence perceptions of prognosis, and the effect this has on patients' level of hope. Materials And Methods: Iranian cancer patients (n=200) completed self-report measures of their perceptions of their prognosis and level of hope, in order to assess the relationship between the two and to identify factors predictive of perceptions by multiple linear regression analysis. Results: Cancer patients perceived their prognosis positively (mean 11.4 out of 15), believed their disease to be curable, and reported high levels of hope (mean 40.4 out of 48.0). Multiple linear regression analyses demonstrated that participants who were younger, perceived they had greater family support, and had higher levels of hope reported more positive perceptions of their cancer prognosis. Conclusions: The positive perceptions of prognosis and their positive correlation with hope in Iranian cancer patients highlight the importance of cultural issues in the disclosure of cancer-related information.

abstract_id: PUBMED:18065734 Hope and prognostic disclosure. Purpose: Physicians sometimes selectively convey prognostic information to support patients' hopes. However, the relationship between prognostic disclosure and hope is not known. Patients And Methods: We surveyed 194 parents of children with cancer (overall response rate, 70%) in their first year of treatment at the Dana-Farber Cancer Institute and Children's Hospital (Boston, MA), and we surveyed the children's physicians. We evaluated relationships between parental recall of prognostic disclosure by the physician and possible outcomes, including hope, trust, and emotional distress. Our main outcome was assessed by asking parents how often the way the child's oncologist communicated with them about the child's cancer made them feel hopeful. Results: Nearly half of parents reported that physician communication always made them feel hopeful. Parents who reported receiving a greater number of elements of prognostic disclosure were more likely to report communication-related hope (odds ratio [OR], 1.77 per element of disclosure; P = .001), even when the likelihood of a cure was low (OR, 5.98 per element of disclosure with likelihood of a cure < 25%; P = .03). In a multivariable model, parents were more likely to report that physician communication always made them feel hopeful when they also reported receipt of more elements of prognostic disclosure (OR, 1.60; P = .03) and high-quality communication (OR, 6.58; P < .0001). Communication-related hope was inversely associated with the child's likelihood of cure (OR, 0.65; P = .005). Conclusion: Although physicians sometimes limit prognostic information to preserve hope, we found no evidence that prognostic disclosure makes parents less hopeful. Instead, disclosure of prognosis by the physician can support hope, even when the prognosis is poor.

abstract_id: PUBMED:26842461 Do patients with lung cancer recall physician-initiated discussions about planning for end-of-life care following disclosure of a terminal prognosis? Background: Communicating with patients about their prognosis and goals of care, including offering opportunities to engage in advance care planning (ACP), is widely recognised as best practice. Little is known about terminally ill patients' perceptions of communication and ACP practice following disclosure of their terminal prognosis. Objectives: To examine whether terminally ill patients with lung cancer and their relatives recall conversations with disclosing physicians about their concerns, goals of care, or any offers to engage in ACP. Methods: Qualitative study using semistructured interviews with patients and their family members. The study setting was a cancer centre and cancer unit in northern England. Results: 25 patients with advanced lung cancer (18 men and 7 women, aged 47-85) and 19 family members, mainly from lower socioeconomic classes, took part in the study. Participants had little or no recall of physicians initiating discussions about their concerns or goals of care, and did not perceive that they had been provided with either information about or opportunities to engage in ACP. Some participants reported a sense of abandonment following the disclosure of a terminal prognosis. This sense was compounded by a range of difficult emotional experiences following the disclosure. Conclusions: It may be inappropriate to initiate discussions about end-of-life care planning immediately following the disclosure of a terminal prognosis. To avoid patients feeling abandoned, physicians need to consider how they, or another appropriate person, can provide information and opportunities for terminally ill patients to engage in a process of ACP. Trial Registration Number: 06/Q2307/22.
abstract_id: PUBMED:21369633 Providing hope in terminal cancer: when is it appropriate and when is it not? Hope is essential in the face of terminal cancer. Generally in Western societies, patients and their families prefer their doctor to engage them in transparent, realistic, authoritative, empathic and open communication about the diagnosis and prognosis of cancer but this topic is not well studied in the Asian context. With the exponential increase in information about cancer and the many permutations in cancer treatment, rational and otherwise, the doctor-patient relationship is even more critical in planning the best treatment strategy and also in rendering both particular and general hope in the patient's war against cancer. Overall, the majority of drugs tested against cancer have failed to reach the market, and those that have, only provide modest benefits, several major therapeutic breakthroughs notwithstanding. Commoditised medicalisation of the dying process ingrained into the contemporary consciousness can potentially create unrealistic or false hope, therapeutic nihilism and a drain on the resources of both the patient and society. These factors can also detract from the dignity of dying as an acceptable natural process. Hope cannot be confined only to focusing merely on the existential dimension of improving survival through technological intervention. Psychosocial and, where appropriate, spiritual interventions and support also play major roles in relieving suffering and providing hope to the patient. Hope cannot be a victim of misinformation from self-interested external parties, nor be an obsession with just buying promises of extending survival time without sufficient regard for quality of life and achieving a good death. abstract_id: PUBMED:29278434 Intended and unintended consequences: Ethics, communication, and prognostic disclosure in pediatric oncology. Background: The majority of patients desire all available prognostic information, but some physicians hesitate to discuss prognosis. The objective of the current study was to examine outcomes of prognostic disclosure among the parents of children with cancer. Methods: The authors surveyed 353 parents of children with newly diagnosed cancer at 2 tertiary cancer centers, and each child's oncologist. Using multivariable logistic regression, the authors assessed associations between parental report of elements of prognosis discussions with the oncologist (quality of information/communication and prognostic disclosure) and potential consequences of these discussions (trust, hope, peace of mind, prognostic understanding, depression, and anxiety). Analyses were stratified by oncologist-reported prognosis. Results: Prognostic disclosure was not found to be associated with increased parental anxiety, depression, or decreased hope. Among the parents of children with less favorable prognoses (<75% chance of cure), the receipt of high-quality information from the oncologist was associated with greater peace of mind (odds ratio [OR], 5.23; 95% confidence interval [95% CI], 1.81-15.16) and communication-related hope (OR, 2.54; 95% CI, 1.00-6.40). High-quality oncologist communication style was associated with greater trust in the physician (OR, 2.45; 95% CI, 1.09-5.48) and hope (OR, 3.01; 95% CI, 1.26-7.19). Accurate prognostic understanding was less common among the parents of children with less favorable prognoses (OR, 0.39; 95% CI, 0.17-0.88). 
Receipt of high-quality information, high-quality communication, and prognostic disclosure were not found to be significantly associated with more accurate prognostic understanding. Conclusions: The results of the current study demonstrate no evidence that disclosure is associated with anxiety, depression, or decreased hope. Communication processes may increase peace of mind, trust, and hope. It remains unclear how best to enhance prognostic understanding. Cancer 2018;124:1232-41. © 2017 American Cancer Society. abstract_id: PUBMED:11865880 Disclosure preferences about terminal illness: an examination of decision-related factors. Twenty-six male and 86 female, predominantly White, non-terminal cancer patients addressed preferences for disclosure of terminal prognosis, should their disease advance to that stage. Specific inquiries were made about desired levels of disclosure (full, partial,or non-disclosure) and desired pathways of disclosure (from physician to patient only, from physician to patient in the presence of a loved one, or from physician to loved one only). Gender, previous experience with death, and trait anxiety were associated with level preference. Education, previous experience with death, and trait anxiety were associated with pathway preference.Variables predictive of level and pathway preference were identified, benefits to physicians and patients were explored, and sampling limitations were discussed. abstract_id: PUBMED:29127696 Inspiring hope-A physician's responsibility, translating the science into clinical practice. Giving hope to patients is our responsibility. It is the essence of a meaningful practice in medicine. Science now allows us to understand this complex and multidimensional human dynamic, and translate it into clinical practice. Quantitative research has shown hope is strong even in terminal illness. Through qualitative methodology hope fostering strategies and hope hindering behaviors have been identified. This exciting new knowledge facilitates the challenging task of disclosure of bad news while enabling hope. abstract_id: PUBMED:12812182 'Hitting you over the head': oncologists' disclosure of prognosis to advanced cancer patients. The disclosure of prognosis to terminally ill patients has emerged as a recent concern given greater demands for patient involvement in medical decision-making in the United States. As part of the informed consent process, American physicians are legally and ethically obligated to provide information to such patients about risks, benefits, and alternatives of all available treatment options including the use of experimental therapies. Although not legally required, the disclosure of terminal prognosis is ethically justified because it upholds the principle of self-determination and enables patients to make treatment decisions consistent with their life goals. To understand oncologists' attitudes about disclosing prognostic information to cancer patients with advanced disease, we interviewed fourteen oncologists and conducted one focus group of medical fellows. Although oncologists reported to disclose prognosis in terms of cancer not being curable, they tend to avoid using percentages to convey prognosis. Oncologists' reported reluctance to disclosing prognosis was conveyed through the use of metaphors depicting the perceived violent impact of such information on patients. 
Oncologists' reluctance to disclose prognosis and their desire to preserve patient hope are held in check by their need to ensure that patients have 'realistic expectations' about therapy. We discuss these data in light of the cultural, ethical, and legal dimensions of prognosis disclosure, patient hope and the doctor-patient relationship, and recommend ways to enhance the communication process. abstract_id: PUBMED:18022831 Sustaining hope when communicating with terminally ill patients and their families: a systematic review. The aim of this systematic review was to examine studies that have investigated sustaining hope during prognostic and end-of-life issues discussions with terminally ill patients and their families. A comprehensive search of databases (MEDLINE, EMBASE, CINAHL, PsychINFO, Cochrane Central Register of Controlled Trials) and handsearching, from 1985 to June 2006, identified 27 studies. This review suggests that the issues surrounding hope in this context are complex. Despite the lack of unanimity among researchers regarding the definition of hope, findings suggest that balancing hope with honesty is an important skill for health professionals (HPs). Many patients seem to be able to maintain a sense of hope despite acknowledging the terminal nature of their illness. Patients and caregivers mostly preferred honest and accurate information, provided with empathy and understanding. Many different sources of hope were identified in this context in broad aspects of life, not just the medical situation. HPs need to recognize this spectrum of hope and appreciate that patients may simultaneously hope for 'cure' while acknowledging the terminal nature of their illness. HPs may help patients to cope with their terminal prognosis by exploring and fostering realistic forms of hope that are meaningful for the particular patient and their family. Answer: Disclosure of terminal prognosis does not necessarily mean losing hope. Studies have shown that seriously ill patients can redefine hope to focus on goals other than cure after receiving a terminal prognosis (PUBMED:26288098). Iranian cancer patients, for example, maintained high levels of hope even with positive perceptions of their prognosis (PUBMED:25124599). Furthermore, parents of children with cancer reported that physician communication about prognosis often made them feel hopeful, and disclosure of prognosis by the physician can support hope, even when the prognosis is poor (PUBMED:18065734). However, the timing and manner of disclosure are crucial. Some terminally ill patients with lung cancer and their relatives reported little recall of discussions about their concerns or goals of care immediately following the disclosure of a terminal prognosis, which sometimes led to a sense of abandonment (PUBMED:26842461). This suggests that while disclosure is important, it must be handled sensitively and may need to be part of an ongoing conversation rather than a one-time event. In pediatric oncology, disclosure of prognosis was not associated with increased parental anxiety, depression, or decreased hope. Instead, high-quality communication from the oncologist was associated with greater peace of mind and hope (PUBMED:29278434). Similarly, sustaining hope when communicating with terminally ill patients and their families is a complex issue, but many patients can maintain hope despite acknowledging their illness's terminal nature (PUBMED:18022831).
Overall, the evidence suggests that while disclosure of a terminal prognosis is a delicate process, it does not inherently lead to a loss of hope. Instead, it can be an opportunity to foster realistic forms of hope that are meaningful to the patient and their family, provided the communication is empathetic, honest, and ongoing (PUBMED:21369633; PUBMED:29127696).
Instruction: Is the conservative management of chronic retention in men ever justified? Abstracts: abstract_id: PUBMED:14511038 Is the conservative management of chronic retention in men ever justified? Objective: To assess the outcome of men presenting with lower urinary tract symptoms (LUTS) associated with large postvoid residual urine volumes (PVR). Patients And Methods: The study included men presenting with LUTS and a PVR of > 250 mL who, because of significant comorbidity, a low symptom score or patient request, were managed conservatively and prospectively, and were followed with symptom assessment, serum creatinine levels, flow rates and renal ultrasonography. Patients were actively managed if there was a history of previous outflow tract surgery, prostate cancer, urethral strictures, neuropathy, elevated creatinine or hydronephrosis. In all, 93 men (mean age 70 years, range 40-84) with a median (range) PVR of 363 mL (250-700) were included in the study and followed for 5 (3-10) years. At presentation, the median maximum flow rate was 10.2 (3-30) mL/s and the voided volume 316 (89-714) mL. Results: The measured PVR remained stable in 47 (51%), reduced in 27 (29%) and increased in 19 (20%) patients; 31 patients (33%) went on to transurethral resection of the prostate after a median of 30 (10-120) months, because of serum creatinine elevation (two), acute retention (seven), increasing PVR (eight) and worsening symptoms (14). Of 31 patients 25 were available for evaluation after surgery; their median PVR was 159 (0-1000) mL, flow rate 18.4 (4-37) mL/s and voided volume 321 (90-653) mL. Symptoms were improved in all but five men. There was no difference in initial flow rate, voided volume or PVR between those who developed complications or went on to surgery and those who did not. Urinary tract infections (UTIs) occurred in five patients and two developed bladder stones. Conclusions: Complications such as renal failure, acute retention and UTIs are uncommon in men with large, chronic PVRs. Conservative management for this group of patients is reasonable but outpatient review is prudent. There were no factors that could be used to predict those patients who eventually required surgery. abstract_id: PUBMED:38077474 The role of implant retention and conservative management in the management of fracture-related infection. Fracture-related infection (FRI) management has advanced considerably in recent years, offering new possibilities for predictable rates of infection eradication. Debridement, antibiotics, and implant retention (DAIR) procedures have shown promise in the treatment of early FRI. This article provides an overview of the principles and indications of DAIR, including the importance of meticulous debridement and the management of dead space. The outcomes of DAIR are discussed, highlighting the range of fracture union rates reported in the literature. The role of antimicrobial suppression in optimizing host biology and facilitating surgical intervention is also explored. While further research is needed to establish optimal treatment strategies, DAIR offers a valuable treatment approach for FRI when specific criteria are met. Level Of Evidence: IV. abstract_id: PUBMED:25304984 Understanding by older patients of dialysis and conservative management for chronic kidney failure. Background: Older adults with chronic kidney disease stage 5 may be offered a choice between dialysis and conservative management. 
Few studies have explored patients' reasons for choosing conservative management and none have compared the views of those who have chosen different treatments across renal units. Study Design: Qualitative study with semistructured interviews. Settings & Participants: Patients 75 years or older recruited from 9 renal units. Units were chosen to reflect variation in the scale of delivery of conservative management. Methodology: Semistructured interviews audiorecorded and transcribed verbatim. Analytical Approach: Data were analyzed using thematic analysis. Results: 42 interviews were completed, 4 to 6 per renal unit. Patients were sampled from those receiving dialysis, those preparing for dialysis, and those choosing conservative management. 14 patients in each group were interviewed. Patients who had chosen different treatments held varying beliefs about what dialysis could offer. The information that patients reported receiving from clinical staff differed between units. Patients from units with a more established conservative management pathway were more aware of conservative management, less often believed that dialysis would guarantee longevity, and more often had discussed the future with staff. Some patients receiving conservative management reported that they would have dialysis if they became unwell in the future, indicating the conditional nature of their decision. Limitations: Recruitment of older adults with frailty and comorbid conditions was difficult and therefore transferability of findings to this population is limited. Conclusions: Older adults with chronic kidney disease stage 5 who have chosen different treatment options have contrasting beliefs about the likely outcomes of dialysis, beliefs that are influenced by the information provided by renal units. Supporting renal staff in discussing conservative management as a valid alternative to dialysis for a subset of patients will aid informed decision making. There is a need for better evidence about conservative management to support shared decision making for older people with chronic kidney failure. abstract_id: PUBMED:37381745 Comprehensive conservative care for the management of advanced chronic kidney disease. Comprehensive "conservative care" is defined as any active therapeutic procedure for the management of stage 5 chronic kidney disease without recourse to dialysis. This therapeutic option is discussed in elderly, frail patients whose anticipated life expectancy is reduced with dialysis. The decision for conservative management primarily relies on an informed choice by the patient and their caregivers. This holistic approach, focused on quality of life, requires multidisciplinary care. The goals are to slow the progression of kidney disease, prevent complications, anticipate the risks of decompensation, and provide support for the patient and their caregivers to maintain the best possible quality of life at home. This article describes the principles of conservative management, highlights various barriers to this care pathway, and proposes potential solutions. abstract_id: PUBMED:22452619 Chronic urinary retention in men: how we define it, and how does it affect treatment outcome. What's known on the subject? and What does the study add? Chronic urinary retention (CUR) is a poorly defined entity, as the key element of the definition, a significant postvoid residual urine volume (PVR), lacks a worldwide, evidence-based definition.
There is no agreement on the threshold value that defines a significant PVR, and different societies have produced guidelines with thresholds ranging from 300 mL to 1000 mL. Diagnosis is difficult, and management has not yet been defined. There is a lack of studies on the best management of these patients, as this group of patients has always been considered at high risk of failure. Only one study compares conservative with surgical management, but it is not a randomised controlled trial. This review offers a systematic appraisal of the most recent publications on CUR. It indicates the absence of a truly agreed worldwide definition, as its two key elements are not yet satisfactorily defined: a significant PVR lacks an evidence-based definition, and a percussable or palpable bladder is a nebulous concept, as it is not a criterion of certainty and is affected by individual variables. This has an important effect on management, which remains unstructured. Most of the trials involving benign prostatic hyperplasia treatments (either medical or surgical) tend to exclude this group of patients, which is a clinically important group, comprising up to a quarter of men undergoing TURP in the UK. Urinary retention describes a bladder that does not empty completely or does not empty at all. Historically, urinary retention has been classified as either acute or chronic; the latter is generally classified as high pressure or low pressure according to the bladder filling pressure on urodynamics. A MEDLINE® search for articles written in English and published before January 2010 was done using a list of terms related to urinary retention: 'urinary retention', 'chronic urinary retention' and 'PVR'. Chronic urinary retention (CUR) is defined by the International Continence Society as 'a non-painful bladder, which remains palpable or percussable after the patient has passed urine'. Abrams was the first to choose a residual urine volume >300 mL to define CUR as he considered it the minimum volume at which the bladder becomes palpable suprapubically. The UK National Institute for Health and Clinical Excellence lower urinary tract symptoms (LUTS) guidelines define CUR as a postvoid residual urine volume (PVR) of >1000 mL. No studies have specifically addressed the problem of quantifying the minimum amount of urine present in the bladder to define CUR. Nor did we find any publications objectively assessing at what volume of urine a bladder becomes palpable. The ability to feel a bladder may depend on other variables (e.g. clinical skill and patient habitus). There is a marked variability of PVR, so the test should be repeated to improve precision. As defining CUR is difficult, structured management is challenging. Nearly all prospective trials exclude men with CUR from analysis, possibly anticipating a poor outcome and a high risk of complications. However, men with CUR are a clinically important group, comprising up to 25% of men undergoing transurethral resection of the prostate. The definition of CUR is imprecise and arbitrary. Most studies seem to describe the condition as either a PVR of >300 mL in men who are voiding, or >1000 mL in men who are unable to void. This confusion leads to an inability to design and interpret studies; indeed most prospective trials simply exclude these patients.
There is a clear need for internationally accepted definitions of retention to allow both treatment and reporting of outcomes in men with LUTS, and for such definitions to be used by all investigators in future trials. abstract_id: PUBMED:27524521 More hostile dyslipidaemia in chronic kidney disease patients on maintenance haemodialysis than on conservative management. Objective: To study the pattern of dyslipidaemia in patients with stage-V chronic kidney disease on conservative management and those on maintenance haemodialysis. Methods: This comparative observational study was conducted at the Jinnah Postgraduate Medical Centre, Karachi, from February to July, 2008, and comprised stage-V chronic kidney disease patients. The patients were divided in two groups. Those who were on conservative management were placed in Group A, and those who were on maintenance haemodialysis were in Group B. Serum total lipid, cholesterol, triglycerides and high-density lipoprotein for both groups were assayed on chemical analyser and low-density lipoprotein was calculated by Friedwald equation. SPSS 17 was used for data analysis. Results: Of the 120 patients, there were 60(50%) in each group. The mean age of patients in Group A was 46.33±14.56 years and in Group B was 43.4±14.1years. Of all, 64(53.3%) were men and 56(46.7%) were women. Mean systolic and diastolic blood pressure was 134±19.58mmHg and 83.6±14.14mmHg in Group A and 129±19.7mmHg and 79.7±12.1mmHg in Group B. Mean serum total lipid was significantly higher (p<0.01) whereas mean serum cholesterol was significantly lower (p<0.01) in Group B. Comparison of mean serum triglycerides between the groups was also statistically significant (p<0.01) and was high in Group B. Mean serum high-density lipoprotein was lower in Group B (p<0.01). The difference between serum low-density lipoprotein levels was statistically insignificant between the groups (p=0.11). Conclusions: Pattern of dyslipidaemia in patients on maintenance haemodialysis was more hostile than those on conservative management, and posed increased risk of coronary heart disease. abstract_id: PUBMED:32552068 The role of ultrasound imaging in the management of partial placental retention after third trimester livebirth. Objectives: To evaluate the impact of different ultrasound signs in the management and the role of ultrasound guidance in the surgical evacuation of partial placental tissue retention. Methods: This is an observational cohort study and retrospective case assessment of 82 patients with clinical symptoms of partial placental retention following a third trimester singleton livebirth between January 2013 and May 2019. The ultrasound signs were recorded using a standardized protocol and the outcome of the management strategy and the use of ultrasound guidance during any surgical procedure was evaluated. Results: Out of the 64 patients who had a vaginal birth, 25 (39.1%) had a manual removal of the placenta at delivery. Fifteen patients were confirmed as not having retained placental tissue and did not require further treatment. Four patients were referred after failed surgical management and four after failed conservative management. All surgical procedures were vacuum aspiration and forceps removal under continuous ultrasound guidance. 
A significantly lower gestational age at delivery (p < .05), a shorter interval between delivery and ultrasound diagnosis (p < .05) and a lower number of patients presenting with heavy bleeding were found in the conservative compared to the surgical management subgroups (p < .05). The incidence of feeding vessels was significantly (p < .05) higher in the surgical than in the conservative management subgroups and associated with increased myometrial vascularity. Six patients developed intra-uterine adhesions. In four of these cases, ultrasound examination showed a hyperechoic mass surrounded by normal myometrial vascularity and no feeding vessel. Conclusions: Ultrasound imaging accurately differentiated between patients with and without partial placental retention after third trimester livebirth. Ultrasound-guided vacuum aspiration is safe and efficient in these cases. abstract_id: PUBMED:28538218 Dialysis Therapy and Conservative Management of Advanced Chronic Kidney Disease in the Elderly: A Systematic Review. Background: Dialysis in older adults with chronic kidney disease (CKD) and comorbidities may not be associated with improved life expectancy compared to conservative management. To inform clinical practice, we performed a systematic review of all available studies examining this hypothesis. Methods: We performed a systematic review of retrospective and prospective cohort studies of older adults with stage-5 CKD who chose dialysis (hemodialysis or peritoneal dialysis) or opted for conservative management (including management of complications of CKD and palliative care). Outcomes of interest included hospitalizations and mortality. Results: Twelve cohort studies (11,515 patients) were identified, with most of them focusing on older adults. Patients choosing dialysis were younger compared to those opting for conservative management and were less functionally impaired. Patients opting for conservative management received care in a multidisciplinary setting focusing on palliative care and management of complications of CKD. Patients choosing dialysis and conservative management had a median survival time of 8-67 and 6-30 months, respectively. In a subset of studies of patients 65 years and older with an estimated glomerular filtration rate <15 mL/min/1.73 m2, and where the multivariable analyses included age and comorbidities, by meta-analysis, patients choosing dialysis had a pooled adjusted hazard ratio for mortality of 0.53 (95% CI 0.30-0.91, p = 0.02) relative to those opting for conservative management; however, significant heterogeneity precluded definitive conclusions. Conclusions: When caring for older adults with advanced CKD who are contemplating dialysis therapy vs. conservative management, efforts must focus on promoting patient values and preferences, shared decision-making, and symptom burden alleviation. abstract_id: PUBMED:31743240 Conservative vs. preservative management of chronic kidney disease: similarities and distinctions. Purpose Of Review: Dialysis has been the prevailing treatment paradigm in advanced chronic kidney disease (CKD) for patients ineligible for or unlikely to receive kidney transplantation. As dialysis may offer neither survival benefit nor improved quality of life in certain groups, there has been increasing interest in conservative management as an alternative approach.
Recent Findings: Experts and workgroups suggest the main goals of conservative management are to optimize quality of life, treat symptoms of end-stage renal disease without dialysis or transplant, and improve survival and cardiovascular health. Given the implications of preserved kidney function on clinical outcomes, preservative management has been proposed as an integral component of conservative management. Growing evidence suggests the survival benefit of dialysis vs. conservative management without dialysis is marginal or even reversed in certain subpopulations (elderly, multimorbid, cardiovascular disease). Limited data suggest that conservative and preservative management is associated with equivalent to more favorable trajectories of health-related quality of life and symptom burden over time as opposed to dialysis. Summary: Whereas existing data suggest conservative management is a viable patient-centered treatment strategy, further research is needed to determine the comparative effectiveness of preservative kidney management vs. dialysis or palliative management, as well as which patient subgroups will most benefit from these treatment strategies. abstract_id: PUBMED:29546081 Disparities in Confidence to Manage Chronic Diseases in Men. Background: Chronic diseases are highly prevalent among men in the United States and chronic disease management is problematic for men, particularly for racial and ethnic minority men. Objectives: This study examined the association between health information seeking and confidence to manage chronic diseases among men. Methods: Study data were drawn from the 2007 Health Tracking Household Survey and analyzed using multiple binary logistic regressions. The analytical sample included 2,653 men, 18 years and older with a chronic illness. Results: Health information seeking was not associated with confidence to manage chronic illnesses. African-American men had lower odds than White men to agree to take actions to prevent symptoms with their health. Hispanic men had lower odds than White men to agree to tell a doctor concerns they have, even when not asked. Conclusions: Racial and ethnic minority men with a chronic condition appear to be less confident to manage their health compared to white men. Chronic disease management needs greater exploration to understand the best ways to help racial and ethnic minority men successfully manage their chronic condition. Answer: Yes, the conservative management of chronic retention in men can be justified in certain circumstances. A study included men presenting with lower urinary tract symptoms (LUTS) and a large postvoid residual urine volume (PVR) who were managed conservatively due to significant comorbidity, a low symptom score, or patient request. The study found that complications such as renal failure, acute retention, and urinary tract infections (UTIs) were uncommon in men with large, chronic PVRs. Conservative management was deemed reasonable for this group of patients, although outpatient review was recommended. There were no factors that could predict which patients would eventually require surgery, indicating that conservative management can be an appropriate initial approach for some men with chronic retention (PUBMED:14511038). Furthermore, the study on chronic urinary retention (CUR) in men highlights the lack of a universally agreed definition of significant PVR, which ranges from 300 mL to 1000 mL in different guidelines. 
This lack of clarity in definition affects the management of CUR, which is not well-structured. The review suggests that men with CUR, who comprise a clinically important group, have been excluded from most prospective trials, possibly due to anticipated poor outcomes and high risk of complications. This indicates that conservative management may be a necessary consideration for this patient population until more evidence is available (PUBMED:22452619). In summary, conservative management of chronic retention in men is justified in cases where the risks of intervention outweigh the benefits, when patients have significant comorbidities, or when they have a preference for conservative treatment. However, careful monitoring and outpatient review are important to ensure that any necessary intervention is not unduly delayed (PUBMED:14511038).
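To make the competing thresholds concrete, a minimal Python sketch follows. It is illustrative only - the function and example are hypothetical, not drawn from any cited study - and it encodes just the two PVR cut-offs quoted in PUBMED:22452619, not a clinical diagnostic rule.

def classify_cur(pvr_ml, able_to_void):
    # Two commonly quoted cut-offs (PUBMED:22452619): >300 mL for men
    # who are voiding, >1000 mL for men who are unable to void.
    if able_to_void:
        return pvr_ml > 300
    return pvr_ml > 1000

# The median PVR of 363 mL in PUBMED:14511038 exceeds the 300 mL
# cut-off for voiding men, so those patients would be labelled as CUR.
print(classify_cur(363, able_to_void=True))  # True

The sketch also shows why the definitional dispute matters: the same 363 mL residual counts as retention under the 300 mL convention but not under the 1000 mL one.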
Instruction: Can continuous positive airway pressure therapy improve the general health status of patients with obstructive sleep apnea? Abstracts: abstract_id: PUBMED:32247766 Eustachian tube dysfunction in sleep apnea patients and improvements afforded by continuous positive airway pressure therapy. Introduction: Upper airway resistance may accompany eustachian dysfunction and alter middle ear pressure in patients with obstructive sleep apnea syndrome. Objective: To investigate effects of obstructive sleep apnea syndrome and continuous positive airway pressure treatment on eustachian tube functions. Methods: Forty-two mild obstructive sleep apnea syndrome patients, 45 moderate obstructive sleep apnea syndrome patients, 47 severe obstructive sleep apnea syndrome patients with continuous positive airway pressure therapy, 32 severe obstructive sleep apnea syndrome patients without continuous positive airway pressure therapy, and 88 individuals without sleep apnea (controls) were included. Tympanometric parameters of the groups were compared. Results: Right middle ear pressures in the mild and moderate obstructive sleep apnea syndrome groups did not differ significantly from that of the control group (p = 0.93 and p = 0.55), as was also true of the left middle ear pressures (p = 0.94 and p = 0.86). Right middle ear pressure was significantly higher in the severe obstructive sleep apnea syndrome groups than in the control group, as was the left middle ear pressure (p < 0.001). Middle ear pressure (negative) was significantly lower in severe obstructive sleep apnea syndrome patients with continuous positive airway pressure therapy compared to those without (p < 0.001). Right ear type B and C tympanogram frequencies were significantly higher in patients with severe obstructive sleep apnea syndrome without continuous positive airway pressure therapy (12.4%) than in the controls (0%) (p = 0.02). Left ear type B or C tympanogram frequencies were significantly higher in patients with severe obstructive sleep apnea syndrome without continuous positive airway pressure therapy (21.9%) than in the controls (0%) (p = 0.002). Conclusion: Mild and moderate obstructive sleep apnea syndrome did not affect middle ear pressure, but severe obstructive sleep apnea syndrome may increase the (negative) middle ear pressure. In severe obstructive sleep apnea syndrome patients, long-term continuous positive airway pressure therapy may normalize middle ear pressure. abstract_id: PUBMED:25403278 Beneficial effect of continuous positive airway pressure therapy in obstructive sleep apnea syndrome. Continuous positive airway pressure therapy is an evidence-based therapy for obstructive sleep apnea syndrome. The effective treatment of obstructive sleep apnea can decrease sympathicotonia and, consequently, blood pressure. Furthermore, it can improve inflammatory and metabolic parameters, resulting in a decreased cardiovascular risk. This article summarizes the positive effects of continuous positive airway pressure therapy on cardiovascular risk factors in obstructive sleep apnea syndrome. abstract_id: PUBMED:18198804 Interventions to improve compliance in sleep apnea patients previously non-compliant with continuous positive airway pressure. Study Objectives: Despite widespread agreement that continuous positive airway pressure is effective therapy for obstructive sleep apnea, it is estimated that 50% of patients recommended for therapy are noncompliant 1 year later. Interventions to improve compliance in such patients have not been studied.
We evaluated a 2 phase intervention program to improve compliance in sleep apnea patients previously noncompliant with continuous positive airway pressure. Methods: 204 patients with previously diagnosed obstructive sleep apnea and noncompliant with continuous positive airway pressure were enrolled. Phase 1 evaluated standard interventions to improve therapy compliance, including mask optimization, heated humidification, topical nasal therapy, and sleep apnea education. Persistently noncompliant patients proceeded to phase 2, where compliance was compared in double-blind randomized fashion between standard continuous positive airway pressure and flexible bilevel positive airway pressure. Results: 49 (24%) of 204 previously noncompliant patients became compliant (average nightly use > or =4 hours) after standard interventions. Then 104 of the 155 persistently noncompliant patients agreed to continue and were randomized to either CPAP or flexible bilevel positive airway pressure retitration and treatment for an additional ninety days. At follow-up 15 (28%) of the 53 randomized to CPAP and 25 (49%) of the 51 randomized to flexible bilevel positive airway pressure (p = 0.03) achieved compliance. Conclusions: A two phase intervention program, first employing standard interventions, followed by a change to flexible bilevel airway pressure, can achieve improved compliance in patients previously noncompliant with continuous positive airway pressure. abstract_id: PUBMED:30604577 Good long-term adherence to continuous positive airway pressure therapy in patients with resistant hypertension and sleep apnea. Although adequate adherence is paramount in achieving the beneficial effects of continuous positive airway pressure therapy in patients with obstructive sleep apnea, long-term adherence and the variables involved in continuous positive airway pressure compliance in patients with resistant hypertension and obstructive sleep apnea are yet unknown. We conducted a prospective, multicentre, observational study in 177 patients recruited from hypertensive units with resistant hypertension confirmed by means of 24-hr blood pressure monitoring (blood pressure ≥ 130 and/or ≥ 80 mmHg, despite taking at least three antihypertensive drugs or < 130/80 mmHg with > 3 drugs) and obstructive sleep apnea (apnea-hypopnea index ≥ 5 in a respiratory polygraph) who were prescribed continuous positive airway pressure treatment. Good adherence was defined as an average cumulative continuous positive airway pressure use of ≥ 4 hr per night at the end of the follow-up. A multivariate Cox regression analysis was performed to identify independent predictors of continuous positive airway pressure adherence. Patients were followed for a median of 57.6 (42-72) months after initiating continuous positive airway pressure therapy. At the end of the follow-up, the median continuous positive airway pressure use was 5.7 (inter-quartile range 3.9-6.6) hr per night, and 132 patients (74.5%) showed good continuous positive airway pressure adherence. The only baseline variable associated with poor adherence was the presence of previous stroke (hazard ratio 4.00, 95% confidence interval 1.92-8.31). Adequate adherence at 1 month also predicted good adherence at the end of the follow-up (hazard ratio 14.4, 95% confidence interval 4.94-56). Both variables also predicted adherence at a threshold of 6 hr per night. 
Our results show that good continuous positive airway pressure adherence is an achievable and feasible goal in patients with resistant hypertension and obstructive sleep apnea. Previous stroke and short-term adherence predicted long-term adherence. abstract_id: PUBMED:16052419 Positive airway pressure therapy of OSA. Positive airway pressure is standard therapy for patients with obstructive sleep apnea. It comes in three basic varieties: (1) continuous positive airway pressure (CPAP), (2) bilevel positive airway pressure (BPAP), and (3) autotitrating positive airway pressure (APAP). When properly titrated, positive airway pressure devices minimize the number of sleep-related breathing disorder events, often producing dramatic results. Sleep-related breathing may completely normalize, specific stages of sleep may selectively rebound due to having been chronically suppressed, sleep microstructure may improve, and the patient may awaken feeling refreshed for the first time in years. The specific indications and criteria for recommending APAP, BPAP, and CPAP therapy are reviewed. The titration process is presented in a step-by-step manner and titration grading is explained. Issues surrounding the interface, acceptance, utilization, and side-effects are discussed. Finally, we present an assortment of approaches for troubleshooting clinical problems commonly encountered among patients being treated with positive airway pressure therapy. abstract_id: PUBMED:27720457 Positive Airway Pressure Therapy for Obstructive Sleep Apnea. Positive airway pressure (PAP) is considered first-line therapy for moderate to severe obstructive sleep apnea and may also be considered for mild obstructive sleep apnea, particularly if it is symptomatic or there are concomitant cardiovascular disorders. Continuous PAP is most commonly used. Other modes, such as bilevel airway pressure, autotitrating positive airway pressure, average volume assured pressure support, and adaptive support ventilation, play important roles in the management of sleep-related breathing disorders. This article outlines the indications, description, and comfort features of each mode. Despite the proven efficacy of PAP in treating obstructive sleep apnea syndrome and its sequelae, adherence to therapy is low. Close follow-up of patients for evaluation of adherence to and effectiveness of treatment is important. abstract_id: PUBMED:27472984 The importance of clinical monitoring for compliance with Continuous Positive Airway Pressure. Introduction: Obstructive sleep apnea syndrome is currently a public health problem of great importance. When misdiagnosed or improperly treated, it can lead to serious consequences on patients' quality of life. The gold standard treatment for cases of obstructive sleep apnea syndrome, especially in mild to severe and symptomatic cases, is continuous positive airway pressure therapy. Compliance with continuous positive airway pressure therapy is directly dependent on the active participation of the patient, which can be influenced by several factors. Objective: The objective of this study is to describe the factors related to compliance with continuous positive airway pressure therapy, and to analyze which associated factors directly influence the efficiency of the treatment. Methods: Patients who received continuous positive airway pressure therapy through the Municipal Health Department of the city of Ribeirão Preto were recruited. A structured questionnaire was administered to the patients. 
Compliance with continuous positive airway pressure therapy was assessed by average hours of continuous positive airway pressure therapy usage per night. Patients with good compliance (patients using continuous positive airway pressure therapy ≥4h/night) were compared to those with poor compliance (patients using <4h/night). Results: 138 patients were analyzed: 77 (55.8%) were considered compliant while 61 (44.2%) were non-compliant. The comparison between the two groups showed that regular monitoring by a specialist considerably improved compliance with continuous positive airway pressure therapy (odds ratio, OR=2.62). Conclusion: Compliance with continuous positive airway pressure therapy is related to educational components, which can be enhanced with continuous and individualized care to patients with obstructive sleep apnea syndrome. abstract_id: PUBMED:23409736 Residual sleepiness in sleep apnea patients treated by continuous positive airway pressure. Hypoxic brain damage might explain persistent sleepiness in some continuous positive airway pressure-compliant obstructive sleep apnea patients, a condition called residual excessive sleepiness. Although continuous positive airway pressure may not be fully efficient in treating this symptom, wake-promoting drug prescription in residual excessive sleepiness is no longer allowed by the European Medicines Agency. The aim of this study is to describe residual excessive sleepiness phenotypes in a large prospective sample of patients with obstructive sleep apnea. Residual excessive sleepiness was defined by an Epworth Sleepiness Scale score ≥ 11. Eligible patients from the French National Sleep Registry attending follow-up continuous positive airway pressure visits numbered 1047. Patients using continuous positive airway pressure < 3 h (n = 275), with residual apnea-hypopnea index > 15 h⁻¹ (n = 31) or with major depression were excluded (n = 150). Residual excessive sleepiness prevalence in continuous positive airway pressure-treated obstructive sleep apnea was 13% (18% for those with an initial Epworth Sleepiness Scale score > 11), and significantly decreased with continuous positive airway pressure use (9% in ≥ 6 h night⁻¹ continuous positive airway pressure users, P < 0.005). At the time of diagnosis, patients with residual excessive sleepiness had worse subjective appreciation of their disease (general health scale, Epworth Sleepiness Scale and fatigue score), and complained more frequently of continuous positive airway pressure side-effects. Residual excessive sleepiness prevalence was lower in severe obstructive sleep apnea than in moderate obstructive sleep apnea (11% when AHI > 30 h⁻¹ versus 18% when AHI 15-30, P < 0.005). There was no relationship between residual excessive sleepiness and body mass index, cardiovascular co-morbidities or diabetes. Continuous positive airway pressure improved symptoms in the whole population, but to a lower extent in patients with residual excessive sleepiness (fatigue scale: -5.2 versus -2.7 in residual excessive sleepiness- and residual excessive sleepiness+ patients, respectively, P < 0.001). Residual excessive sleepiness prevalence decreased with continuous positive airway pressure compliance. Hypoxic insult is unlikely to explain residual excessive sleepiness as obstructive sleep apnea severity does not seem to be critical. Residual symptoms are not limited to sleepiness, suggesting a true 'continuous positive airway pressure-resistant syndrome', which may justify treatment by wake-promoting drugs.
abstract_id: PUBMED:25042599 Continuous positive airway pressure therapy for obstructive sleep apnea reduces interictal epileptiform discharges in adults with epilepsy. Obstructive sleep apnea (OSA) is highly prevalent, affecting 25% of men and 10% of women. We recently reported a prevalence of OSA of 30% among 130 adults with epilepsy unselected for sleep disorder complaints, including 16% with moderate-to-severe disease, rates that markedly exceed general population estimates. Treatment of OSA with continuous positive airway pressure (CPAP) therapy or upper airway surgery reduces seizures in many cases. A single study reported a reduction in interictal spike rate with CPAP in 6 patients with OSA. We explored the effect of CPAP therapy on spike rate in 9 adults with epilepsy and OSA. Interictal epileptiform discharges were quantified during a diagnostic polysomnogram (PSG) and a second PSG using therapeutic CPAP. Spike rates were calculated for each recording during wake and sleep stages. Continuous positive airway pressure therapy was associated with significant reductions in median (quartiles) spike rate overall (77.9 [59.7-90.7] %), in wakefulness (38.5 [0.3-55] %), and in sleep (77.7 [54.8-94.7] %) but not in REM sleep. Continuous positive airway pressure therapy also produced a significant improvement in oxygen saturation and arousals. Our work extends a single prior observation demonstrating beneficial effects of CPAP therapy on interictal EEG in patients with epilepsy with comorbid OSA and supports the hypothesis that sleep fragmentation due to OSA contributes to epileptogenicity. abstract_id: PUBMED:27867577 When continuous positive airway pressure (CPAP) fails. Obstructive sleep apnoea (OSA) is increasingly prevalent, particularly in the context of the obesity epidemic, and is associated with a significant social, health and economic impact. The gold standard of treatment for moderate to severe OSA is continuous positive airway pressure (CPAP). However, compliance rates can be low. Methods to improve patient tolerance of CPAP, alongside alternative non-surgical and surgical management strategies, are discussed. All patients who fail CPAP therapy would benefit from formal upper airway evaluation by the otolaryngologist to identify any obvious causes and consider site-specific surgical therapies. Patient selection is integral to ensuring successful outcomes. A multidisciplinary team is needed to manage these patients. Answer: Yes, continuous positive airway pressure (CPAP) therapy can improve the general health status of patients with obstructive sleep apnea (OSA). CPAP therapy is an evidence-based treatment for OSA and has been shown to decrease sympathicotonia and, consequently, blood pressure. It can also improve inflammatory and metabolic parameters, resulting in a decreased cardiovascular risk (PUBMED:25403278). Additionally, CPAP therapy can normalize middle ear pressure in patients with severe OSA, which may otherwise increase the (negative) middle ear pressure (PUBMED:32247766). Furthermore, CPAP therapy has been associated with significant reductions in interictal epileptiform discharges in adults with epilepsy and OSA, suggesting a beneficial effect on neurological health (PUBMED:25042599). Overall, CPAP therapy is considered the first-line therapy for moderate to severe OSA and can also be considered for mild OSA, particularly if it is symptomatic or there are concomitant cardiovascular disorders (PUBMED:27720457).
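The adherence criterion that recurs in these abstracts is simple arithmetic, shown here as a minimal, hypothetical Python sketch (the function name and example data are illustrative, not taken from any cited study); it applies the ≥4 h/night average-use threshold for "good adherence" used in PUBMED:30604577 and PUBMED:27472984.

def cpap_adherence(nightly_hours, threshold=4.0):
    # Mean hours of CPAP use per night over the monitored period.
    mean_use = sum(nightly_hours) / len(nightly_hours)
    # 'Good adherence' = average use at or above the threshold.
    return mean_use >= threshold, mean_use

# One hypothetical week of usage data, including a skipped night.
adherent, avg = cpap_adherence([6.2, 5.8, 0.0, 7.1, 4.5, 6.0, 5.5])
print(adherent, round(avg, 1))  # True 5.0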
Instruction: Can routine information improve case finding of depression among 65 to 74 year olds in primary care? Abstracts: abstract_id: PUBMED:12356706 Can routine information improve case finding of depression among 65 to 74 year olds in primary care? Background: The identification of groups with higher prevalence of major depression allows for more cost-effective investigation and treatment. Objectives: The aim of this study was to examine whether the identification of patient groups with a higher prevalence of depression through information routinely available in primary care can increase the efficiency of active case finding. Methods: A cross-sectional two-stage survey was carried out of 2633 community residents between the age of 65 and 74 consisting of a structured interview with concurrent audit of general practice records. The 15-item geriatric depression scale (GDS-15) was used as a screening tool for depression. Individuals scoring > or =4 on the GDS-15 were offered a clinical interview using the Schedules for Clinical Assessment in Neuropsychiatry (SCAN). Results: There were 1912 (72.6%) participants in the first stage interview, of whom 134 had GDS-15 scores > or =4 and were invited to take part in the second stage. Of these, 90 participants (67.2%) agreed to the clinical interview. To detect one case of major depression, the number needing to be assessed was 63 for the whole sample. The number needing to be assessed was lower among those receiving antidepressants (nine), frequent GP attenders (12) and those living alone (32). Conclusion: Although depression is more common among various subgroups, our data show that the investigation of each high prevalence subgroup would detect only a minority of all cases in the total population. It is not possible to rely on active case finding in high prevalence subgroups for a high detection rate of depression in a practice population. abstract_id: PUBMED:31192278 Identification of Frailty in Primary Care: Feasibility and Acceptability of Recommended Case Finding Tools Within a Primary Care Integrated Seniors' Program. Background: Case finding for frailty is recommended as part of routine clinical practice. We aimed to test feasibility and acceptability of three recommended case finding tools in primary care as part of an integrated seniors' program. Method: Program of Research to Integrate Services for the Maintenance of Autonomy-7 (PRISMA-7), 4-m walk test, and electronic frailty index (eFI) were used as frailty case finding tools for a target population of community-dwelling seniors ≥65 years of age enrolled in a seniors' program within an academic primary care clinic in Alberta, Canada. Feasibility was measured by percent completion rate and requirements for training/equipment/space/time, and acceptability by health care providers was measured using focus groups. Results: Eighty-five patients underwent case finding and 16 health care providers participated in the focus groups. Completion rate for PRISMA-7, 4-m walk test, and eFI was 97.6%, 93%, and 100%, respectively. No special training or equipment was required for PRISMA-7; brief training, equipment, and space were required for 4-m walk test. Both tools took less than 5 min to complete. Despite eFI requiring 10 to 20 min/patient chart, providers found it less intrusive. Conclusion: Despite feasibility of the tests, acceptance was higher for tools with minimal clinic interruption, low requirements for resources, and those with added benefit. 
abstract_id: PUBMED:37851079 An economic evaluation of universal and targeted case-finding strategies for identifying antenatal depression: a model-based analysis comparing common case-finding instruments. Half of women with depression in the perinatal period are not identified in routine care, and missed cases reflect inequalities in other areas of maternity care. Case finding (screening) for depression in pregnant women may be a cost-effective strategy to improve identification, and targeted case finding directs finite resources towards the greatest need. We compared the cost-effectiveness of three case-finding strategies: no case finding, universal (all pregnant women), and targeted (only pregnant women with risk factors for antenatal depression, i.e. history of anxiety/depression, age < 20 years, and adverse life events). A decision tree model was developed to represent case finding (at around 20 weeks gestation) and subsequent treatment for antenatal depression (up to 40 weeks gestation). Costs include case finding and treatment. Health benefits are measured as quality-adjusted life years (QALYs). The sensitivity and specificity of case-finding instruments and prevalence and severity of antenatal depression were estimated from a cohort study of pregnant women. Other model parameters were derived from published literature and expert consultation. The most cost-effective case-finding strategy was a two-stage strategy comprising the Whooley questions followed by the PHQ-9. The mean costs were £52 (universal), £61 (no case finding), and £62 (targeted case finding). Both case-finding strategies improve health compared with no case finding. Universal case finding is cost-saving. Costs associated with targeted case finding are similar to no case finding, with greater health gains, although targeted case finding is not cost-effective compared with universal case finding. Universal case finding for antenatal depression is cost-saving compared to no case finding and more cost-effective than targeted case finding. abstract_id: PUBMED:31285359 Model-based evaluation of the long-term cost-effectiveness of systematic case-finding for COPD in primary care. Introduction: 'One-off' systematic case-finding for COPD using a respiratory screening questionnaire is more effective and cost-effective than routine care at identifying new cases. However, it is not known whether early diagnosis and treatment is beneficial in the longer term. We estimated the long-term cost-effectiveness of a regular case-finding programme in primary care. Methods: A Markov decision analytic model was developed to compare the cost-effectiveness of a 3-yearly systematic case-finding programme targeted to ever smokers aged ≥50 years with the current routine diagnostic process in UK primary care. Patient-level data on case-finding pathways was obtained from a large randomised controlled trial. Information on the natural history of COPD and treatment effects was obtained from a linked COPD cohort, UK primary care database and published literature. The discounted lifetime cost per quality-adjusted life-year (QALY) gained was calculated from a health service perspective. Results: The incremental cost-effectiveness ratio of systematic case-finding versus current care was £16 596 per additional QALY gained, with a 78% probability of cost-effectiveness at a £20 000 per QALY willingness-to-pay threshold. The base case result was robust to multiple one-way sensitivity analyses. 
The main drivers were response rate to the initial screening questionnaire and attendance rate for the confirmatory spirometry test. Discussion: Regular systematic case-finding for COPD using a screening questionnaire in primary care is likely to be cost-effective in the long-term despite uncertainties in treatment effectiveness. Further knowledge of the natural history of case-found patients and the effectiveness of their management will improve confidence to implement such an approach. abstract_id: PUBMED:21542671 Case-finding and risk-group screening for depression in primary care. OBJECTIVE. Central health organizations suggest routine screening for depression in high-risk categories of primary care patients. This study compares the effectiveness of high-risk screening versus case-finding in identifying depression in primary care. DESIGN. Using an observational design, participating GPs included patients from 13 predefined risk groups and/or suspected of being depressed. Patients were assessed by the Major Depression Inventory (MDI) and ICD-10 criteria. Setting. Thirty-seven primary care practices in Mainland Denmark. Main outcome measures. Prevalence of depression, diagnostic agreement, effectiveness of screening methods, risk groups requiring special attention. RESULTS. A total of 37 (8.4%) of 440 invited GP practices participated. We found high-risk prevalence of depression in 672 patients for the following traits: (1) previous history of depression, (2) familial predisposition to depression, (3) chronic pain, (4) other mental disorders, and (5) refugee or immigrant. In the total sample, GPs demonstrated a depression diagnostic sensitivity of 87% and a specificity of 67% using a case-finding strategy. GP diagnoses of depression agreed well with the MDI (AUC values of 0.91-0.99). The potential added value of high-risk screening was 4.6% (31/672). Patients with other mental disorders were at increased risk of having an unrecognized depression (PR 3.15, 95% CI 1.91-5.20). If patients with other mental disorders were routinely tested, then 42% more depressed patients (14/31) would be recognized. CONCLUSIONS. A broad case-finding approach including a short validation test can help GPs identify depressed patients, particularly by including patients with other mental disorders in this strategy. This exploratory study cannot support the screening strategy proposed by central health organizations. abstract_id: PUBMED:10840264 Randomized trial of case-finding for depression in elderly primary care patients. Objective: To determine the effect of case-finding for depression on frequency of depression diagnoses, prescriptions for antidepressant medications, prevalence of depression, and health care utilization during 2 years of follow-up in elderly primary care patients. Design: Randomized controlled trial. Setting: Thirteen primary care medical clinics at the Kaiser Permanente Medical Center, an HMO in Oakland, Calif, were randomly assigned to intervention conditions (7 clinics) or control conditions (6 clinics). Participants: A total of 2,346 patients aged 65 years or older who were attending appointments at these clinics and completed the 15-item Geriatric Depression Scale (GDS). GDS scores of 6 or more were considered suggestive of depression. Interventions: Primary care physicians in the intervention clinics were notified of their patients' GDS scores. 
We suggested that participants with severe depressive symptoms (GDS score >/= 11) be referred to the Psychiatry Department and participants with mild to moderate depressive symptoms (GDS score of 6 -10) be evaluated and treated by the primary care physician. Intervention group participants with GDS scores suggestive of depression were also offered a series of organized educational group sessions on coping with depression led by a psychiatric nurse. Primary care physicians in the control clinics were not notified of their patients' GDS scores or advised of the availability of the patient education program (usual care). Participants were followed for 2 years. Measurements And Main Results: Physician diagnosis of depression, prescriptions for antidepressant medications, prevalence of depression as measured by the GDS at 2-year follow-up, and health care utilization were determined. A total of 331 participants (14%) had GDS scores suggestive of depression (GDS >/= 6) at baseline, including 162 in the intervention group and 169 in the control group. During the 2-year follow-up period, 56 (35%) of the intervention participants and 58 (34%) of the control participants received a physician diagnosis of depression (odds ratio [OR], 1.0; 95% confidence interval [CI], 0.6 to 1.6; P =.96). Prescriptions for antidepressants were received by 59 (36%) of the intervention participants and 72 (43%) of the control participants (OR, 0.8; 95% CI, 0.5 to 1.2; P =.3). Two-year follow-up GDS scores were available for 206 participants (69% of survivors): at that time, 41 (42%) of the 97 intervention participants and 54 (50%) of the 109 control participants had GDS scores suggestive of depression (OR, 0.7; 95% CI, 0.4 to 1.3; P =.3). Comparing participants in the intervention and control groups, there were no significant differences in mean GDS change scores (-2.4 +/- SD 3.7 vs -2.1 SD +/- 3.6; P =.5) at the 2-year follow-up, nor were there significant differences in mean number of clinic visits (1.8 +/- SD 3.1 vs 1.6 +/- SD 2.8; P =.5) or mean number of hospitalizations (1.1 +/- SD 1.6 vs 1.0 +/- SD 1.4; P =.8) during the 2-year period. In participants with initial GDS scores > 11, there was a mean change in GDS score of -5.6 +/- SD 3.9 for intervention participants (n = 13) and -3.4 +/- SD 4.5 for control participants (n = 21). Adjusting for differences in baseline characteristics between groups did not affect results. Conclusions: We were unable to demonstrate any benefit from case-finding for depression during 2 years of follow-up in elderly primary care patients. Studies are needed to determine whether case-finding combined with more intensive patient education and follow-up will improve outcomes of primary care patients with depression. abstract_id: PUBMED:26246249 An intervention of active TB case finding among smokers attending routine primary care facilities in China: an exploratory study. Background: Smoking is an important risk factor of TB. However, no studies have been conducted to identify TB cases from smokers. We assessed the process and initial impact of active case finding among smokers at primary care facilities in a setting with high smoking rates and TB burden. Methods: A prospective quasi-experimental study was conducted in para-urban communities in Yunnan China between September 2013 and June 2014. Smokers attending primary care facilities in the intervention group were prescribed chest X-rays if they had diabetes or TB symptoms, or were elders or close contacts of TB patients. 
Those with X-rays suggestive of TB were referred to TB dispensaries for diagnosis. Passive case finding was practiced in the control group. Results: In the intervention group, we screened 471 smokers with high risks of TB, of whom 73% took chest X-ray examinations. Eight TB cases were diagnosed, reflecting a 1.7% yield rate of all screened smokers. Smokers with diabetes (OR 6.003, 95% CI 1.057-34.075) were more likely to have TB compared with those without. In total, the intervention group reported significantly higher TB notification rate compared with the control group (38.6 vs 22.9 per 100 000, p=0.016). Conclusions: Active case finding among smokers with high risks of TB was feasible and contributed to improved notification rates. abstract_id: PUBMED:9371051 Can case-finding instruments be used to improve physician detection of depression in primary care? Objective: To explore the issue of diagnostic specificity for major depression in the primary care setting by examining the relative accuracy of 3 methods to detect major depression in primary care. Design: Comparison of performance characteristics of 3 case-finding methods for depression (ie, the Center for Epidemiologic Studies-Depression scale [CES-D], unaided physician detection, and "augmented" physician detection by use of a case-finding instrument), with the standard criterion being the Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, Third Edition, Revised (DSM-III-R). Setting: The offices of 50 family physicians from private and academic practice in southeastern Michigan. Patients: Adult patients (N = 1580) who presented for routine care, from which a weighted random sample of 425 patients completed the Structured Clinical Interview for DSM-III-R. Main Outcome Measures: Sensitivity, specificity, positive predictive value, and positive likelihood ratio for each case-finding method. Results: Major depression was present in 13.4% of the sample. Both the CES-D and unaided physician detection methods performed poorly in identifying patients who met DSM-III-R criteria for major depressive disorder. The CES-D had high sensitivity but low specificity at standard and high cut points, resulting, respectively, in low positive predictive values (0.307 and 0.385) and low positive likelihood ratios (2.9 and 4.0). Unaided physician detection showed lower sensitivity, higher specificity, and a slightly higher positive predictive value (0.45) and positive likelihood ratio (4.9). Raising of the CES-D threshold for a positive test did not enhance the detection of depression. Augmented physician detection with CES-D scores resulted in minimal improvement. Although the positive predictive value and positive likelihood ratio increased to 0.50 and 6.1, respectively, using the most stringent case-finding definition (ie, physician identification plus the CES-D score [score > or = 22]), the proportion of depressed patients who were correctly identified decreased to 26.9%. Conclusions: Neither high scores on the CES-D nor unaided physician detection accurately identified patients with major depression who were seen in primary care settings, while the supplementation of physician detection with CES-D scores had a minimal net effect on the accuracy of detection. The data do not support the routine use of the CES-D as a primary care screening instrument for depression, either as a stand-alone measure or as a supplement to physician detection. 
abstract_id: PUBMED:37142002 An economic evaluation of targeted case-finding strategies for identifying postnatal depression: A model-based analysis comparing common case-finding instruments. Background: Half of women with postnatal depression (PND) are not identified in routine care. We aimed to estimate the cost-effectiveness of PND case-finding in women with risk factors for PND. Methods: A decision tree was developed to represent the one-year costs and health outcomes associated with case-finding and treatment for PND. The sensitivity and specificity of case-finding instruments, and prevalence and severity of PND, for women with ≥1 PND risk factor were estimated from a cohort of postnatal women. Risk factors were history of anxiety/depression, age < 20 years, and adverse life events. Other model parameters were derived from published literature and expert consultation. Case-finding for high-risk women only was compared with no case-finding and universal case-finding. Results: More than half of the cohort had one or more PND risk factor (57.8 %; 95 % CI 52.7 %-62.7 %). The most cost-effective case-finding strategy was the Edinburgh Postnatal Depression Scale with a cut-off of ≥10 (EPDS-10). Among high-risk women, there is a high probability that EPDS-10 case-finding for PND is cost-effective compared to no case-finding (78.5 % at a threshold of £20,000/QALY), with an ICER of £8146/QALY gained. Universal case-finding is even more cost-effective at £2945/QALY gained (versus no case-finding). There is a greater health improvement with universal rather than targeted case-finding. Limitations: The model includes costs and health benefits for mothers in the first year postpartum; the broader (e.g. families, societal) and long-term impacts are also important. Conclusions: Universal PND case-finding is more cost-effective than targeted case-finding, which itself is more cost-effective than not case-finding. abstract_id: PUBMED:12100833 Identifying depression in primary care: a literature synthesis of case-finding instruments. We evaluated the usefulness of case-finding instruments for identifying patients with major depression or dysthymia in primary care settings using English language literature from Medline, a specialized trials registry and bibliographies of selected papers. Studies that were done in primary care settings with unselected patients and that compared case-finding instruments with accepted diagnostic criterion standards for major depression were selected. A total of 16 case-finding instruments were assessed in 38 studies. More than 32,000 patients received screening with a case-finding instrument; approximately 12,900 of these received criterion standard assessment. Case-finding instruments ranged in length from 1 to 30 questions. Average administration times ranged from less than 2 min to 6 min. Median sensitivity for major depression was 85% (range 50% to 97%); median specificity was 74% (range 51% to 98%). No significant differences between instruments were found. However, for individual instruments, estimates of sensitivity and specificity varied significantly between studies. For the combined diagnoses of major depression or dysthymia, overall sensitivity was 79% (CI, 74% to 83%) and overall specificity 75% (CI, 70% to 81%). Stratified analyses showed no significant effects on overall instrument performance for study methodology, criterion standard choice, or patient characteristics.
We found that multiple instruments with reasonable operating characteristics are available to help primary care clinicians identify patients with major depression. Because operating characteristics of these instruments are similar, selection of a particular instrument should depend on issues such as feasibility, administration and scoring times, and the instruments' ability to serve additional purposes, such as monitoring severity or response to therapy. Answer: Based on the provided abstracts, routine information may not significantly improve case finding of depression among 65 to 74 year olds in primary care. A study examining the efficiency of active case finding in this age group found that while depression is more common among various subgroups (such as those receiving antidepressants, frequent GP attenders, and those living alone), investigating each high prevalence subgroup would detect only a minority of all cases in the total population. The authors concluded that it is not possible to rely on active case finding in high prevalence subgroups for a high detection rate of depression in a practice population (PUBMED:12356706). This suggests that routine information, while potentially helpful in identifying certain high-risk groups, may not be sufficient to significantly improve the overall detection rate of depression among the elderly in primary care settings.
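The odds ratios and confidence intervals quoted in the case-finding trial above (e.g., OR 1.0; 95% CI 0.6 to 1.6 for physician diagnosis of depression) follow directly from the reported counts. A minimal sketch in Python, assuming the standard Wald (log-odds) interval; the function name is ours, not the study's:

    import math

    def odds_ratio_wald(a, b, c, d):
        # 2x2 table: a/b = events/non-events in group 1, c/d = events/non-events in group 2
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
        lo = math.exp(math.log(or_) - 1.96 * se)
        hi = math.exp(math.log(or_) + 1.96 * se)
        return or_, lo, hi

    # Physician diagnosis of depression: 56 of 162 intervention vs 58 of 169 control
    print(odds_ratio_wald(56, 162 - 56, 58, 169 - 58))
    # -> approximately (1.01, 0.64, 1.59), matching the reported OR 1.0 (95% CI 0.6 to 1.6)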
Instruction: Hand hygiene compliance by physicians: marked heterogeneity due to local culture? Abstracts: abstract_id: PUBMED:18834749 Hand hygiene compliance by physicians: marked heterogeneity due to local culture? Background: Physician compliance with hand hygiene guidelines often has been reported as insufficient. Methods: The study was conducted in 2 hospitals (Hadassah Ein Kerem [EK] and Mt Scopus [MS]) in Jerusalem, Israel. Covert observations were conducted during morning rounds by trained observers. The data were recorded as the percentage of times that hand hygiene was applied out of the total contacts with patients. After the observational step, an intervention (providing an alcohol gel and encouraging its use) was instituted in several wards. Results: Physicians' compliance with hand hygiene averaged 77% at MS and 33% at EK (P < .001), and was characterized by a marked additional heterogeneity among wards. Rates of adherence ranged from as low as 4% in a gynecology ward to as high as 96% in a neonatal unit. Availability of a handwashing basin in the room and seniority status of the physician were associated with higher compliance rates but explained only a small part of the variation. Compliance improved significantly in 2 wards exposed to the intervention. Conclusion: The remarkable heterogeneity in physicians' hand hygiene compliance among sites within the same institution is consistent with an important role of the local ward culture. abstract_id: PUBMED:22686125 An observational study of hand hygiene compliance in paediatric wards. Aims And Objectives: To measure healthcare workers', children's and visitors' hand hygiene compliance in a paediatric oncology ward and a paediatric respiratory ward in an English hospital. Background: Children are especially vulnerable to healthcare-associated infections, yet few studies have reported on hand hygiene compliance in paediatric clinical areas. Design: This was an observational study. Method: We measured hand hygiene compliance over an eight-hour period in two hospital wards using the 'five moments of hand hygiene' observation tool. We monitored a total of 407 hand hygiene opportunities. Results: Overall compliance was 74% for healthcare workers (n = 315 opportunities) and 23% for children and visitors (n = 92 opportunities). Compliance was 84% for allied health professionals, 81% for doctors, 75% for nurses and 73% for ancillary and other staff. Hand hygiene compliance varied depending on which of the five moments of hygiene healthcare workers were undertaking (p < 0.001), with compliance before child contact 90% (140/155); after child contact 78% (89/114); after body fluid exposure 75% (3/4); and after surroundings contact 36% (15/42). For healthcare workers and visitors, there was no evidence of an association between time of day and their hand hygiene compliance, and for visitors to the oncology ward, hand hygiene compliance was higher (p < 0.05). Conclusion: Owing to the nature of the clinical environments, we are unable to draw conclusions about children's hand hygiene compliance; however, visitors' compliance was low. Among healthcare workers, levels of compliance were higher compared with previously reported estimates. Relevance To Clinical Practice: Visitors had the lowest level of compliance, yet owing to the nature of the clinical environments, nearly a quarter of care is delivered by them rather than healthcare workers, and so this offers opportunities for specific future interventions aimed at families and carers.
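The five-moments comparison in PUBMED:22686125 (p < 0.001) can be checked against the compliant/opportunity counts reported above. A minimal sketch, assuming a chi-square test of independence (the abstract does not name the test that was used):

    from scipy.stats import chi2_contingency

    # Moments: before contact, after contact, after body fluid exposure, after surroundings contact
    compliant = [140, 89, 3, 15]
    opportunities = [155, 114, 4, 42]
    non_compliant = [n - c for c, n in zip(compliant, opportunities)]

    chi2, p, dof, _ = chi2_contingency([compliant, non_compliant])
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")  # p falls far below 0.001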
abstract_id: PUBMED:31670442 Hand hygiene compliance in intensive care units: An observational study. Aim: Health care-associated infections along with antibiotic resistance are leading risks for patient safety in intensive care units. Hygienic hand disinfection is still regarded as the most effective, simplest, and most cost-effective measure to reduce health care-associated infections. To improve hand hygiene compliance and to prevent health care-associated infections, interventions of the "German Clean Hands Campaign" were implemented in a university hospital. Methods: Observational single-center study using direct observation and feedback. Hand hygiene performance was assessed in 12 intensive care units between 2013 and 2017. Linear mixed model regression analyses were used to estimate the compliance trend over time. Results: In total, 10 315 "my five moments for hand hygiene" were observed. The mean hand hygiene compliance rates increased from 75.1% to 88.6% during the study period, yielding an estimated increase of about 4.5% per year. However, there are differences in compliance between occupational groups (physicians: between 61.2% and 77.1%; nurses: between 80.2% and 90.9%; others: between 61.3% and 82.4%). Conclusions: After implementation of the "German Clean Hands Campaign" interventions, an overall significant improvement of hand hygiene was detected. Compliance measurements helped to raise awareness among health care professional groups. abstract_id: PUBMED:36089071 Hand hygiene compliance by direct observation in physicians and nurses: a systematic review and meta-analysis. Background: Direct observation of hand hygiene compliance is the gold standard despite limitations and potential for bias. Previous literature highlights poorer hand hygiene compliance among physicians than nurses and suggests that covert monitoring may give better compliance estimates than overt monitoring. Aim: To explore differences in compliance between physicians and nurses further, and to determine whether compliance estimates differed when observations were covert rather than overt. Methods: A systematic search of databases PubMed, Embase, CENTRAL and CINAHL was performed. Experimental or observational studies in hospital settings in high-income countries published in English from 2010 onwards were included if estimates for both physicians and nurses using direct observation were reported. The search yielded 4814 studies, of which 105 were included. Findings: The weighted pooled compliance rate for nurses was 52% (95% CI: 47-57%) and for doctors was 45% (95% CI: 40-49%). Heterogeneity was considerable (I² = 99%). The majority of studies were at moderate or high risk of bias. Random-effects meta-analysis of low risk of bias studies suggests higher compliance for nurses than physicians for both overt (difference of 7%; 95% CI for the difference: 0.8-13.5; P = 0.027) and covert (difference of 7%; 95% CI: 3-11; P = 0.0002) observation. Considerable heterogeneity was found in all analyses. Conclusion: Wide variability in compliance estimates and differences in the methodological quality of hand hygiene studies were identified. Further research with meta-regression should explore sources of heterogeneity and improve the conduct and reporting of hand hygiene studies. abstract_id: PUBMED:26088769 Effect of hand sanitizer location on hand hygiene compliance. Background: Hand hygiene is the most important intervention to prevent infection in hospitals.
Health care workers should clean their hands at least before and after contact with patients. Hand sanitizer dispensers are important to support hand hygiene because they can be made available throughout hospital units. The aim of this study was to determine whether the usability of sanitizer dispensers correlates with compliance of staff in using the sanitizer in a hospital. This study took place in a Midwest, 404-bed, private, nonprofit community hospital with 15 inpatient care units in addition to several ambulatory units. Methods: The usability and standardization of sanitizers in 12 participating inpatient units were evaluated. The hospital measured compliance of staff with hand hygiene as part of their quality improvement program. Data from 2010-2012 were analyzed to measure the relationship between compliance and usability using mixed-effects logistic regression models. Results: The total usability score (P = .0046), visibility (P = .003), and accessibility of the sanitizer on entrance to the patient room (P = .00055) were statistically associated with higher observed compliance rates. Standardization alone showed no significant impact on observed compliance (P = .37). Conclusion: Hand hygiene compliance can be influenced by visibility and accessibility of dispensers. The sanitizer location should be part of multifaceted interventions to improve hand hygiene. abstract_id: PUBMED:33425018 Improving children's and their visitors' hand hygiene compliance. Background: Numerous interventions have tried to improve healthcare workers' hand hygiene compliance. However, little attention has been paid to children's and their visitors' compliance. Aim: To test whether interactive educational interventions increase children's and visitors' compliance with hand hygiene. Methods: This was a cluster randomised study of hand hygiene compliance before and after the introduction of educational interventions. Observations were compared for different moments of hygiene and times of the day. Qualitative data in the form of questionnaire-based structured interviews were obtained. Findings: Hand hygiene compliance increased by 24.4% (P < 0.001) following the educational interventions, with children's compliance reaching 40.8% and visitors' being 50.8%. Compliance varied depending on which of the five moments of hygiene was observed (P < 0.001), with the highest compliance being 'after body fluid exposure' (72.7%). Responses from questionnaires showed educational interventions raised awareness of the importance of hand hygiene (69%, 57%) compared to those who had not experienced the educational intervention (50%). Conclusion: Educational interventions may result in a significant increase in children's and visitors' hand hygiene (P < 0.001). abstract_id: PUBMED:29269167 The development of hand hygiene compliance imperatives in an emergency department. Background: Monitoring results showing poor hand hygiene compliance in a major, busy emergency department prompted a quality improvement initiative to improve hand hygiene compliance. Purpose: To identify, remove, and reduce barriers to hand hygiene compliance in an emergency department. Methods: A barrier identification tool was used to identify key barriers and opportunities associated with hand hygiene compliance. Hand hygiene imperatives were developed and agreed on with clinicians, and a framework for monitoring and improving hand hygiene compliance was developed. 
Results: Barriers to compliance were ambiguity about when to clean hands; the pace and urgency of work in some areas of the department, which left little time for hand hygiene; and environmental and operational issues. Sore hands were a problem for some staff. Expectations of compliance were agreed on with staff, and changes were made to remove barriers. A monitoring tool was designed to monitor progress. Gradual improvement occurred in all areas, except in emergency situations, which require further improvement work. Conclusions: The context of care and barriers to compliance should be reflected in hand hygiene expectations and monitoring. In the emergency department, the requirement to deliver urgent life-saving care can supersede conventional hand hygiene expectations. abstract_id: PUBMED:31263732 Does Physician Leadership Influence Followers' Hand Hygiene Compliance? The aim of this study was to determine factors influencing the hand hygiene compliance of a physician. We found a strong correlation between a leader's (staff member's or fellow's) and a follower's (resident's) hand hygiene compliance. Followers' appropriate hand hygiene compliance was significantly associated with the compliance of the leader (P = .01). abstract_id: PUBMED:24080083 Impact of an educational intervention upon the hand hygiene compliance of children. Background: Hand hygiene compliance is the single most effective way to reduce healthcare-associated infections. Children are notoriously vulnerable to infection and can act as conduits of transmission. Based on these observations, the authors formulated the hypothesis that behavioural change which improved children's hand hygiene compliance would decrease the spread of infectious diseases. Aim: To create an educational intervention to induce long-term behavioural change culminating in increased hand hygiene compliance of children, and thus a decrease in the rate of infections. Methods: Focus groups conducted during interactive teaching sessions identified what children felt would help them to increase their hand hygiene compliance. This informed the design of an educational device that was subsequently trialled to measure its effectiveness in increasing hand hygiene compliance. Initial developmental stages were conducted in two schools in the East Midlands with study participants aged 5-8 years; the device was subsequently used in a healthcare setting to assess deployment flexibility. Findings: Focus groups indicated that children enjoyed interactive learning, developed knowledge about cross-transmission of infections, and became motivated to encourage others to improve hand hygiene compliance. Microbiological swabbing verified the presence of pathogens on children's hands and environmental surfaces that could serve as reservoirs of infection, and questionnaires indicated an increase in handwashing following the intervention. Conclusion: Educational interventions have the potential to increase hand hygiene and reduce the transmission of infections. abstract_id: PUBMED:23910401 Baseline evaluation of hand hygiene compliance in three major hospitals, Isfahan, Iran. Hand hygiene is the mainstay of nosocomial infection prevention. This study was a baseline survey to assess hand hygiene compliance of healthcare workers by direct observation in three major hospitals of Isfahan, Iran. The use of different hand hygiene products was also evaluated. In 3078 potential opportunities, hand hygiene products were available on 2653 occasions (86.2%).
Overall compliance was 6.4% (teaching hospital: 7.4%; public hospital: 6.2%; private hospital: 1.4%). Nurses (8.4%) had the highest rates of compliance. Poor hand hygiene compliance in Isfahan hospitals necessitates urgent interventions to improve both hospital infrastructure and staff knowledge. Answer: The study of hand hygiene compliance among physicians in two hospitals in Jerusalem, Israel, revealed significant heterogeneity that could be attributed to local culture. At the Hadassah Ein Kerem (EK) and Mt Scopus (MS) hospitals, physician compliance with hand hygiene guidelines varied widely, with an average compliance of 77% at MS and only 33% at EK. This variation was further pronounced among different wards within the same institution, ranging from as low as 4% in a gynecology ward to as high as 96% in a neonatal unit. Factors such as the availability of a handwashing basin in the room and the seniority status of the physician were associated with higher compliance rates, but these factors only explained a small part of the variation. The study concluded that the marked heterogeneity in physicians' hand hygiene compliance among sites within the same institution is consistent with an important role of the local ward culture (PUBMED:18834749). This finding is supported by other studies that have observed variability in hand hygiene compliance among healthcare workers, including physicians. For instance, a systematic review and meta-analysis found that the weighted pooled compliance rate for nurses was higher than that for doctors, with considerable heterogeneity across studies (PUBMED:36089071). Another study in intensive care units reported differences in compliance between occupational groups, with physicians showing lower compliance rates compared to nurses (PUBMED:31670442). These studies suggest that local culture, along with other factors, may significantly influence hand hygiene compliance among physicians.
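The "weighted pooled compliance rate" in PUBMED:36089071 is a meta-analytic summary across studies. A minimal sketch of fixed-effect inverse-variance pooling of proportions on the logit scale, using hypothetical per-study counts (the review's individual study data are not given here); the random-effects model actually used in the review would additionally estimate between-study variance:

    import math

    def pooled_proportion(studies):
        # studies: list of (events, n); inverse-variance weighting on the logit scale
        num = den = 0.0
        for events, n in studies:
            p = events / n
            logit = math.log(p / (1 - p))
            weight = n * p * (1 - p)  # approximately 1 / var(logit p)
            num += weight * logit
            den += weight
        return 1 / (1 + math.exp(-num / den))  # back-transform to a proportion

    # Hypothetical studies: (compliant moments observed, total opportunities)
    nurse_studies = [(310, 620), (95, 210), (440, 800)]
    print(f"pooled compliance = {pooled_proportion(nurse_studies):.1%}")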
Instruction: Can 18F-FDG PET improve the evaluation of suspicious breast lesions on MRI? Abstracts: abstract_id: PUBMED:24931922 Can 18F-FDG PET improve the evaluation of suspicious breast lesions on MRI? Objective: To evaluate the impact of adding 18F-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) in the evaluation of suspicious breast lesions on magnetic resonance imaging (MRI). Methods: Sixty patients with suspicious breast lesions on MRI were selected to perform a PET-CT in prone position, dedicated to the evaluation of the breasts. The areas with increased 18F-FDG concentration relative to normal parenchyma were considered positive on PET-CT. Fusion of PET and MRI images (PET-MRI) was performed on a dedicated workstation to better locate corresponding lesions, and its findings were compared with histological results. Results: 76 lesions were evaluated, including 64 mass lesions (84.2%) and 12 non-mass lesions (15.8%). Lesions' mean diameter on MRI was 29.6 ± 19.2 mm (range 6-94 mm). PET-CT showed increased metabolic activity in 57 lesions (75.0%), with mean maximum SUV of 5.7 ± 5.0 (range 0.8-23.1). On histopathology, there were 17 (22.4%) benign and 59 (77.6%) malignant lesions. Considering all lesions, PET-MRI fusion provided 89.8% sensitivity, 76.5% specificity and 86.8% accuracy. Considering only mass lesions larger than 10 mm, PET-MRI fusion provided 95.8% sensitivity, 83.3% specificity and 93.3% accuracy. Conclusion: The inclusion of 18F-FDG PET in the evaluation of suspicious breast lesions on MRI helped to differentiate benign from malignant breast lesions, especially for mass lesions with a diameter larger than 10 mm. abstract_id: PUBMED:30728757 Sequential [18F]FDG-[18F]FMISO PET and Multiparametric MRI at 3T for Insights into Breast Cancer Heterogeneity and Correlation with Patient Outcomes: First Clinical Experience. The aim of this study was to assess whether sequential multiparametric [18F]fluoro-deoxy-glucose ([18F]FDG)/[18F]fluoromisonidazole ([18F]FMISO) PET-MRI in breast cancer patients is possible, provides information on tumor heterogeneity, and correlates with prognostic indicators. In this IRB-approved, prospective pilot study, nine patients with ten suspicious breast lesions (BIRADS 5) and subsequent breast cancer diagnosis underwent sequential combined [18F]FDG/[18F]FMISO PET-MRI. [18F]FDG was used to assess increased glycolysis, while [18F]FMISO was used to detect tumor hypoxia. MRI protocol included dynamic breast contrast-enhanced MRI (DCE-MRI) and diffusion-weighted imaging (DWI). Qualitative and quantitative multiparametric imaging findings were compared with pathological features (grading, proliferation, and receptor status) and clinical endpoints (recurrence/metastases and disease-specific death) using multiple correlation analysis. Histopathology was the standard of reference. There were several intermediate to strong correlations identified between quantitative bioimaging markers, histopathologic tumor characteristics, and clinical endpoints. Based on correlation analysis, multiparametric criteria provided independent information. The prognostic indicators proliferation rate, death, and presence/development of recurrence/metastasis correlated positively, whereas the prognostic indicator estrogen receptor status correlated negatively with PET parameters.
The strongest correlations were found between disease-specific death and [18F]FDGmean (R=0.83, p < 0.01) and between the presence/development of metastasis and [18F]FDGmax (R=0.79, p < 0.01), respectively. This pilot study indicates that multiparametric [18F]FDG/[18F]FMISO PET-MRI might provide complementary quantitative prognostic information on breast tumors including clinical endpoints and thus might be used to tailor treatment for precision medicine in breast cancer. abstract_id: PUBMED:27003055 Contribution of FDG PET/CT for the Optimization of the Management of Additional Lesions Detected on Local Staging Breast MRI. Objective: The purpose of this study is to evaluate the added value of FDG PET/CT for the management of additional lesions detected by breast MRI during initial staging of known breast cancer. Materials And Methods: We retrospectively queried our database for all consecutive patients with biopsy-confirmed breast cancer who underwent breast MRI and FDG PET/CT before neoadjuvant chemotherapy between November 2011 and November 2012. The final population comprised 80 patients. Initially, two readers retrospectively analyzed the breast MRI data for the presence of lesions in addition to the previously confirmed index neoplasm. Analysis of FDG PET/CT data followed; two different readers evaluated the presence or absence of FDG uptake in both breasts. The reference standard for additional lesions was percutaneous biopsy with subsequent 2-year follow-up for benign results. Statistical analysis was conducted with nonparametric analysis and the Fisher exact test. Results: The readers detected 61 additional breast lesions at MRI in 45 patients (56.2%); 37 of 61 (61%) additional lesions were malignant and 24 of 61 (39%) were benign. Among the 61 additional breast lesions detected by MRI, only 32 were positive at FDG PET/CT, with a positive predictive value of 90.6% and negative predictive value of 72%. The sensitivity, specificity, and accuracy of FDG PET/CT were 78.3%, 87.5%, and 81.9%, respectively. In eight cases, FDG PET/CT missed additional malignant lesions. All missed malignant additional lesions were smaller than 1 cm. In three cases, additional lesions also detected at FDG PET/CT were benign. Conclusion: In the case of additional lesions detected at MRI, a negative FDG PET/CT finding could replace a breast biopsy for lesions larger than 1 cm. abstract_id: PUBMED:27981471 Metabolic Tumor Burden Assessed by Dual Time Point [18F]FDG PET/CT in Locally Advanced Breast Cancer: Relation with Tumor Biology. Purpose: The aim of the study was to investigate the influence of dual time point 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) positron emission tomography/x-ray computed tomography (PET/CT) on the standard uptake value (SUV) and volume-based metabolic variables of breast lesions and their relation with biological characteristics and molecular phenotypes. Procedures: Retrospective analysis including 67 patients with locally advanced breast cancer (LABC). All patients underwent a dual time point [18F]FDG PET/CT, 1 h (PET-1) and 3 h (PET-2) after [18F]FDG administration. Tumors were segmented following a three-dimensional methodology. Semiquantitative metabolic variables (SUVmax, SUVmean, and SUVpeak) and volume-based variables (metabolic tumor volume, MTV, and total lesion glycolysis, TLG) were obtained. Biologic prognostic parameters, such as the hormone receptors status, p53, HER2 expression, proliferation rate (Ki-67), and grading were obtained. 
Molecular phenotypes and risk-classification [low: luminal A, intermediate: luminal B HER2 (-) or luminal B HER2 (+), and high: HER2 pure or triple negative] were established. Relations of clinical and biological variables with the metabolic parameters were studied. The relevance of each metabolic variable in the prediction of phenotype risk was assessed using a multivariate analysis. Results: SUV-based variables and TLG obtained in PET-1 and PET-2 showed high and significant correlations between them. MTV and SUV variables (SUVmax, SUVmean, and SUVpeak) were only marginally correlated. Significant differences were found between mean SUV variables and TLG obtained in PET-1 and PET-2. High and significant associations were found between metabolic variables obtained in PET-1 and their counterparts in PET-2. Based on that, only relations of PET-1 variables with biological tumor characteristics were explored. SUV variables showed associations with hormone receptors status (p < 0.001 and p = 0.001 for estrogen and progesterone receptor, respectively) and risk-classification according to phenotype (SUVmax, p = 0.003; SUVmean, p = 0.004; SUVpeak, p = 0.003). As to volume-based variables, only TLG showed association with hormone receptors status (estrogen, p < 0.001; progesterone, p = 0.031), risk-classification (p = 0.007), and grade (p = 0.036). Hormone receptor negative tumors, high-grade tumors, and high-risk phenotypes showed higher TLG values. No association was found between the metabolic variables and Ki-67, HER2, or p53 expression. Conclusion: Statistical differences were found between mean SUV-based variables and TLG obtained in the dual time point PET/CT. Most PET-derived parameters showed high association with molecular factors of breast cancer. However, dual time point PET/CT did not offer any added value over the single PET acquisition with respect to the relations with biological variables, since PET-1 SUV- and volume-based variables were predictors of those obtained in PET-2. abstract_id: PUBMED:35896912 Recent topics of the clinical utility of PET/MRI in oncology and neuroscience. Since the inline positron emission tomography (PET)/magnetic resonance imaging (MRI) system appeared in clinical practice, more than a decade has passed. In this article, we have reviewed recently published articles about PET/MRI. There have been articles about staging in rectal and breast cancers by PET/MRI using fluorodeoxyglucose (FDG) with higher diagnostic performance in oncology. Assessing possible metastatic bone lesions is considered a proper target for FDG PET/MRI. Other than FDG, PET/MRI with prostate specific membrane antigen (PSMA)-targeted tracers or fibroblast activation protein inhibitors has been reported. In particular, PSMA PET/MRI has been reported to be a promising tool for determining appropriate biopsy sites. Independent of tracers, the clinical application of artificial intelligence (AI) for images obtained by PET/MRI is one of the current topics in this field, suggesting clinical usefulness for differentiating breast lesions or grading prostate cancer. In addition, AI has been reported to be helpful for noise reduction in image reconstruction, which would be promising for reducing radiation exposure. Furthermore, PET/MRI has a clinical role in neuroscience, including localization of the epileptogenic zone. PET/MRI with new PET tracers could be useful for differentiation among neurological disorders.
Clinical applications of integrated PET/MRI in various fields are expected to be reported in the future. abstract_id: PUBMED:36766472 Prone versus Supine FDG PET/CT in the Staging of Breast Cancer. Supine [18F]Fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) is a commonly used modality for the initial staging of breast cancer, and several previous studies have shown superior sensitivity and specificity of prone FDG PET/CT in comparison to its supine counterpart. This retrospective study included 25 females with breast cancer referred for staging. They underwent supine FDG PET/CT followed by prone FDG PET/CT. The outcomes were: number of primary breast lesions, anatomical site of FDG-avid lymph nodes (LNs), and number and type of bone lesions, with SUVmax of all corresponding parameters. Performance was superior in prone acquisition compared to supine acquisition, with the respective results: 29 vs. 22 breast tumor lesions detected, 62 vs. 27 FDG-avid axillary LNs detected, sensitivity of 68% vs. 57%, specificity of 64% vs. 53%. The detection rate of axillary LNs in the prone position was significantly higher (p = 0.001). SUVmax for breast tumor lesions (p = 0.000) and number of detected axillary LNs (p = 0.002) were significantly higher in prone acquisition. Five patients were upstaged after experts read the prone acquisition. Prone FDG PET/CT acquisition is a promising technique in detecting primary breast lesions and metastatic LNs possibly missed in supine acquisition, which may lead to a change in patient staging and management. abstract_id: PUBMED:32992995 Characterisation of MRI Indeterminate Breast Lesions Using Dedicated Breast PET and Prone FDG PET-CT in Patients with Breast Cancer-A Proof-of-Concept Study. Magnetic resonance imaging (MRI) in patients with breast cancer to assess extent of disease or multifocal disease can demonstrate indeterminate lesions requiring second-look ultrasound and ultrasound or MRI-guided biopsies. Prone positron emission tomography-computed tomography (PET-CT) is a dedicated acquisition performed with a breast-supporting device on a standard PET-CT scanner. The MAMmography with Molecular Imaging (MAMMI, Oncovision, Valencia, Spain) PET system (PET-MAMMI) is a true tomographic ring scanner for the breast. We investigated if PET-MAMMI and prone PET-CT were able to characterise these MRI-indeterminate lesions further. A total of 10 patients with breast cancer and indeterminate lesions on breast MRI were included. Patients underwent prone PET-MAMMI and prone PET-CT after injection of FDG subsequently on the same day. Patients then resumed their normal pathway, with the clinicians blinded to the results of the PET-MAMMI and prone PET-CT. Of the MRI-indeterminate lesions, eight were histopathologically proven to be malignant and two were benign. PET-MAMMI and prone PET-CT were able to demonstrate increased FDG uptake in only 1/8 and 0/8 of the MRI-indeterminate malignant lesions, respectively. Of the MRI-indeterminate benign lesions, both PET-MAMMI and prone PET-CT demonstrated avidity in 1/2 of these lesions. Our findings do not support the use of PET-MAMMI to characterise indeterminate breast MRI lesions requiring a second-look ultrasound. abstract_id: PUBMED:17373334 Breast MRI and 18F FDG PET/CT in the management of breast cancer. Goals: 18F FDG PET/CT is used for diagnosis, staging and establishing the response to therapy in various malignancies, including breast cancer (BC).
Dedicated breast MRI (BMRI) is gaining a role in the management of BC patients (pts), demonstrating high sensitivity and specificity for detection of small lesions. We were therefore prompted to review our experience with PET and BMRI in BC. Methods: This is a retrospective study of 21 women with BC, 30-76 years old, who had BMRI and whole-body FDG PET/CT at our institution from June 2002 to May 2005. A total of 6 patients (group A) had BMRI and PET/CT in the preoperative period and 15 patients (group B) had BMRI and PET/CT after surgery. Reinterpretation of the imaging studies for accuracy and data analysis from medical records were performed. Results: For group A, BMRI identified breast lesions in 4 patients, while PET/CT was able to identify breast lesions in 5 patients. All these were proven to be malignant on pathology examination. In group B, BMRI detected recurrent breast lesions in 8 patients, with 88.9% sensitivity and 83.3% specificity. In the same patient population, PET/CT was 33.3% sensitive and 91.7% specific. As a whole body examination, PET/CT revealed metastatic disease in 6 patients (100% sensitive and 90% specific). Overall, sensitivities and specificities for breast disease detection were 85.7% and 85.7% for BMRI, and 75% and 92.3% for 18F FDG PET/CT. Conclusions: As expected, BMRI is more sensitive than PET/CT in the detection of breast lesions. However, PET/CT as a whole-body examination changed the management of disease by detection of distant lesions in 6 of the 21 patients. Our study suggests that 18F FDG PET/CT and BMRI should be considered as complementary imaging tools in the pre- and postoperative work-up of patients diagnosed with breast cancer. abstract_id: PUBMED:20017002 Dual-time-point [18F]-FDG PET/CT in the diagnostic evaluation of suspicious breast lesions. Purpose: The authors sought to evaluate whether the reacquisition of images 3 h after administration of radiotracer improves the sensitivity of fluorine-18 fluorodeoxyglucose positron emission tomography computed tomography ([(18)F]-FDG PET/CT) in patients with suspicious breast lesions. Materials And Methods: Forty-eight patients with 59 breast lesions underwent an [(18)F]-FDG PET/CT study in the prone position with a dual-time-point acquisition performed in the early phase 1 h after FDG administration (PET-1) and in the delayed phase 3 h after FDG administration (PET-2). Both examinations were evaluated qualitatively and semiquantitatively with calculation of the mean percentage variation of the standard uptake values (Delta% SUV(max)) between PET-1 and PET-2. All lesions with an SUV(max) ≥ 2.5 at PET-1 and/or an increase in SUV between PET-1 and PET-2 were considered malignant. The definitive histopathological diagnosis was available for all patients included in the study. Results: The dual-time-point acquisition of [(18)F]-FDG PET/CT displayed an accuracy of 85% for lesions with an SUV(max) ≥ 2.5 and/or positive Delta% SUV(max), with sensitivity and specificity values of 81% and 100%, compared with an accuracy of 69%, sensitivity of 63% (both p<0.001) and specificity of 100% (p=n.s.) for the single-time-point acquisition. Malignant lesions showed an increase in FDG uptake between PET-1 and PET-2, with a Delta% SUV(max) of 10+/-7 (p<0.04). In contrast, benign lesions showed a decrease in SUV between PET-1 and PET-2, with a Delta% SUV(max) of -21+/-7 (p<0.001).
Conclusions: The delayed repeat acquisition of PET images improves the accuracy of [(18)F]-FDG PET/CT in patients with suspicious breast lesions with respect to the single-time-point acquisition. In addition, malignant breast lesions displayed an increase in FDG uptake over time, whereas benign lesions showed a reduction. These variations in FDG uptake between PET-1 and PET-2 are a reliable parameter that can be used for differentiating between benign and malignant breast lesions. abstract_id: PUBMED:11702142 Comparison of pharmacokinetic MRI and [18F] fluorodeoxyglucose PET in the diagnosis of breast cancer: initial experience. It was the aim of this methodology-oriented clinical pilot study to compare the potential of dynamic MRI and 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) for the detection and characterization of breast cancer. Fourteen women with suspicious breast lesions were examined. The MRI data were acquired with a turbo fast low-angle shot sequence and analyzed using a pharmacokinetic model. Emission data were detected in the sensitive 3D modus, iteratively reconstructed, and superimposed onto corresponding transmission images. In the 14 patients, 13 breast masses with a suspicious contrast enhancement and FDG uptake were detected. For these lesions, no statistically significant correlation between evaluated MR and PET parameters was found. Of the 9 histologically confirmed carcinomas, 8 were correctly characterized with MRI and PET. Two inflammatory lesions were concordantly classified as cancer. Moreover, dynamic MRI yielded another false-positive finding. In 6 patients, PET detected occult lymph node and/or distant metastases. Although both functional imaging techniques provide independent tissue information, the results concerning the diagnosis of primary breast lesions were almost identical. An advantage of PET, however, is its ability to localize lymph node involvement and distant metastases as an integral part of the examination. Answer: Yes, 18F FDG PET can improve the evaluation of suspicious breast lesions on MRI. Studies have shown that the inclusion of 18F-FDG PET in the evaluation of suspicious breast lesions on MRI helps to differentiate benign from malignant breast lesions, particularly for mass lesions larger than 10 mm, with improved sensitivity, specificity, and accuracy (PUBMED:24931922). Additionally, sequential multiparametric 18[F]FDG PET-MRI has been demonstrated to provide complementary quantitative prognostic information on breast tumors, including clinical endpoints, which could be used to tailor treatment for precision medicine in breast cancer (PUBMED:30728757). Furthermore, FDG PET/CT has been found to have a positive predictive value of 90.6% and a negative predictive value of 72% for additional lesions detected by breast MRI, suggesting that a negative FDG PET/CT finding could replace a breast biopsy for lesions larger than 1 cm (PUBMED:27003055). Metabolic tumor burden assessed by dual time point [18F]FDG PET/CT also showed a correlation with biological characteristics and molecular phenotypes of breast cancer (PUBMED:27981471). Prone FDG PET/CT has been reported to have superior sensitivity and specificity compared to supine FDG PET/CT in the staging of breast cancer, indicating that it could be a promising technique in detecting primary breast lesions and metastatic lymph nodes that may be missed in supine acquisition (PUBMED:36766472). 
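The dual-time-point criterion in PUBMED:20017002 reduces to a simple rule on SUVmax at the two acquisitions. A minimal sketch of that rule; the 2.5 cut-off and the sign of the Delta% SUVmax change come from the abstract, while the function names are ours:

    def delta_suv_percent(suv1, suv2):
        # Percentage change in SUVmax between PET-1 (1 h) and PET-2 (3 h)
        return 100.0 * (suv2 - suv1) / suv1

    def considered_malignant(suv1, suv2, cutoff=2.5):
        # Malignant if SUVmax >= 2.5 at PET-1 and/or uptake rises over time
        return suv1 >= cutoff or delta_suv_percent(suv1, suv2) > 0

    print(considered_malignant(3.1, 3.4))  # True: high early uptake that keeps rising
    print(considered_malignant(1.8, 1.4))  # False: low uptake that washes out over time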
However, it is important to note that not all studies have found PET to be effective in all cases. For instance, a proof-of-concept study found that PET-MAMMI and prone PET-CT were not able to characterize the majority of MRI-indeterminate malignant lesions, suggesting that these methods may not be suitable for characterizing indeterminate breast MRI lesions (PUBMED:32992995). Overall, while breast MRI is highly sensitive in detecting breast lesions, PET/CT can provide additional whole-body information that may change the management of the disease by detecting distant lesions (PUBMED:17373334). Dual-time-point [18F]-FDG PET/CT has also been shown to improve the accuracy of [18F]-FDG PET/CT in patients with suspicious breast lesions compared to single-time-point acquisition (PUBMED:20017002).
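The predictive values cited from PUBMED:27003055 can be reconstructed from the lesion counts in that abstract (37 malignant and 24 benign additional lesions; 32 PET-positive lesions, 3 of them benign; 8 malignant lesions missed). A minimal sketch of the arithmetic, which reproduces the published figures up to rounding:

    tp, fp = 29, 3      # 32 PET-positive lesions: 29 malignant, 3 benign
    fn = 8              # malignant lesions missed by PET (all smaller than 1 cm)
    tn = 24 - fp        # remaining benign lesions, correctly PET-negative

    sensitivity = tp / (tp + fn)                  # 29/37 -> 78.4%
    specificity = tn / (tn + fp)                  # 21/24 -> 87.5%
    ppv = tp / (tp + fp)                          # 29/32 -> 90.6%
    npv = tn / (tn + fn)                          # 21/29 -> 72.4%
    accuracy = (tp + tn) / (tp + tn + fp + fn)    # 50/61 -> 82.0%
    print(sensitivity, specificity, ppv, npv, accuracy)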
Instruction: Can a standard treatment be proposed for penile cancer? Abstracts: abstract_id: PUBMED:18271420 Can a standard treatment be proposed for penile cancer? Objective: Malignant tumours of the penis are rare and their management is complex. The objective of this study was to review the management and report the results based on a personal series. Materials And Methods: Cases of penile cancer treated in our department between 2000 and 2006 were retrospectively reviewed. The following data were recorded: mode of presentation, staging, site, pathology, TNM stage, treatment, course and survival. Results: Seven cases of squamous cell carcinoma of the penis were diagnosed. The mean age of the patients was 58.8 years (range: 41-74). Two precancerous lesions (Bowen's disease) and one case of Human Papilloma Virus (HPV) infection were observed. Five patients received multimodal conservative treatment and 2 patients were treated by total amputation of the penis. Four patients have died. The mean survival was 19.3 months and the 5-year specific survival was 42.8%. Conclusion: Cancers of the penis are aggressive tumours with a poor prognosis. Treatment decisions, which are poorly defined in these cases of rare cancers, should be based on multidisciplinary consultations. abstract_id: PUBMED:33898656 Underutilization of Surgical Standard of Care for Insured Men with Invasive Penile Cancer. Purpose: Prior studies of mixed insurance populations have demonstrated poor adherence to surgical standard of care (SOC) for penile cancer. We used data from the Surveillance, Epidemiology and End Results (SEER) cancer registry linked to Medicare to calculate SOC adherence to surgical treatment of penile cancer in insured men over the age of 65, focusing on potential social and racial disparities. Methods: This is an observational analysis of patients with T2-4 penile cancer of any histologic subtype without metastasis in the SEER-Medicare database (2004-2015). SOC was defined as penectomy (partial or radical) with bilateral inguinal lymph node dissection (ILND) based on the National Comprehensive Cancer Network guidelines. We calculated proportions of those receiving SOC and constructed multivariate models to identify factors associated with receiving SOC. Results: A total of 447 men were included. Of these men, 22.1% (99/447) received SOC while 18.8% (84/447) received no treatment at all. Only 23.3% (104/447) had ILND while 80.9% (362/447) underwent total or partial penectomy. Race and socioeconomic status (SES) were not associated with decreased SOC. Increasing age (OR 0.93, 95%CI:0.89-0.96), Charlson Comorbidity Index score ≥ 2 (OR 0.53, 95%CI:0.29-0.97), and T3-T4 disease (OR 0.34, 95%CI:0.18-0.65) were associated with not receiving SOC on adjusted analysis. Conclusions: Rates of SOC are low among insured men 65 years of age or older with invasive penile cancer, regardless of race or SES. This finding is largely driven by low rates of ILND. Strategies are needed to overcome barriers to SOC treatment for men with invasive penile cancer. abstract_id: PUBMED:21309862 External validation of the proposed T and N categories of squamous cell carcinoma of the penis. Objectives: The aim of this study was to validate recently proposed modifications to the current TNM classification of penile squamous cell carcinoma (PSCC) by using data from four German urological centers.
Methods: We identified 89 patients treated for histologically confirmed PSCC between 1996 and 2008 and reclassified them according to the proposed TNM staging revisions. The proposed changes restricted T2 to tumoral invasion of the corpus spongiosum, whereas invasion of the corpus cavernosum was considered as T3. No changes were made to T1 and T4. Furthermore, N1 was limited to unilateral and N2 to bilateral inguinal lymph node involvement regardless of their number. Pelvic lymph node involvement and fixed lymph nodes were considered as N3 tumors. The range of follow-up after initial treatment was 1-142 months (mean 38). Results: Node-negative cases following the current classification were 65.2% (30/46), 48.5% (16/33) and 87.5% (7/8) for T1, T2 and T3, respectively. According to the proposed classification, N0 cases were markedly reduced in the T3 group (55.5%, 10/18) and relatively changed in the T2 group (56.5%, 13/23). T4 patients had no negative disease status. The 3-year disease-specific survival (DSS) rates for the proposed categories were 85.4%, 71.6% and 62.4% for T1, T2 and T3, respectively. For the current categories, the 3-year DSS rates were 85.4%, 66.9% and 100% for T1, T2 and T3, respectively. The 3-year DSS of the current N categories was 78.7%, 51% and 13.3% for N1, N2 and N3, respectively. According to the newly proposed categories, the 3-year DSS was 70%, 50% and 13.3% for N1, N2 and N3, respectively. Conclusion: Tumor and nodal staging according to the newly proposed TNM classification shows more distinct survival stratification than the current one. However, a multi-institutional validation is still required to further corroborate the proposed modifications. abstract_id: PUBMED:10439515 Our standard method of reconstruction of the penis and urethra in female to male transsexuals. The authors used as their standard a one-stage method of microsurgical reconstruction of the penis, urethra and glans in 38 transsexual patients, a combination of Biemer's method, Trengove-Jones' technique and Horton's suture of the glans. The reconstruction method is described in detail. Although reconstruction of the penis and urethra involved a group of 49 patients and accounted for less than 10% of all microsurgical tissue transfers in traumatology, oncological surgery and congenital defects, it is an important activity in systematic surgery. abstract_id: PUBMED:25236319 Melanoma of the glans penis successfully treated with topical imiquimod: dermoscopy usefulness in clinical monitoring and review of the literature. Melanoma in situ (MIS) of the penis is very rare in dermatologic literature. The standard of care for MIS is surgical removal by excision with 5-mm margins or Mohs micrographic surgery. Nevertheless, surgery is occasionally not feasible for a number of reasons, such as patient comorbidities, potential aesthetic and functional impairment and patient preference. Recently, topical treatment with an immunomodulator, imiquimod, has been proposed as an alternative treatment for MIS. Dermoscopy, beyond its well established usefulness in the diagnostic evaluation of melanocytic lesions, has also been shown to be an important tool in monitoring therapeutic response to various dermatoses. We present a case of a 38-year-old man who presented with MIS of the glans penis, histopathologically diagnosed, treated successfully using imiquimod.
Therapies for hemangiomas of the penis include surgical excision, electrofulguration, cryotherapy, sclerotherapy, and neodymium:yttrium aluminum garnet laser treatment. Urologists often face difficulties in deciding whether to use surgery in penile hemangioma cases. In this study, we investigated the safety and feasibility of surgical treatment of hemangioma in the penis. The study included 6 patients, 19 to 42 years of age (median age 23 years), with hemangiomas on the dorsal aspect of the penis. All patients were treated with surgery. All operations were successful, and no complications were observed. Patients were followed up for 2 to 5 years (median 2.5 years) after discharge from the hospital. Five patients were assessed for sexual function. Lesions healed completely, and all patients were satisfied with the aesthetic results. All patients had returned to normal sexual activity within 3 months of the operation. Sexual function and sexual satisfaction were well maintained after the operation. A therapeutic reference standard for the treatment of penile hemangioma is still lacking because of the rarity of the disease. The results of our experience confirm that surgical treatment for penile hemangioma represents a safe radical curative procedure. Surgical treatment offers an alternative to the more conservative therapy previously advocated. abstract_id: PUBMED:31928389 Short-Term Surgical Outcomes of Standard and Lateral Video Endoscopic Inguinal Lymphadenectomy: A Multinational Retrospective Study. Background: Video endoscopic inguinal lymphadenectomy (VEIL) is a minimally invasive technique that gives superior surgical outcomes to open inguinal lymphadenectomy (IL) for treating lymph node metastasis in penile, vulvar, and skin cancers. This study compared surgical outcomes obtained with two different approaches to VEIL, standard VEIL and lateral VEIL (L-VEIL), in cancer patients. Methods: Sixty-two patients who underwent standard VEIL (n = 15) or L-VEIL (n = 47) for treatment of lymph node metastasis were evaluated retrospectively from three centers in Brazil, Egypt, and India. The primary endpoint analyzed was the conversion rate to open IL in the two groups, and the secondary endpoints included operative time, estimated blood loss, nodal yield, nodal positivity, postoperative drain duration, and postoperative complications. Results: The conversion rate to open IL was higher in the L-VEIL group than in the VEIL group (2% vs. 0%). Significantly less blood loss was reported with L-VEIL compared with VEIL (mean difference: 3.63 mL; P = .01). Postoperative drain duration was significantly lower with L-VEIL (-4.34 days; P < .05) than with VEIL. The L-VEIL group had a higher number of lymph nodes without infiltration (mean difference: -0.48; P = .02). Operative time, nodal yield, nodal positivity, and hospitalization duration were similar in both groups. Postoperative complications were higher in the L-VEIL versus VEIL group (35 vs. 11 cases). Lymphedema events were significantly higher with L-VEIL in comparison with VEIL (38.8% vs. 16.7%; P = .03). Among patients with penile cancer, no significant difference was observed in outcomes obtained with VEIL and L-VEIL. Conclusion: As L-VEIL and VEIL approaches lead to comparable surgical outcomes, surgeons may choose either of these as per their convenience. abstract_id: PUBMED:27526657 Topical imiquimod in the treatment of extramammary Paget's disease: A 10-year retrospective analysis in an Asian tertiary centre.
Extramammary Paget's disease (EMPD) is a rare intraepithelial adenocarcinoma usually found in apocrine-rich areas. Although surgery remains the standard treatment, topical imiquimod has emerged as a promising drug for the treatment of EMPD in recent years. We present our experience in treating EMPD in Asian skin successfully with topical imiquimod 5% cream over the past 10 years in our tertiary institution. abstract_id: PUBMED:35044047 Incidence of penile intraepithelial neoplasia and treatment strategies in Sweden 2000-2019. Objectives: To analyse the incidence, treatment strategies and complications associated with penile intraepithelial neoplasia (PeIN) in Sweden over a period of 20 years. Materials And Methods: Data on PeIN from the Swedish National Penile Cancer Register were analysed regarding treatment in relation to age, size of the PeIN lesion, localization of the PeIN lesion and complications using chi-squared tests and logistic regression. The incidence of PeIN was calculated and age-standardized according to the European Standard population. Results: Between 2000 and 2019, a total of 1113 PeIN cases were reported. The age-standardized incidence of PeIN was 1.40 per 100 000 men (95% confidence interval [CI] 1.32-1.49). An increase in incidence over time was seen, with a standardized incidence rate of 2.37 (95% CI 1.56-3.70) in 2019 compared to the baseline year, 2000. Surgical or topical treatments were given in 75.0% and 14.6% of cases, respectively. The complication rate was higher in laser surgery (12.1%, 7/58) compared to local surgery (4.6%, 16/348; P = 0.03) with an age-adjusted odds ratio (OR) of 2.82 (95% CI 1.10-7.19; P = 0.03). Local surgery was more common than laser surgery in the last 5 years compared to the first 5 years of the study period: OR 5.75 (95% CI 2.94-11.27). Treatments with imiquimod and topical 5-fluorouracil (5-FU) were more common than destructive methods such as photodynamic therapy, cryotherapy, curettage and electrocautery in the last 5 years compared to the first 5 years: OR 9.48 (95% CI 2.29-39.24). Conclusions: A twofold increase in the age-standardized incidence of PeIN was seen in Sweden over 20 years. Complications were three times more common in laser surgery compared to local surgery. Changes in treatment showed an increase in treatment strategies such as local surgery and treatment with imiquimod and topical 5-FU over time. abstract_id: PUBMED:32414626 Risks from Deferring Treatment for Genitourinary Cancers: A Collaborative Review to Aid Triage and Management During the COVID-19 Pandemic. Context: The coronavirus disease 2019 (COVID-19) pandemic is leading to delays in the treatment of many urologic cancers. Objective: To provide a contemporary picture of the risks from delayed treatment for urologic cancers to assist with triage. Evidence Acquisition: A collaborative review using literature published as of April 2, 2020. Evidence Synthesis: Patients with low-grade non-muscle-invasive bladder cancer are unlikely to suffer from a 3-6-month delay. Patients with muscle-invasive bladder cancer are at risk of disease progression, with radical cystectomy delays beyond 12 wk from diagnosis or completion of neoadjuvant chemotherapy. Prioritization of these patients for surgery or management with radiochemotherapy is encouraged. Active surveillance should be used for low-risk prostate cancer (PCa). Treatment of most patients with intermediate- and high-risk PCa can be deferred 3-6 mo without change in outcomes.
The same may be true for cancers with the highest risk of progression. With radiotherapy, neoadjuvant androgen deprivation therapy (ADT) is the standard of care. For surgery, although the added value of neoadjuvant ADT is questionable, it may be considered if a patient is interested in such an approach. Intervention may be safely deferred for T1/T2 renal masses, while locally advanced renal tumors (≥T3) should be treated expeditiously. Patients with metastatic renal cancer may consider vascular endothelial growth factor targeted therapy over immunotherapy. Risks for delay in the treatment of upper tract urothelial cancer depend on grade and stage. For patients with high-grade disease, delays of 12 wk in nephroureterectomy are not associated with adverse survival outcomes. Expert guidance recommends expedient local treatment of testis cancer. In penile cancer, adverse outcomes have been observed with delays of ≥3 mo before inguinal lymphadenectomy. Limitations include a paucity of data and methodologic variations for many cancers. Conclusions: Patients and clinicians should consider the oncologic risk of delayed cancer intervention versus the risks of COVID-19 to the patient, treating health care professionals, and the health care system. Patient Summary: The coronavirus disease 2019 pandemic has led to delays in the treatment of patients with urologic malignancies. Based on a review of the literature, patients with high-grade urothelial carcinoma, advanced kidney cancer, testicular cancer, and penile cancer should be prioritized for treatment during these challenging times. Answer: Based on the provided abstracts, it appears that a standard treatment for penile cancer can be proposed, but it is complex and should be tailored to the individual patient's condition. The management of penile cancer involves a variety of treatment options, including surgery, which is considered the standard of care (SOC) for invasive penile cancer. According to the National Comprehensive Cancer Network guidelines, SOC for penile cancer includes penectomy (partial or radical) with bilateral inguinal lymph node dissection (ILND) (PUBMED:33898656). However, adherence to SOC is low, with only 22.1% of men in a study receiving SOC, and 18.8% receiving no treatment at all. Factors such as increasing age, higher Charlson Comorbidity Index score, and advanced T-stage were associated with not receiving SOC (PUBMED:33898656). The treatment decisions for penile cancer are often based on multidisciplinary consultations due to the rarity and aggressive nature of the disease (PUBMED:18271420). Additionally, the staging of penile cancer is important for treatment planning, and external validation of proposed T and N categories has shown a more distinctive survival compared to the current TNM classification, suggesting that staging revisions may improve treatment outcomes (PUBMED:21309862). Other treatment modalities for penile cancer and related conditions include topical treatments such as imiquimod for melanoma of the glans penis (PUBMED:25236319) and extramammary Paget's disease (PUBMED:27526657), as well as surgical treatments for conditions like hemangioma on the dorsum of the penis (PUBMED:22403282). Video endoscopic inguinal lymphadenectomy (VEIL) is a minimally invasive technique that has been compared to open inguinal lymphadenectomy for treating lymph node metastasis, with similar surgical outcomes between standard VEIL and lateral VEIL (L-VEIL) (PUBMED:31928389). 
In summary, while a standard treatment for penile cancer can be proposed, it is important to consider individual patient factors, adherence to SOC, and multidisciplinary input. Surgical treatment, including penectomy and ILND, remains a cornerstone of management, but other treatments such as topical therapies and minimally invasive techniques are also important components of care. Additionally, the timing of treatment is critical, and delays in treatment, such as those due to the COVID-19 pandemic, can impact outcomes for patients with penile cancer. It is recommended that patients with penile cancer be prioritized for treatment to avoid adverse outcomes associated with treatment delays of three months or more (PUBMED:32414626). Furthermore, the incidence and treatment strategies for penile intraepithelial neoplasia (PeIN) have evolved over time, with an increase in the use of local surgery and topical treatments such as imiquimod and 5-fluorouracil (5-FU) in recent years. Complications were found to be more common with laser surgery compared to local surgery, indicating a shift towards treatment strategies that may offer fewer complications (PUBMED:35044047). In conclusion, while a standard treatment for penile cancer can be proposed, it must be personalized and based on a thorough evaluation of the patient's specific cancer stage, overall health, and preferences. Surgical treatment is central, but other modalities and the timing of treatment are also critical to optimize patient outcomes.
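To make the registry statistics cited above concrete, here is a minimal Python sketch of how the laser-versus-local-surgery complication comparison from the Swedish PeIN registry (PUBMED:35044047) can be reproduced from its raw counts. The crude odds ratio computed this way (~2.85) is a plain arithmetic check; it differs slightly from the age-adjusted OR of 2.82 that the registry reports from logistic regression.

```python
import math

# 2x2 complication table from the Swedish PeIN registry abstract
# (PUBMED:35044047): laser surgery 7/58 vs local surgery 16/348.
a, b = 7, 58 - 7      # laser: complications, no complications
c, d = 16, 348 - 16   # local: complications, no complications

or_crude = (a * d) / (b * c)  # crude odds ratio from the 2x2 table

# Wald 95% CI on the log-odds scale
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_crude) - 1.96 * se)
hi = math.exp(math.log(or_crude) + 1.96 * se)

print(f"crude OR = {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> crude OR = 2.85 (95% CI 1.12-7.26); the registry's logistic
#    regression reports an age-adjusted OR of 2.82 (1.10-7.19).
```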
Instruction: Cryopreservation of parathyroid tissue after parathyroid surgery for renal hyperparathyroidism: does it really make sense? Abstracts: abstract_id: PUBMED:22851148 Cryopreservation of parathyroid tissue after parathyroid surgery for renal hyperparathyroidism: does it really make sense? Background: Metachronous autotransplantation of cryopreserved parathyroid tissue is a technique for treating postoperative hypoparathyroidism after parathyroid surgery for renal hyperparathyroidism (rHPT). The aim of the present study was to evaluate our institution's experience with metachronous autotransplantation to analyze the role of cryopreservation in the treatment of rHPT and to determine for whom and when cryopreservation of parathyroid tissue should be deemed necessary. Methods: A prospective database of patients with rHPT who underwent surgery between 1976 and 2011 was screened for patients with hypoparathyroidism who received a metachronous autotransplantation. Data were analyzed regarding clinical data, histopathological findings of the cryopreserved parathyroid tissues, and patient outcome after metachronous replantation of parathyroid tissue. Results: Fifteen of 883 patients with rHPT underwent a metachronous autotransplantation under local anesthesia at a mean time of 23 months following the last cervical surgery. Histopathology of the parathyroid tissue chosen for transplantation revealed a necrosis rate of 0 % in 14 and 70 % in one patient. Mean preoperative serum calcium and parathyroid hormone (PTH) levels were 2.0 mmol/l and 3.7 pg/ml, respectively. Autotransplantation raised mean serum calcium and PTH levels to 2.2 mmol/l and 97.5 pg/ml, respectively, after a mean follow-up of 78 months. Conclusions: Metachronous autotransplantation following parathyroid surgery in patients with rHPT effectively normalizes PTH and calcium levels. The success rate is high if an adequate cryopreservation procedure is applied. However, it is rarely necessary, and therefore the cryopreservation of parathyroid tissue in all patients has to be questioned, at least from an economic point of view. abstract_id: PUBMED:6470067 Our experience with autotransplantation and cryopreservation of parathyroid tissue in thyro-parathyroid surgery Results are reported of four and a half years of experience with autotransplantation and cryopreservation of parathyroid tissue in thyro-parathyroid surgery. One hundred patients with hyperparathyroidism secondary to renal insufficiency were treated by total parathyroidectomy and partial autotransplantation, immediate in 98 cases and secondarily using their frozen stored tissue in the other two cases. Results were compared with those in 13 other patients treated during the same period by subtotal parathyroidectomy. Seven of 46 patients with primary parathyroid hyperplasia underwent total parathyroidectomy with immediate partial autotransplantation. Grafts were not used in the other two previously cervicotomized patients. All parathyroid tissue removed during these operations was frozen and stored if not used for grafting. Of 116 patients treated by enlarged total thyroidectomy for cancer, 34 received parathyroid autotransplants; complementary in 19 cases and substitutive in 15. Technique and results are described, and respective indications for autotransplantation and parathyroid cryopreservation discussed with respect to primary, secondary, persisting or recurrent hyperparathyroidism, and to total thyroidectomy for cancer. 
abstract_id: PUBMED:9426420 The current role of parathyroid cryopreservation and autotransplantation in parathyroid surgery: an institutional experience. Background: Hypoparathyroidism after cervical exploration is a rare but problematic complication. Cryopreservation and subsequent autotransplantation of parathyroid tissue are infrequently used to combat this problem; effective usage of this surgical adjunct remains variable. Methods: From 1981 through 1995 we performed 3080 cervical explorations for hyperparathyroidism. Cryopreservation was performed in 112 (3.6%) patients. This review evaluates our indications and usage of cryopreservation and autotransplantation and the eventual outcome after autotransplantation. Results: Of 81 women and 31 men, 106 (95%) had undergone previous exploration for hyperparathyroidism or thyroid disease. The primary indication for cryopreservation was uncertainty about the viability and number of remaining parathyroid glands. After operation 23 patients (20%) were permanently hypocalcemic and became autotransplantation candidates. Thirteen patients underwent a total of 15 autotransplantations (median postoperative interval, 7 months). Although 6 of 15 grafts (40%) were shown to secrete parathyroid hormone, only three patients (23%) were normocalcemic without supplemental therapy. Conclusions: Cryopreservation with autotransplantation is in theory a sound but difficult practice to correct postexploration hypocalcemia. The principal indication for cryopreservation is the uncertainty regarding the status of remaining normal parathyroid tissue. Because we cannot predict postexploration hypocalcemia, cryopreservation plays a small but sometimes integral role in parathyroid surgery. Indications for cryopreservation in our practice are rare, and the rate of cryopreservation tissue usage is low. abstract_id: PUBMED:426546 Autotransplantation of cryopreserved parathyroid tissue in man. Human cryopreserved parathyroid autografts have been performed in six patients following reoperative parathyroid surgery. All patients were rendered hypoparathyroid by their most recent reoperation. Parathyroid tissue was successfully autotransplanted after as long as eighteen months of cryopreservation. Viability and expected in vivo function of cryopreserved parathyroid tissue may be predicted by in vitro testing of parathyroid hormone secretion in response to varying ambient calcium concentration. Parathyroid cryopreservation with subsequent autotransplantation is a practical solution to the problem of permanent hypoparathyroidism that may follow multiple surgical procedures for persistent hyperparathyroidism. abstract_id: PUBMED:23504122 Parathyroid cryopreservation after parathyroidectomy: a worthwhile practice? Background: Parathyroid cryopreservation is often utilized for patients having parathyroidectomy. This allows for future autotransplantation if a patient becomes permanently hypocalcemic after surgery. However, the practice of cryopreservation is costly and time-consuming, while the success rate of delayed autotransplantation is highly variable. We sought to determine the rate and outcomes of parathyroid cryopreservation and delayed autotransplantation at our institution to further evaluate its utility. Methods: At our institution, 2,083 parathyroidectomies for hyperparathyroidism (HPT) were performed from 2001 to 2010. Of these, parathyroid cryopreservation was utilized in 442 patients (21 %). 
Patient demographics, preoperative diagnoses, and other characteristics were analyzed, as well as the rate and success of delayed autotransplantation. Results: Of the 442 patients with cryopreservation, the mean age was 55 ± 1 years and 313 (70.8 %) were female. A total of 308 (70 %) had primary HPT, 46 (10 %) had secondary HPT, and 88 (20 %) had tertiary HPT. Delayed autotransplantation of cryopreserved parathyroid tissue was used in 4 (1 %) patients at an average time of 9 ± 4 months after initial surgery. Three of the 4 patients remained hypoparathyroid after this procedure. The single cured patient underwent the procedure only 4 days after the initial parathyroidectomy. Conclusions: Although cryopreservation was used in over one-fifth of patients undergoing parathyroidectomy, the need for parathyroid reimplantation was very low (1 %). Furthermore, the success rate of parathyroid autotransplantation was poor in these patients. Therefore, the continued practice of parathyroid cryopreservation is questionable. abstract_id: PUBMED:17111433 Successful implant of long-term cryopreserved parathyroid glands after total parathyroidectomy. Background: Parathyroid cryopreservation is essential in some cases of parathyroid surgery. The fate of autografted tissue after long-term cryopreservation is not fully discussed in the literature. Methods: The successful experience with the use of parathyroid tissues preserved for 21 months and 30 months is reported. Results: Both patients were women with renal hyperparathyroidism who underwent total parathyroidectomy without autotransplantation. Patient 1 was a 40-year-old woman. At 21 months of follow-up, her parathyroid hormone (PTH) level was undetectable, and despite oral calcium supplements, she was hypocalcemic. Forty-five cryopreserved fragments were thawed and implanted in her forearm. Calcium levels improved, and PTH steadily increased in both arms. PTH levels at 18 months after the autograft were 37.0 pg/mL in the contralateral arm and 1150.0 pg/mL in the implant arm. Patient 2 was a 44-year-old woman. After 30 months, her PTH was undetectable, and she underwent cryopreserved tissue implantation. Conclusion: These cases show that parathyroid tissue may remain viable even after long-term storage. abstract_id: PUBMED:3727844 Autotransplantation of cryopreserved human parathyroid tissue Replantation of autologous cryopreserved parathyroid tissue is a procedure well established in surgery of hyperparathyroidism. It has proved to be a good approach to successful surgical treatment of hypocalcaemia in the wake of parathyroid operations. Cryopreservation of parathyroid tissue is indicated in cases of subtotal parathyroidectomy, total parathyroidectomy with autotransplantation of fresh tissue, re-operation for persistent or recurrent hyperparathyroidism, and surgery for hyperparathyroidism following thyroidectomy. abstract_id: PUBMED:7885307 Diagnosis of dysfunctions of replanted parathyroid gland tissue by bilateral analysis of intact parathyroid hormones in cubital vein blood. A prospective study Aim: In a prospective long-term follow-up study after operative therapy of hyperparathyroidism the value of bilateral determination of parathyroid hormone levels in cubital venous blood after total parathyroidectomy/autologous parathyroid gland reimplantation (musculus brachioradialis) for diagnosis of dysfunctioning grafted tissue is evaluated. 
Patients And Material: From August 1, 1987 to March 31, 1994, 68 of 243 patients operated on for hyperparathyroidism underwent total parathyroidectomy. Autologous reimplantation of parathyroid gland tissue was carried out simultaneously in 64 patients. Delayed reimplantation of cryopreserved tissue was carried out twice, and two patients have not undergone reimplantation to date. Three patients were operated on for a hyperfunctioning parathyroid autograft after previous total parathyroidectomy/reimplantation. Results: During follow-up, 5 patients developed dysfunction of the (reimplanted) parathyroid gland. Because of low or undetectable levels of intact parathormone, the gradients of intact parathyroid hormone between the grafted and non-grafted forearm were about 1:1 in postoperative hypoparathyroidism as well as in hypofunction of the parathyroid gland. After successful replantation of cryopreserved parathyroid tissue, gradients of intact parathyroid hormone increased (> 1:10). In hyperfunction of grafted parathyroid tissue, hormone gradients were high (> 1:20 to 1:45.3) because of excessively high levels of intact parathormone in the cubital vein of the graft-bearing arm. Successful reduction of the parathyroid graft was followed by a decrease in parathyroid hormone gradients. Conclusion: Regular follow-up of intact parathormone gradients, together with intact parathyroid hormone levels and serum calcium analysis, allows determination of parathyroid graft function. Differentiation between graft-dependent hyperparathyroidism and hyperfunctioning parathyroid tissue in the neck or mediastinum also seems possible by bilateral determination of intact parathormone. Normal values or a normal range for intact parathyroid hormone gradients cannot yet be defined.
Traditionally, this has involved intraoperative frozen sections during bilateral surgical exploration of the neck, to confirm removal of parathyroid tissue. With recent developments in imaging, some diseased glands can be localised preoperatively, enabling removal by minimally invasive, targeted surgery, with or without additional non-histological intraoperative procedures to confirm the removal of all hyperfunctioning parathyroid tissue. This article reviews these developments and describes the ideal approach to reporting parathyroid specimens. Answer: Cryopreservation of parathyroid tissue after parathyroid surgery for renal hyperparathyroidism (rHPT) is a technique used to treat postoperative hypoparathyroidism through metachronous autotransplantation. The practice, however, has been met with varying opinions regarding its necessity and effectiveness. On one hand, studies have shown that metachronous autotransplantation can effectively normalize parathyroid hormone (PTH) and calcium levels in patients with rHPT. For instance, a study reported that 15 out of 883 patients with rHPT underwent successful metachronous autotransplantation, with a high success rate when an adequate cryopreservation procedure was applied (PUBMED:22851148). Another report indicated that human cryopreserved parathyroid autografts were successful in patients rendered hypoparathyroid after reoperative parathyroid surgery, suggesting that cryopreservation with subsequent autotransplantation is a practical solution to permanent hypoparathyroidism following multiple surgeries for persistent hyperparathyroidism (PUBMED:426546). However, the necessity of cryopreservation for all patients undergoing parathyroidectomy has been questioned due to its economic implications and how rarely it proves necessary. For example, one study found that out of 442 patients who had parathyroid tissue cryopreserved, only 4 (1%) required delayed autotransplantation, and the success rate of these procedures was poor (PUBMED:23504122). Another study highlighted that while cryopreservation is theoretically sound, it is in practice a difficult way to correct postexploration hypocalcemia, and the rate of cryopreserved tissue usage is low (PUBMED:9426420). In conclusion, while cryopreservation of parathyroid tissue can be beneficial for certain patients, particularly those who develop postoperative hypoparathyroidism, the overall low rate of necessity and variable success rates call into question the practicality of routinely cryopreserving parathyroid tissue for all patients undergoing surgery for rHPT. The decision to cryopreserve tissue should be carefully considered, taking into account the likelihood of its future use and the costs associated with the procedure.
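The bilateral PTH-gradient logic described above (PUBMED:7885307) lends itself to a small worked example. The following Python sketch encodes the gradient bands reported there (about 1:1 in hypoparathyroidism or graft hypofunction, above 1:10 after successful replantation, 1:20 up to 1:45.3 in graft hyperfunction); the function name and example values are hypothetical, and the abstract itself cautions that normal gradient ranges are not yet defined.

```python
def classify_graft_function(pth_graft_arm: float, pth_other_arm: float) -> str:
    """Classify parathyroid graft function from bilateral intact PTH (pg/mL).

    Gradient bands follow PUBMED:7885307: ~1:1 in hypoparathyroidism or
    graft hypofunction, >1:10 after successful replantation, and 1:20 up
    to 1:45.3 in graft hyperfunction. The abstract stresses that normal
    ranges are not yet defined, so this is illustrative, not diagnostic.
    """
    gradient = pth_graft_arm / pth_other_arm  # graft-bearing : non-grafted arm
    if gradient >= 20:
        return "suspected hyperfunction of the grafted tissue"
    if gradient > 10:
        return "functioning graft (gradient > 1:10)"
    return ("gradient near 1:1 - hypoparathyroidism or graft hypofunction; "
            "interpret together with serum calcium and absolute intact PTH")

# Hypothetical follow-up values:
print(classify_graft_function(pth_graft_arm=240.0, pth_other_arm=12.0))
```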
Instruction: Can we skill and activate children through primary school physical education lessons? Abstracts: abstract_id: PUBMED:29766699 Evaluation of health status of children attending primary schools with different organization of physical education lessons. Background: The mandatory swimming lesson in primary schools equipped with swimming pools was introduced without study of its health-saving effectiveness. Objective: The purpose of this study was to evaluate the health status of pupils studying in schools with different organization of physical education lessons. Materials And Methods: A cross-sectional study was organized in two schools with different organization of physical education lessons. The experimental group (E) consisted of 408 children in years 1-4 of study (210 girls and 198 boys) who swam in the school pool during one of their physical education lessons. The control group (C) consisted of 279 primary school children (210 girls and 156 boys) from a neighboring educational institution where all physical education lessons were organized in the gym. Health status was evaluated using the classical method of complex health assessment, with subsequent assignment of each child to one of the health groups. Results: Evaluation of health status established that the proportion of boys with harmonious anthropometric parameters was higher among pupils in the E group (p<0.05), and children in this group were stronger than those in the C group (p<0.05). The prevalence of diseases of the musculoskeletal system and the eyes among pupils in the E group was lower compared with the C group (p<0.01). In the E group, the percentage of pupils assigned to health group I was significantly higher, and the percentage assigned to group II (children with some morpho-functional abnormalities) significantly lower, than in the C group. Conclusion: Organization of one mandatory swimming lesson per week in primary school has a positive effect on the health status of children. abstract_id: PUBMED:29783221 Hygienic assessment of different forms of physical education lessons organization in primary school. Objective: Different forms of physical education lesson organization, including swimming classes, were introduced into the education process without study of their health-saving effectiveness. The aim of this study was a hygienic assessment of different forms of physical education lesson organization in primary school. Patients And Methods: A cross-sectional study was conducted in two schools with different organization of physical education lessons. The experimental group (E) consisted of 408 children in years 1-4 of study (210 girls and 198 boys) who swam in the school pool during one of their physical education lessons. The control group (C) consisted of 279 primary school children (210 girls and 156 boys) from a neighboring educational institution where all physical education lessons were organized in the gym. An integral assessment of the physical education classes was performed. Lessons in the gym and the pool were timed step by step with pulse measurement. The target heart rate level was calculated using the Karvonen formula. Heart rate variability parameters were measured using the FAZAGRAF® diagnostic system. Results: A low level of physical load in both the E and C groups causes functional tension of the cardiovascular system.
Pulse restitution and the distribution of types of vegetative regulation indicate that children who attended one swimming lesson per week recovered better after physical load than the C group. Conclusion: Primary schoolchildren have low physical endurance. Attending one swimming lesson per week contributes to better recovery after physical load and improves vegetative regulation. abstract_id: PUBMED:33499424 Moderate-to-Vigorous Physical Activity in Primary School Children: Inactive Lessons Are Dominated by Maths and English. Background: A large majority of primary school pupils fail to achieve 30 min of daily, in-school moderate-to-vigorous physical activity (MVPA). The aim of this study was to investigate MVPA accumulation and subject frequency during academic lesson segments and the broader segmented school day. Methods: 122 children (42.6% boys; 9.9 ± 0.3 years) from six primary schools in North East England wore uniaxial accelerometers for eight consecutive days. Subject frequency was assessed by teacher diaries. Multilevel models (children nested within schools) examined significant predictors of MVPA across each school-day segment (lesson one, break, lesson two, lunch, lesson three). Results: Pupils averaged 18.33 ± 8.34 min of in-school MVPA, and 90.2% failed to achieve the in-school 30-min MVPA threshold. Across all school-day segments, MVPA accumulation was typically influenced at the individual level. Lessons one and two, dominated by maths and English, were less active than lesson three. Break and lunch were the most active segments. Conclusion: This study breaks new ground, revealing that MVPA accumulation and subject frequency vary greatly during different academic lessons. Morning lessons were dominated by the inactive delivery of maths and English, whereas afternoon lessons involved a greater array of subject delivery that resulted in marginally higher levels of MVPA.
This should therefore be considered for policies, timetables and curriculums in order to offer sufficient opportunities for children to be physically active during the school day. abstract_id: PUBMED:37809003 Using the ADDIE model to design and develop physical education lessons incorporated with a functional training component. Introduction: Good physical fitness is the foundation of a healthy lifestyle. For students, the school becomes the main place to improve their physical fitness. The traditional physical education class places little emphasis on improving physical fitness, and students' physical fitness has continued to decline. To address these challenges, this study aimed to design and develop a functional training program that can be incorporated into existing physical education lessons to improve students' physical fitness levels. Methods: This study adopted the instructional design framework of the Analysis, Design, Development, Implementation, Evaluation (ADDIE) model to guide the design and development of the functional training program. After development, the program was implemented and evaluated. Results: The results showed the program aroused students' interest and significantly improved students' physical fitness. Conclusion: This study showed the usefulness of functional training in improving the physical fitness of primary school students. In addition, it provided a reference for how to use functional training in conjunction with the physical education syllabus. abstract_id: PUBMED:12649058 Can we skill and activate children through primary school physical education lessons? "Move it Groove it": a collaborative health promotion intervention. Background: Physical education (PE) lessons are an ideal setting to improve child fundamental movement skills (FMSs) and increase physical activity (PA) for optimal health. Despite this, few studies have assessed the potential to do both simultaneously. The "Move It Groove It" primary school intervention in New South Wales, Australia, had this opportunity. Methods: A whole school approach to implementation included establishment of school project teams, a teacher "buddy" system, a project Web site, teacher training workshops, and small grants for equipment. The quasi-experimental evaluation involved 1,045 year 3 and 4 children (aged 7 to 10 years) in nine intervention and nine control rural primary schools (53% boys/47% girls). It utilised pre- and postobservational surveys of (1) mastery or near mastery levels for each of eight FMSs, (2) proportion of PE lesson time spent in moderate to vigorous PA (MVPA) and vigorous PA (VPA), and (3) teacher- and lesson-related contextual covariates. Data were analysed by hierarchical logistic multiple regression. Results: For FMSs, overall mastery or near mastery level at baseline was 47%, ranging from 22.7% for the overarm throw among girls to 75.4% for the static balance among boys. The intervention delivered substantial improvements in every FMS for both genders, ranging from 7.2% to 25.7% (13 of 16 comparisons were significant). For PA level, mean MVPA at baseline was 34.7%. Baseline MVPA for boys was 38.7% and for girls was 33.2%. The intervention was associated with a nonsignificant 4.5% increase in MVPA and a significant 3.0% increase in VPA. This translates to a gain of <1 minute of MVPA per average 21-minute lesson.
Conclusions: This is the first study to show that by modifying existing PE lessons, significant improvements in FMS mastery can be gained without adversely affecting children's MVPA and VPA. To increase PA levels, we recommend increasing the number of PE lessons per week. abstract_id: PUBMED:35329366 Physical Education on the Beach: An Alternative Way to Improve Primary School Children's Skill- and Health-Related Outcomes during the COVID-19 Pandemic. The COVID-19 restrictions could preclude children from participating in physical education (PE) interventions. This study aimed to evaluate the efficacy of a PE intervention conducted on the beach on children's skill- and health-related outcomes, as a possible alternative PE intervention that could also be applied during the COVID-19 pandemic. The study involved 106 primary school children, randomly assigned to the traditional indoor (TI) intervention or to the experimental outdoor (EO) intervention. The intervention period lasted 4 months and consisted of two 1-h sessions per week. The intervention was conducted just before the beginning of the COVID-19 pandemic. Children's anthropometric parameters (height, weight, BMI, body fat percentage, and abdominal circumference), fitness parameter (VO2peak), health parameters (resting heart rate, and systolic and diastolic blood pressure), gross motor coordination, and physical activity level were assessed before and after the intervention. Both groups significantly improved fitness and motor coordination but worsened in some anthropometric parameters (weight, abdominal circumference) after the intervention period. The EO group showed a greater increase in gross motor coordination than the TI group. The results of this study demonstrated that children benefited from a well-structured PE intervention conducted in the natural environment of the beach, improving physical fitness and gross motor coordination. Therefore, planning outdoor PE interventions could be an alternative and safe way to encourage and implement physical activity at school during the particular period of the COVID-19 pandemic. abstract_id: PUBMED:25536540 Benefits of Strength and Skill-based Training During Primary School Physical Education. Physical education (PE) programs are evolving from a traditional skill-centered model to a health-centered model that focuses on time engaged in moderate to vigorous physical activity (MVPA). However, improvements in muscular fitness and fundamental movement skills are prerequisites for continuous participation in MVPA. The purpose of this study was to examine the effects of integrative strength and skill-based training on measures of physical fitness in children during primary school PE. Children from two fourth grade PE classes were cluster randomized into either a fundamental integrative training (FIT) group (n = 20) or a control (CON) group (n = 21). The 8-week FIT program was performed twice per week during the first ∼15 minutes of each PE class and consisted of a circuit of strength and skill-based exercises. All participants were assessed for health- and skill-related fitness before and after the intervention. The outcome variables were analyzed via 2 × 2 repeated measures analysis of variance with post hoc analysis. A significant (p ≤ 0.05) interaction of group by time was observed in FIT participants with improvements in aerobic capacity, push-ups, sit and reach flexibility, and single-leg hop. There were no group by time effects for the sit-up and long jump tests. No injuries were reported.
These findings highlight the potential benefits of integrating both health- and skill-related fitness components into primary school PE, as evidenced by improvements in measures of aerobic capacity and muscular fitness in children. abstract_id: PUBMED:30103471 Preliminary Efficacy and Feasibility of the "Thinking While Moving in English": A Program with Integrated Physical Activity into the Primary School English Lessons. Background: The physical, cognitive, and learning benefits of physical activity for children have already been established. However, many schools are failing to provide children with sufficient activity at school due to a crowded school curriculum. Physical activity interventions that integrate physical activity with learning are a way to enhance physical and cognitive benefits without loss of academic time. This study evaluated the preliminary efficacy and feasibility of "Thinking While Moving in English", a primary school program that integrates physical activity into English lessons. Method: Two classes of Grade 4 students (n = 55, 10-11 years old) were randomly assigned to the intervention (n = 29) or control (n = 26) conditions. The program components consisted of 3 × 40 min physically active academic lessons per week, delivered over a 4-week period. The following measures were taken at baseline and immediate post-intervention: on-task behavior, cognition (inhibition and working memory), and learning outcomes (spelling and grammar). Results: Results revealed significant improvements in on-task behavior and spelling in the intervention group, compared to the control group. There were no observed improvements in cognitive outcomes or grammar. Conclusions: This study provides preliminary evidence for the efficacy of physically active English lessons to enhance children's educational outcomes. abstract_id: PUBMED:26742063 Effects of a Theory-Based Education Program to Prevent Overweightness in Primary School Children. The effectiveness of the "Extra Fit!" (EF!) education program in promoting healthy diet and physical activity to prevent and reduce overweightness among primary school children aged 9 to 11 was evaluated. A randomized controlled design was carried out in 45 primary schools (n = 1112) in the Netherlands, 23 intervention and 22 control schools. The intervention schools received the education program for two successive school years in grades (U.S. system) 4, 5, and 6 (mean 7.6 h during 16 weeks per school per year). The control schools followed their usual curriculum. No positive effects of EF! were found with regard to behavior and anthropometric measures when follow-up measurements were compared to the baseline. However, from baseline to follow-up after one and two school years, the intervention group improved their knowledge score significantly compared to the control group. Moreover, an effect was observed for mean time spent inactive, which increased more in the control group than in the intervention group. In conclusion, limited intervention effects were found for the intervention on knowledge and inactivity. To improve the effectiveness of education programs, we advise focusing on parental involvement, attractive lessons to enlarge the acceptability of the program, and multi-component environmental strategies. Answer: Yes, we can skill and activate children through primary school physical education lessons.
The "Move it Groove it" intervention demonstrated that modifying existing PE lessons can lead to significant improvements in fundamental movement skills (FMS) mastery without adversely affecting children's moderate to vigorous physical activity (MVPA) and vigorous physical activity (VPA) levels. The intervention resulted in substantial improvements in every FMS for both genders, although the increase in MVPA was less than one minute per average 21-minute lesson (PUBMED:12649058). Additionally, the "Thinking While Moving in English" program, which integrates physical activity into English lessons, showed significant improvements in on-task behavior and spelling in the intervention group compared to the control group (PUBMED:30103471). Moreover, the introduction of a mandatory swimming lesson per week in primary school was found to have a positive effect on the health status of children, with those in the swimming group being stronger and having a lower prevalence of diseases of the musculoskeletal system and the eyes compared to those in the control group (PUBMED:29766699). Similarly, attending one swimming lesson per week contributed to better recovery after physical load and improved vegetative regulation, indicating that different forms of physical education lessons can have health-saving effectiveness (PUBMED:29783221). Furthermore, a study using the ADDIE model to design and develop physical education lessons incorporated with a functional training component showed that the program significantly improved students' physical fitness (PUBMED:37809003). Another study found that integrative strength and skill-based training during primary school PE led to improvements in measures of aerobic capacity and muscular fitness in children (PUBMED:25536540). In conclusion, primary school physical education lessons can be effectively designed to both skill and activate children, contributing to their physical, cognitive, and health-related outcomes.
Instruction: Bulimia nervosa in adolescents: a disorder in evolution? Abstracts: abstract_id: PUBMED:18361276 Long-term evolution and complications of eating disorders. The long-term evolution of eating disorders is good in 50% of cases, intermediate in 25% (recovery from the eating disorder but persistent psychological suffering), and poor in 25% of cases, with chronic eating disorders, comorbid anxiety or depressive disorders, and adverse consequences for patients' social lives. Anorexia nervosa has a considerably worse long-term outcome than bulimia nervosa or binge eating disorders. Nevertheless, purging bulimia nervosa is often associated with other impulsive symptoms, such as addictions and suicide attempts. Chronic undernutrition leads to the main long-term medical complications of eating disorders: impaired linear growth in adolescents with anorexia nervosa, infertility, and osteoporosis. These complications need specific medical follow-up, at least once a year, in addition to the psychiatric and psychotherapeutic follow-up. abstract_id: PUBMED:28110674 Evolution of cognitive-behavioral therapy for eating disorders. The evolution of cognitive-behavioral therapy (CBT) for the treatment of bulimic disorders is described in this review. The impacts of successive attempts to enhance CBT, such as the addition of exposure and response prevention, the development of enhanced CBT, and broadening the treatment from bulimia nervosa to binge eating disorder, are considered. In addition to developing advanced forms of CBT, shortening treatment to guided self-help was the first step in broadening access to treatment. The use of technology, such as computer-based therapy and more recently the Internet, promises further broadening of access to self-help and to therapist-guided treatment. Controlled studies in this area are reviewed, and the balance of risks and benefits that accompany the use of technology and lessened therapist input are considered. Looking into the future, more sophisticated forms of treatment delivered as mobile applications ("apps") may lead to more personalized and efficacious treatments for bulimic disorders, thus enhancing the delivery of treatments for eating disorders. abstract_id: PUBMED:33574961 Psychological Meanings of Eating Disorders and Their Association With Symptoms, Motivation Toward Treatment, and Clinical Evolution Among Outpatients. Unlike patients suffering from egodystonic disorders, people with eating disorders sometimes attribute positive meanings to their symptoms, and this attribution process contributes to the maintenance of the disorder. This study aims to explore psychological meanings of eating disorders and their associations with symptoms, motivation toward treatment, and clinical evolution. Eighty-one adults with an eating disorder (anorexia nervosa, n = 46 and bulimia nervosa, n = 35) treated in a day-hospital program were asked, each week over an 8-week period, to identify the psychological meanings they ascribed to their eating disorder. Avoidance was the most frequently identified meaning, followed by mental strength, security, death, confidence, identity, care, and communication. Avoidance was more frequently mentioned by participants with bulimia than by those with anorexia. Security and mental strength were associated with less motivation toward treatment. Death was associated with more depressive and anxious symptoms. An exploratory factor analysis showed that these meanings formed three main dimensions: Avoidance, Intrapsychic, and Relational.
Findings suggest that psychological meanings associated with eating disorders can be assessed and used as a clinical tool to increase treatment acceptability and effectiveness. abstract_id: PUBMED:34574382 Eating Behavior during First-Year College Students, including Eating Disorders-RUVIC-RUNEAT-TCA Project. Protocol of an Observational Multicentric Study. (1) Introduction: Changes in eating behavior and eating disorders are especially common in young people, particularly teenage and college women. The first year of college is a critical period, as students acquire freedoms that can lead to poor eating habits. During this first year, students usually gain weight. The aims of this project are to analyze the risk of developing eating disorders, dietary composition and intake, and the changes in body composition of two groups of college students (independent from the family nucleus or still living within the family) in the first year of college. (2) Material and Methods: Multicentric prospective observational study protocol in which first-year students at the Universidad Europea del Atlántico and Universitat de Vic-Universitat Central de Catalunya voluntarily took part. The students will be divided into two groups, the independent and those residing in the family home, and the evolution of both groups will be compared at the beginning and at the end of the school year by performing anthropometric measurements, tests on lifestyle and eating habits (Test of Adherence to the Mediterranean Diet, MEDAS-14; Emotional Eater Questionnaire, EEQ) and validated questionnaires on eating disorders (Eating Attitude Test, EAT26; Teen Figure Drawing Scales; SCOFF, Eating Behavior Test; Bulimia Investigatory Test Edinburgh, BITE); their intake will be evaluated through 72-h dietary records. (3) Discussion: Determining the risk of developing eating disorders, and characterizing dietary intake, body image perception, and body composition through the first year of college, will be decisive in establishing nutrition education strategies to prevent possible eating disorders in young students. abstract_id: PUBMED:29065413 An Evolutionary Genetic Perspective of Eating Disorders. Eating disorders (ED) including anorexia nervosa (AN), bulimia nervosa (BN), and binge eating disorder (BED) affect up to 5% of the population in Western countries. Risk factors for developing an ED include personality traits, family environment, gender, age, ethnicity, and culture. Despite being moderately to highly heritable, with estimates ranging from 28 to 83%, no genetic risk factors have been conclusively identified. Our objective was to explore evolutionary theories of EDs to provide a new perspective on research into novel biological mechanisms and genetic causes of EDs. We developed a framework that explains the possible interactions between genetic risk and cultural influences in the development of ED. The framework includes three genetic predisposition categories (people with mainly AN restrictive gene variants, people with mainly BED variants, and people with gene variants predisposing to both diseases) and a binary variable of either the presence or absence of pressure to be thin. We propose novel theories to explain the overlapping characteristics of the subtypes of AN (binge/purge and restrictive), BN, and BED.
For instance, mutations/structural gene variants in the same gene causing opposite effects, or mutations in nearby genes resulting in partial disequilibrium for the genes causing AN (restrictive) and BED, may explain the overlap of phenotypes seen in AN (binge/purge). abstract_id: PUBMED:15123482 Bulimia nervosa in adolescents: a disorder in evolution? Background: There are few reports that describe the manifestation of bulimia nervosa (BN) among adolescents. Moreover, none make reference to the comparative clinical manifestation of adolescent BN and adolescent anorexia nervosa. Nor are any reports available of how distinct partial-syndrome BN cases are from those that meet full diagnostic criteria for BN. Objectives: To describe 3 groups of adolescents, those with a full-syndrome eating disorder (BN and anorexia nervosa) or partial-syndrome BN, and to compare these groups along demographic, general psychopathology, and eating disorder variables. Design: The study population included 120 adolescents with eating disorders who were initially seen at The University of Chicago Eating Disorders Program, Chicago, Ill, for treatment. All participants completed an assessment prior to treatment. Measures: Weight and height were obtained from all participants. Participants also completed a baseline demographic questionnaire (eg, menstrual status, ethnicity, family status), the Beck Depression Inventory, the Rosenberg Self-Esteem Scale, and the Eating Disorder Examination. Results: Partial-syndrome BN cases are clinically quite similar to their full-syndrome counterparts. Only objective binge eating episodes and purge frequency distinguished BN and partial-syndrome BN cases. Anorexia nervosa cases, on the other hand, were quite distinct from BN and partial-syndrome BN cases on almost all variables. Conclusion: Early recognition and swift treatment of eating disorders in adolescents, regardless of whether a diagnostic threshold is met, are imperative because they will lead to early intervention, thereby potentially improving eating disorder recovery rates. abstract_id: PUBMED:34309320 Treatment of eating disorders in adolescents: case series. Eating disorders are highly prevalent diseases in adolescence and have an impact on overall health. The objective was to describe the evolution and treatment of eating disorders in adolescents. Adolescents under 18 years of age with at least 6 months of follow-up and treated by an interdisciplinary team were studied. Forty-one adolescents were included: 23 presented with anorexia nervosa, 9 with unspecified eating disorders, 7 with bulimia nervosa, and 2 with binge eating disorders. Thirty-five percent of patients with anorexia nervosa required hospitalization for complications of malnutrition. Sixty-nine percent of the patients with anorexia nervosa, 57% with bulimia nervosa, and 78% with unspecified eating disorders had total or partial remission, and there were no deceased patients. The majority presented a favorable evolution with the treatment.
Results: BN/AN+ patients retained some traits of their past condition: lower body weight, higher perfectionism, and higher reward dependence. They also had more severe symptoms, longer evolution time, and worse response to treatment. A history of restricting AN was associated to higher age, longer evolution time, higher impulsivity, and greater tendency to use diet pills and diuretics. A history of binge-purging AN was associated to more severe vomiting, lower body dissatisfaction and less depressive symptoms, as well as better outcome. Discussion: The influence of the AN subtype in BN/AN+ patients is worthy of further study as it might have clinical implications. abstract_id: PUBMED:9550880 Diagnostic criteria for eating disorders: current concerns and future directions. This article is a critical review of the evolution of diagnostic criteria for anorexia nervosa, bulimia nervosa, and binge eating disorder. The shortcomings of the current diagnostic criteria for all three disorders are addressed, as are the primary nosological challenges facing the field. abstract_id: PUBMED:28683881 Influence of the internet on eating disorders in teenagers Thanks to new technologies, young people can keep themselves informed, communicate, socialise and perform a host of other activities more easily, online. Adolescence is also a period of vulnerability for the development of eating disorders. This expansion of digital media in the daily lives of adolescents raises questions regarding the long-term implications and the possible impacts of the internet on the evolution and treatment of people with eating disorders. Answer: Bulimia nervosa (BN) in adolescents is indeed a disorder in evolution. Adolescents with BN may present with clinical manifestations that are similar to their adult counterparts, but there are unique aspects to consider in this younger population. Partial-syndrome BN cases in adolescents are clinically quite similar to full-syndrome BN cases, with the primary distinctions being the frequency of objective binge eating episodes and purging behaviors. Anorexia nervosa (AN) cases, however, are quite distinct from both BN and partial-syndrome BN cases on almost all variables (PUBMED:15123482). The clinical subtype of AN has implications for the clinical status and treatment response in BN patients with a history of AN (BN/AN+). BN/AN+ patients retain some traits of their past condition, such as lower body weight, higher perfectionism, and higher reward dependence. They also exhibit more severe symptoms, longer evolution time, and a worse response to treatment. The subtype of AN influences the bulimic syndrome, with a history of restricting AN being associated with higher age, longer evolution time, higher impulsivity, and greater use of diet pills and diuretics. Conversely, a history of binge-purging AN is associated with more severe vomiting, lower body dissatisfaction, less depressive symptoms, and a better outcome (PUBMED:20143321). The evolution of BN in adolescents is also influenced by the increasing presence of the internet and digital media in their lives, which raises questions about the long-term implications for the development and treatment of eating disorders (PUBMED:28683881). Overall, early recognition and swift treatment of eating disorders in adolescents are crucial, regardless of whether a diagnostic threshold is met, as this can lead to early intervention and potentially improve recovery rates (PUBMED:15123482). 
The treatment of eating disorders in adolescents often involves an interdisciplinary approach, and the majority show a favorable evolution with treatment (PUBMED:34309320).
Instruction: Can experienced CT radiologists use technique parameters to predict excessive patient dose? Abstracts: abstract_id: PUBMED:21458767 Can experienced CT radiologists use technique parameters to predict excessive patient dose? An analysis of the ACR CT accreditation database. Purpose: The aim of this study was to determine, for 3 basic clinical examinations, whether blinded, experienced CT radiologists participating in the ACR's CT Accreditation Program could use scan parameters such as tube current-time product (mAs), tube voltage (kVp), and pitch to predict scanner output settings, expressed as weighted CT dose index (CTDIw) and volume CT dose index (CTDIvol), exceeding CTDIw diagnostic reference levels (DRLs) set by the ACR in 2002 and CTDIvol DRLs adopted by the ACR in 2008. Methods: CT sites with 829 scanners submitted examinations to the ACR between 2002 and 2004, yielding 518 eligible examinations for analysis (138 adult head CT scans, 333 adult abdominal CT scans, and 47 pediatric abdominal CT scans). The sites' measured CTDIw values for each type of examination were compared with the ACR's CTDIw DRLs in effect from 2002 to 2004 and compared with comments regarding excessively high mAs or kVp made by radiologist clinical reviewers to determine if excessively high-dose index measurements could be predicted. The same analysis was repeated using CTDIvol DRLs adopted by the ACR in 2008 and compared with excessively high mAs or kVp and excessively low-pitch comments. Results: Excessively high mAs or kVp comments yielded sensitivity of only 21.2% in predicting examinations above the CTDIw DRLs, with specificity of 87.6%. Using the 2008 CTDIvol DRLs, the corresponding sensitivity was 13.1% and specificity was 86.2%. Significance was not achieved for use of clinical parameters in predicting either the CTDIw or CTDIvol. Conclusion: Experienced CT radiologists cannot reliably use scan parameters to predict examinations that exceed CTDIw or CTDIvol DRLs. abstract_id: PUBMED:24183553 CT scan parameters and radiation dose: practical advice for radiologists. Although there has been increasing recognition of the importance of reducing radiation dose when performing multidetector CT examinations, the increasing complexity of CT scanner technology, as well as confusion about the importance of many different CT scan parameters, has served as an impediment to radiologists seeking to create lower dose protocols. The authors seek to guide radiologists through the manipulation of 8 fundamental CT scan parameters that can be altered or optimized to reduce patient radiation dose, including detector configuration, tube current, tube potential, reconstruction algorithm, patient positioning, scan range, reconstructed slice thickness, and pitch. Although there is always an inherent trade-off between image quality or noise and patient radiation dose, in many cases, a reasoned manipulation of these 8 parameters can allow the safer imaging of patients (with lower dose) while preserving diagnostic image quality. abstract_id: PUBMED:22247630 CT radiation dose optimization and estimation: an update for radiologists. In keeping with the increasing utilization of CT examinations, growing concern about the radiation hazards of these examinations has been raised. In this regard, CT radiation dose optimization has been given a great deal of attention by radiologists, referring physicians, technologists, and physicists.
Dose-saving strategies are continuously evolving in terms of imaging techniques as well as dose management. Consequently, regular updates of this issue are necessary especially for radiologists who play a pivotal role in this activity. This review article will provide an update on how we can optimize CT dose in order to maximize the benefit-to-risk ratio of this clinically useful diagnostic imaging method. abstract_id: PUBMED:27612627 Single-Phase Versus Multiphase CT Angiography in Middle Cerebral Artery Clot Detection-Benefits for Less Experienced Radiologists and Neurologists. Objectives: CT angiography (CTA) is recommended as a standard of stroke imaging. We investigated accuracy and precision of standard or single-phase CTA as compared with novel technique or multiphase CTA in clot detection in the middle cerebral artery. Methods: Twenty single-phase CTA and twenty multiphase CTA with prevailing M2 occlusion were assessed by 10 radiologists and 10 neurologists blinded to clinical information (7 less experienced and 3 experienced). Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated as compared with reading by two seniors. Reliability was calculated using Krippendorff's alpha (K-alpha). Results: Sensitivity, specificity, PPV, and NPV of single-phase CTA compared with multiphase CTA for M2 clot presence were, respectively, .86, .75, .90, and .67 versus .88, .82, .92, and .72. For secondary or distal clots, sensitivity, specificity, PPV, and NPV of single-phase CTA compared with multiphase CTA were .41, .83, .50, and .78 versus .65, .77, .71, and .67. Agreement increased significantly in favor of multiphase CTA for detection of primary clots from moderate (.43) to substantial (.65) in less experienced radiologists and from slight (.10) to moderate (.30) in less experienced neurologists. Agreement significantly increased for distal or secondary clot detection in favor of multiphase CTA from fair (.24) to moderate (.49) in experienced radiologists and from slight (.12) to moderate (.46) in experienced neurologists. Conclusions: Multiphase CTA is a reliable imaging tool in M2 clot detection and might represent a beneficial imaging tool in clot detection for less experienced physicians. abstract_id: PUBMED:38302037 A Vision for Global CT Radiation Dose Optimization. The topic of CT radiation dose management is receiving renewed attention since the recent approval by CMS for new CT dose measures. Widespread variation in CT dose persists in practices across the world, suggesting that current dose optimization techniques are lacking. The author outlines a proposed strategy for facilitating global CT radiation dose optimization. CT radiation dose optimization can be defined as the routine use of CT scan parameters that consistently produce images just above the minimum threshold of acceptable image quality for a given clinical indication, accounting for relevant patient characteristics, using the most dose-efficient techniques available on the scanner. To accomplish this, an image quality-based target dose must be established for every protocol; for nonhead CT applications, these target dose values must be expressed as a function of patient size. As variation in outcomes is reduced, the dose targets can be decreased to more closely approximate the minimum image quality threshold. Maintaining CT radiation dose optimization requires a process control program, including measurement, evaluation, feedback, and control. 
This is best accomplished by local teams made up of radiologists, medical physicists, and technologists, supported with protected time and needed tools, including analytics and protocol management applications. Other stakeholders critical to facilitating CT radiation dose management include researchers, funding agencies, industry, regulators, accreditors, payers, and the ACR. Analogous coordinated approaches have transformed quality in other industries and can be the mechanism for achieving the universal goal of CT radiation dose optimization. abstract_id: PUBMED:32064284 Size specific dose estimate (SSDE) for estimating patient dose from CT used in myocardial perfusion SPECT/CT. Objectives: Size specific dose estimate (SSDE) is a new parameter that includes patient size factor in its calculation. Recent studies have produced mixed results on the utility of SSDE, especially when automatic exposure control (AEC) was used. The objective of the study was to find out if there is a relationship between patient size and each of the parameters, SSDE and CTDIvol, when AEC is used. Methods: CT data of 111 consecutively selected patients were included for analysis. CTDIvol values of the CT scans were extracted for each patient. Effective diameter of each patient was calculated as the geometric mean of the anteroposterior and lateral diameters measured on axial CT images. Corresponding conversion factors for effective diameters were obtained from American Association of Physicists in Medicine (AAPM) report 204. SSDE was obtained as the product of CTDIvol and conversion factor values. A linear regression model was used to evaluate the relationship between patient size and the parameters SSDE and CTDIvol. Results: Mean weight was 62 (SD 11.5) kg, with a range of 34-103 kg. Median CTDIvol (mGy) on AEC mode was 7.27 (interquartile range 7.27-7.65) and mean effective diameter was 26.2 (SD 2.4) cm. Mean SSDE (mGy) was 10.6 (SD 0.84). A good positive correlation was obtained between CTDIvol and effective diameter (r=0.536; p<0.0005). A strong inverse correlation was noted between SSDE and effective diameter (r=-0.777; p<0.0005). The linear regression model relating CTDIvol to effective diameter showed a slope of 0.314 mGy/cm (R=0.561; R2=0.314; P<0.0005), whereas the slope relating effective diameter to SSDE was -0.23 mGy/cm (R=0.676; R2=0.457; P<0.0005). Conclusion: The study shows that CTDIvol and SSDE both vary with patient size, but divergently. SSDE is a better estimate of patient radiation dose from the CT component of MPI SPECT/CT than CTDIvol in systems that use automated exposure control. abstract_id: PUBMED:21614279 Dose management in CT facility. Computed Tomography (CT) examinations have rapidly increased in number over the last few years due to recent advances such as the spiral, multidetector-row, CT fluoroscopy and Positron Emission Tomography (PET)-CT technology. This has resulted in a large increase in collective radiation dose as reported by many international organisations. It is also stated that frequently, image quality in CT exceeds the level required for confident diagnosis. This inevitably results in patient radiation doses that are higher than actually required, as also stressed by the US Food and Drug Administration (FDA) regarding the CT exposure of paediatric and small adult patients. However, the wide range in exposure parameters reported, as well as the different CT applications, reveals the difficulty in standardising CT procedures.
The purpose of this paper is to review the basic CT principles, outline the recent technological advances and their impact on patient radiation dose and finally suggest methods of radiation dose optimisation. abstract_id: PUBMED:35798962 Accurate staging of non-metastatic colon cancer with CT: the importance of training and practice for experienced radiologists and analysis of incorrectly staged cases. Purpose: To investigate whether locoregional staging of colon cancer by experienced radiologists can be improved by training and feedback to minimize the risk of over-staging in the context of patient selection for neoadjuvant therapy and to identify potential pitfalls of CT staging by characterizing pathologic traits of tumors that remain challenging for radiologists. Methods: Forty-five cases of stage I-III colon cancer were included in this retrospective study. Five experienced radiologists evaluated the CTs: 5 baseline scans followed by 4 sequential batches of 10 scans. All radiologists were trained after baseline scoring and 2 radiologists received feedback. The learning curve, diagnostic performance, reader confidence, and reading time were evaluated with pathologic staging as reference. Pathology reports and H&E slides of challenging cases were reviewed to identify potential pitfalls. Results: Diagnostic performance in distinguishing T1-2 vs. T3-4 improved significantly after training and with increasing number of reviewed cases. Inaccurate staging was more frequently related to under-staging than to over-staging. Risk of over-staging was minimized to 7% in batches 3-4. N-staging remained unreliable with an overall accuracy of 61%. Pathologic review identified two tumor characteristics causing under-staging for T-stage in 5/7 cases: (1) very limited invasive part beyond the muscularis propria and (2) mucinous composition of the invading part. Conclusion: The high accuracy and specificity of T-staging reached in our study indicate that sufficient training and practice of experienced radiologists can ensure high validity for CT staging in colon cancer to safely use neoadjuvant therapy without significant risk of over-treatment, while N-staging remained unreliable. abstract_id: PUBMED:34914007 Image quality and radiologists' subjective acceptance using model-based iterative and deep learning reconstructions as adjuncts to ultrahigh-resolution CT in low-dose contrast-enhanced abdominopelvic CT: phantom and clinical pilot studies. Purpose: In contrast-enhanced abdominopelvic CT (CE-APCT) for oncologic follow-up, ultrahigh-resolution CT (UHRCT) may improve depiction of fine lesions, and low-dose scans are desirable for minimizing the potential adverse effects of ionizing radiation. We compared image quality and radiologists' acceptance of model-based iterative (MBIR) and deep learning (DLR) reconstructions of low-dose CE-APCT by UHRCT. Methods: Using our high-resolution (matrix size: 1024) and low-dose (tube voltage 100 kV; noise index: 20-40 HU) protocol, we scanned phantoms to compare the modulation transfer function and noise power spectrum between MBIR and DLR and assessed findings in 36 consecutive patients who underwent CE-APCT (noise index: 35 HU; mean CTDIvol: 4.2 ± 1.6 mGy) by UHRCT.
We used a paired t-test to compare objective noise and contrast-to-noise ratio (CNR) and the Wilcoxon signed-rank test to compare radiologists' subjective acceptance regarding noise, image texture and appearance, and diagnostic confidence between MBIR and DLR using our routine protocol (matrix size: 512; tube voltage: 120 kV; noise index: 15 HU) for reference. Results: Phantom studies demonstrated higher spatial resolution and lower low-frequency noise by DLR than MBIR at equal doses. Clinical studies indicated significantly worse objective noise, CNR, and subjective noise by DLR than MBIR, but other subjective characteristics were better (P < 0.001 for all). Compared with the routine protocol, subjective noise was similar or better by DLR, and other subjective characteristics were similar or worse by MBIR. Conclusion: Image quality, except regarding noise characteristics, and acceptance by radiologists were better by DLR than MBIR in low-dose CE-APCT by UHRCT. abstract_id: PUBMED:32698332 Radiologists' Knowledge and Attitudes towards CT Radiation Dose and Exposure in Saudi Arabia-A Survey Study. Computed tomography (CT) is a key imaging technique in diagnostic radiology, providing highly sensitive and specific information. While its use has increased dramatically in recent years, the quantity and associated risks of radiation from CT scans present major challenges, particularly in paediatrics. The fundamental principles of radiation protection require that radiation quantities be as low as reasonably achievable and CT use must be justified, particularly for paediatric patients. CT radiation knowledge is a key factor in optimising and minimising radiation risk. The objective of this study was to analyse knowledge level, expertise, and competency regarding CT radiation dose and its hazards in paediatrics among radiologists in Saudi Arabian hospitals. A self-reported, multiple-choice questionnaire assessed the attitudes and opinions of radiologists involved in imaging studies using ionising radiation. Among the total respondents, 65% ± 13.5% had a good comprehension of the dangers of carcinogenicity to the patient resulting from CT scans, with 80% presuming that cancer risks were elevated. However, only 48.5%, 56.5%, and 65% of the respondents were aware of specific radiation risks in head, chest, and abdominal paediatric examinations, respectively. Regular, frequent, and specific training courses are suggested to improve the fundamental knowledge of CT radiation among radiologists and other physicians. Answer: No, experienced CT radiologists cannot reliably use scan parameters to predict examinations that exceed weighted CT dose index (CTDIw) or volume CT dose index (CTDIvol) diagnostic reference levels (DRLs). A study analyzing the ACR CT accreditation database found that comments made by radiologist clinical reviewers regarding excessively high tube current-time product (mAs) or tube voltage (kVp) yielded a sensitivity of only 21.2% in predicting examinations above the CTDIw DRLs, with a specificity of 87.6%. When using the 2008 CTDIvol DRLs, the sensitivity was even lower at 13.1%, with a specificity of 86.2%. The study concluded that clinical parameters are not significant predictors of excessive dose index measurements (PUBMED:21458767).
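To make the sensitivity and specificity quoted in this answer concrete, the sketch below shows how the two values are computed when reviewer comments are treated as a screening test for examinations exceeding a diagnostic reference level (DRL). This is a minimal illustration, not code from the study; the 2x2 counts are hypothetical reconstructions chosen only to approximate the reported 21.2% and 87.6%.

```python
# Hypothetical 2x2 table: reviewer flagged excessive mAs/kVp (yes/no)
# versus measured CTDIw above the DRL (yes/no). Counts are illustrative.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

tp, fn = 21, 78   # above-DRL examinations that were flagged / missed
tn, fp = 368, 52  # at-or-below-DRL examinations correctly passed / wrongly flagged

sens, spec = sensitivity_specificity(tp, fn, tn, fp)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

The low sensitivity is the crux of the finding: most above-DRL examinations carried unremarkable technique parameters, so scanner dose output has to be measured directly rather than inferred from mAs, kVp, or pitch.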
Instruction: Do ethnic disparities extend to subspecialty diabetes care? Abstracts: abstract_id: PUBMED:23337143 Do ethnic disparities extend to subspecialty diabetes care? Objective: It is unclear if disparities described in diabetes primary care extend to subspecialty diabetes care. This retrospective observational study examined disparities in diabetes outcomes in a subspecialty practice by assessing glycemic improvement in type 2 diabetes patients during the first year of enrollment. Methods: Electronic data were gathered on 3,945 subjects. The outcome was the proportion of white and minority (Asian, black, and Hispanic) subjects achieving a hemoglobin A1C (A1C) level of ≤7% after the first year of care. Logistic regression was used to identify factors associated with odds of achieving A1C ≤7%. Results: Minority patients had greater diabetes duration, more social disadvantages and missed appointments, and worse control at presentation than whites. The proportion of patients reaching target A1C rose from 37 to 52% among white patients and from 28 to 40% among minority patients. Significant differences between whites and minorities in the rates of patients reaching A1C ≤7% were found only among those with higher initial A1C (iA1C) levels (32% vs. 20.9%; P = .002 in third iA1C quartile, and 28.2% vs. 17.9%; P = .0003 in fourth iA1C quartile). The interaction between race/ethnicity and the top two iA1C quartiles remained significant in the fully adjusted model. Conclusion: Reaching an A1C level of ≤7% depends strongly upon the glycemic level at initial presentation to specialty care, not race. However, minority patients with the highest baseline A1C levels do not improve to the same degree as white patients, and therefore should be targeted for more intensive diabetes care management. abstract_id: PUBMED:33223802 Disparities in Cardiovascular Care and Outcomes for Women From Racial/Ethnic Minority Backgrounds. Purpose Of Review: Racial, ethnic, and gender disparities in cardiovascular care are well-documented. This review aims to highlight the disparities and impact on a group particularly vulnerable to disparities, women from racial/ethnic minority backgrounds. Recent Findings: Women from racial/ethnic minority backgrounds remain underrepresented in major cardiovascular trials, limiting the generalizability of cardiovascular research to this population. Certain cardiovascular risk factors are more prevalent in women from racial/ethnic minority backgrounds, including traditional risk factors such as hypertension, obesity, and diabetes. Female-specific risk factors including gestational diabetes and preeclampsia as well as non-traditional psychosocial risk factors like depressive and anxiety disorders, increased child care, and familial and home care responsibility have been shown to increase risk for cardiovascular disease events in women more so than in men, and disproportionately affect women from racial/ethnic minority backgrounds. Despite this, minimal interventions to address differential risk have been proposed. Furthermore, disparities in treatment and outcomes that disadvantage minority women persist. The limited improvement in outcomes over time, especially among non-Hispanic Black women, is an area that requires further research and active interventions. 
Summary: Understanding the lack of representation in cardiovascular trials, differential cardiovascular risk, and disparities in treatment and outcomes among women from racial/ethnic minority backgrounds highlights opportunities for improving cardiovascular care among this particularly vulnerable population. abstract_id: PUBMED:34634249 Racial and Ethnic Disparities in Health Care and Health Outcomes for Pregnant Women With Diabetes. This article summarizes the current literature on racial and ethnic differences among women with diabetes in pregnancy. The PubMed, Scopus, CINAHL, and Embase databases were searched for original qualitative or quantitative studies published in English from January 1, 2009, to May 31, 2020. Consensus statements were excluded. Results of this synthesis indicate that racial and ethnic differences exist among pregnant women with diabetes, including social determinants of health, disparities in maternity care and perinatal care, and maternal and neonatal health outcomes. Health care providers should implement tailored interventions that specifically target racial and ethnic disparities in maternal and neonatal health to promote health equity in pregnant women with diabetes and their offspring, including later in life. abstract_id: PUBMED:28583968 Racial And Ethnic Disparities Persist At Veterans Health Administration Patient-Centered Medical Homes. Patient-centered medical homes are widely promoted as a primary care delivery model that achieves better patient outcomes. It is unknown if their benefits extend equally to all racial/ethnic groups. In 2010 the Veterans Health Administration, part of the Department of Veterans Affairs (VA), began implementing patient-centered medical homes nationwide. In 2009 significant disparities in hypertension or diabetes control were present for most racial/ethnic groups, compared with whites. In 2014 hypertension disparities were similar for blacks, had become smaller but remained significant for Hispanics, and were no longer significant for multiracial veterans, whereas disparities had become significant for American Indians/Alaska Natives and Native Hawaiians/other Pacific Islanders. By contrast, in 2014 diabetes disparities were similar for American Indians/Alaska Natives, blacks, and Hispanics, and were no longer significant for Native Hawaiians/other Pacific Islanders. We found that the modest benefits of the VA's implementation of patient-centered medical homes were offset by competing multifactorial external, health system, provider, and patient factors, such as increased patient volume. To promote health equity, health care innovations such as patient-centered medical homes should incorporate tailored strategies that account for determinants of racial/ethnic variations. Evaluations of patient-centered medical homes should monitor outcomes for racial/ethnic groups. abstract_id: PUBMED:19914755 Factors influencing the effectiveness of interventions to reduce racial and ethnic disparities in health care. Reducing racial and ethnic disparities in health care has become an important policy goal in the United States and other countries, but evidence to inform interventions to address disparities is limited. The objective of this study was to identify important dimensions of interventions to reduce health care disparities. We used qualitative research methods to examine interventions aimed at improving diabetes and/or cardiovascular care for patients from racial and ethnic minority groups within five health care organizations. 
We interviewed 36 key informants and conducted a thematic analysis to identify important features of these interventions. Key elements of interventions included two contextual factors (external accountability and alignment of incentives to reduce disparities) and four factors related to the organization or intervention itself (organizational commitment, population health focus, use of data to inform solutions, and a comprehensive approach to quality). Consideration of these elements could improve the design, implementation, and evaluation of future interventions to address racial and ethnic disparities in health care. abstract_id: PUBMED:33719622 Disparities in Telemedicine Use for Subspecialty Diabetes Care During COVID-19 Shelter-In-Place Orders. Background: During the COVID-19 pandemic, telemedicine use rapidly and dramatically increased for management of diabetes mellitus. It is unknown whether access to telemedicine care has been equitable during this time. This study aimed to identify patient-level factors associated with adoption of telemedicine for subspecialty diabetes care during the pandemic. Methods: We conducted an explanatory sequential mixed-methods study using data from a single academic medical center. We used multivariate logistic regression to explore associations between telemedicine use and demographic factors for patients receiving subspecialty diabetes care between March 19 and June 30, 2020. We then surveyed a sample of patients who received in-person care to understand why these patients did not use telemedicine. Results: Among 1292 patients who received subspecialty diabetes care during the study period, those over age 65 were less likely to use telemedicine (OR: 0.34, 95% CI: 0.22-0.52, P < .001), as were patients with a primary language other than English (OR: 0.53, 95% CI: 0.31-0.91, P = .02), and patients with public insurance (OR: 0.64, 95% CI: 0.49-0.84, P = .001). Perceived quality of care and technological barriers were the most common reasons cited for choosing in-person care during the pandemic. Conclusions: Our findings suggest that, amidst the COVID-19 pandemic, there have been disparities in telemedicine use by age, language, and insurance for patients with diabetes mellitus. We anticipate telemedicine will continue to be an important care modality for chronic conditions in the years ahead. Significant work must therefore be done to ensure that telemedicine services do not introduce or widen population health disparities. abstract_id: PUBMED:33104466 Interventions Targeting Racial/Ethnic Disparities in Stroke Prevention and Treatment. Systemic racism is a public health crisis. Systemic racism and racial/ethnic injustice produce racial/ethnic disparities in health care and health. Substantial racial/ethnic disparities in stroke care and health exist and result predominantly from unequal treatment. This special report aims to summarize selected interventions to reduce racial/ethnic disparities in stroke prevention and treatment. It reviews the social determinants of health and the determinants of racial/ethnic disparities in care. It provides a focused summary of selected interventions aimed at reducing stroke risk factors, increasing awareness of stroke symptoms, and improving access to care for stroke because these interventions hold the promise of reducing racial/ethnic disparities in stroke death rates. It also discusses knowledge gaps and future directions. 
abstract_id: PUBMED:27268472 Racial and Ethnic Health Disparities Among People With Intellectual and Developmental Disabilities. Racial and ethnic health disparities are a pervasive public health problem. Emerging research finds similar health disparities among people with intellectual and developmental disabilities (IDD) compared to nondisabled adults. However, few studies have examined racial and ethnic health disparities among adults with IDD. Using national data, we examined racial and ethnic disparities in health status among adults with IDD, and investigated differences in health status between adults with IDD and nondisabled adults within each racial and ethnic group. We found that Latino and Black adults with IDD had worse health outcomes compared to White adults with IDD, and Latino and Black adults with IDD had worse health outcomes than nondisabled adults from the same racial and ethnic group. abstract_id: PUBMED:37989347 Influence of depression on racial and ethnic disparities in diabetes control. Introduction: We tested the hypotheses that depression diagnoses influence racial and ethnic disparities in diabetes control and that mental health treatment moderates that relationship. Research Design And Methods: We created a national cohort of Veterans Health Administration (VHA) patients with diabetes using administrative data (n=815 067). Cross-sectional linear mixed effects regression models tested the hypothesized indirect effect of depression on poor diabetes control (glycosylated hemoglobin >9%) and tested whether mental health treatment (visits or antidepressant prescriptions) moderated the effect of depression (α=0.05). Results represent the percentage point difference in probability of poor diabetes control. Covariates included primary care visits, sex, age, and VHA facility. Results: Overall, 20% of the cohort had poor diabetes control and 22% had depression. Depression was more common among racial and ethnic minoritized groups. The probability of poor diabetes control was higher for most minoritized groups compared with White patients (largest difference: American Indian or Alaska Native patients, 5.2% (95% CI 4.3%, 6.0%)). The absolute value of the proportion of racial and ethnic disparities accounted for by depression ranged from 0.2% (for Hispanic patients) to 2.0% (for Asian patients), with similar effects when accounting for the moderating effect of mental health treatment. Patients with depression and 5+ mental health visits had a lower probability of poor diabetes control compared with those with fewer visits, regardless of antidepressant prescription status. Conclusions: The influence of depression on disparities in diabetes control was small. High rates of depression among people with diabetes, especially among those from racial and ethnic minoritized groups, highlight a need to ensure equitable and coordinated care for both conditions, as the effects of mental health treatment may extend to the control of physical health conditions. abstract_id: PUBMED:25523494 Assessing Diabetes Care Disparities with Ambulatory Care Quality Measures. Objective: To identify and describe racial/ethnic disparities in overall diabetes management. Data Source/study Setting: Electronic health record data from calendar year 2010 were obtained from all primary care clinics at one large health system in Minnesota (n = 22,633). 
Study Design: We used multivariate logistic regression to estimate the odds of achieving the following diabetes management goals: A1C <8 percent, LDL cholesterol <100 mg/dl, blood pressure <140/90 mmHg, tobacco-free, and daily aspirin. Principal Findings: Blacks and American Indians have higher odds of not achieving all goals compared to whites. Disparities in specific goals were also found. Conclusions: Although this health system has above-average diabetes care quality, significant disparities by race/ethnicity were identified. This underscores the importance of stratifying quality measures to improve care and outcomes for all. Answer: Yes, ethnic disparities do extend to subspecialty diabetes care. A retrospective observational study examining disparities in diabetes outcomes in a subspecialty practice found that minority patients had greater diabetes duration, more social disadvantages and missed appointments, and worse control at presentation than white patients. Although the proportion of patients reaching target A1C levels increased for both white and minority patients during the first year of care, significant differences between whites and minorities in the rates of patients reaching A1C ≤7% were found, especially among those with higher initial A1C levels. The study concluded that minority patients with the highest baseline A1C levels do not improve to the same degree as white patients, indicating a need for more intensive diabetes care management for these groups (PUBMED:23337143). Furthermore, disparities in treatment and outcomes that disadvantage minority women persist in cardiovascular care, which is closely related to diabetes management due to the increased cardiovascular risk associated with diabetes. Women from racial/ethnic minority backgrounds are underrepresented in major cardiovascular trials, have more prevalent cardiovascular risk factors, and experience minimal interventions to address differential risk (PUBMED:33223802). Racial and ethnic disparities in health care and health outcomes are also evident among pregnant women with diabetes, with differences in social determinants of health, disparities in maternity and perinatal care, and maternal and neonatal health outcomes (PUBMED:34634249). Additionally, racial and ethnic disparities persist at Veterans Health Administration patient-centered medical homes, with disparities in hypertension or diabetes control present for most racial/ethnic groups compared with whites (PUBMED:28583968). The effectiveness of interventions to reduce racial and ethnic disparities in health care is influenced by factors such as external accountability, alignment of incentives, organizational commitment, population health focus, use of data to inform solutions, and a comprehensive approach to quality (PUBMED:19914755). During the COVID-19 pandemic, disparities in telemedicine use for subspecialty diabetes care were observed, with older patients, those with a primary language other than English, and patients with public insurance being less likely to use telemedicine (PUBMED:33719622). In summary, ethnic disparities are present in subspecialty diabetes care, and there is a need for targeted interventions and strategies to address these disparities and promote health equity (PUBMED:23337143; PUBMED:33223802; PUBMED:34634249; PUBMED:28583968; PUBMED:19914755; PUBMED:33719622).
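Several abstracts in this entry summarize disparities as odds ratios from logistic regression, for example the OR of 0.34 (95% CI 0.22-0.52) for telemedicine use among patients over 65 in PUBMED:33719622. As a minimal sketch, assuming purely hypothetical cell counts chosen only to land near that estimate, an odds ratio and its Wald 95% confidence interval can be computed from a 2x2 table as follows:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a/b = exposed with/without outcome, c/d = unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical counts: telemedicine users / in-person visits among patients
# over age 65 (a, b) versus age 65 and under (c, d).
or_, lo, hi = odds_ratio_ci(60, 140, 420, 330)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The published analyses additionally adjust for covariates, which requires a full logistic regression model rather than this single 2x2 table.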
Instruction: Is neuromyelitis optica associated with human leukocyte antigen? Abstracts: abstract_id: PUBMED:19299434 Is neuromyelitis optica associated with human leukocyte antigen? Background: To establish whether or not multiple sclerosis (MS) and neuromyelitis optica (NMO) are different pathological entities, we wondered whether MS patients and NMO patients share the same pattern of human leukocyte antigen (HLA) predisposition. Objective: To study a putative association between susceptibility to NMO and HLA class I or class II loci in Caucasians. Methods: A total of 39 unrelated Caucasian patients with NMO and six patients at a high risk of converting to NMO were studied. DNA genotyping of HLA class I and class II loci was performed and allelic frequencies were reported at a high-resolution level. A case-control study was carried out comparing the allelic distribution in the NMO patients with that of a French Caucasian MS group and a French Caucasian healthy group. Results: The frequencies of HLA-DQA1, DQB1, and HLA-DRB1 DR2 alleles in the NMO group were intermediate between the healthy control group and the MS group. The DPB1*0501 allele was not increased in the NMO group compared with the healthy control group. The distribution of HLA-DRB1 alleles distinguished NMO-IgG-positive patients from healthy controls (P = 0.01). NMO-IgG-negative patients presented an HLA II pattern closer to that of the MS group (P = 0.01). Conclusion: In contrast to the reported results in Asian opticospinal MS, we found no association between the DPB1*0501 allele and NMO in our Caucasian patients. Moreover, we suggest that NMO-IgG-positive patients could represent a distinct NMO group in terms of their genetic susceptibility. abstract_id: PUBMED:22884298 HLA-DPB1*0201 is associated with susceptibility to atopic myelitis in Japanese. To determine the relationship between susceptibility to atopic myelitis (AM) and polymorphisms of the human leukocyte antigen (HLA)-DPB1 and -DRB1 alleles, we compared each phenotype frequency between 55 AM patients and 367 unrelated healthy controls in Japan. The HLA-DPB1*0201 allele was significantly more frequent in AM patients than in healthy controls (54.5% vs. 31.9%, corrected P value=0.0150, odds ratio=2.564, 95% confidence interval=1.444-4.554). Our result suggests that the immunogenetic background of AM differs from that of other CNS autoimmune diseases, such as multiple sclerosis and neuromyelitis optica, which show distinct HLA class II associations. abstract_id: PUBMED:36268024 Rare variants and HLA haplotypes associated in patients with neuromyelitis optica spectrum disorders. Neuromyelitis optica spectrum disorders (NMOSD) are rare, debilitating autoimmune diseases of the central nervous system. Many NMOSD patients have antibodies to Aquaporin-4 (AQP4). Prior studies show associations of NMOSD with individual Human Leukocyte Antigen (HLA) alleles and with mutations in the complement pathway and potassium channels. HLA allele associations with NMOSD are inconsistent between populations, suggesting complex relationships between the identified alleles and risk of disease. We used a retrospective case-control approach to identify contributing genetic variants in patients who met the diagnostic criteria for NMOSD and their unaffected family members. Potentially deleterious variants identified in NMOSD patients were compared to members of their families who do not have the disease and to existing databases of human genetic variation.
HLA sequences from patients from Belgrade, Serbia, were compared to the frequency of HLA haplotypes in the general population in Belgrade. We analyzed exome sequencing on 40 NMOSD patients and identified rare inherited variants in the complement pathway and potassium channel genes. Haplotype analysis further detected two haplotypes, HLA-A*01, B*08, DRB1*03 and HLA-A*01, B*08, C*07, DRB1*03, DQB1*02, which were more prevalent in NMOSD patients than in unaffected individuals. In silico modeling indicates that HLA molecules within these haplotypes are predicted to bind AQP4 at several sites, potentially contributing to the development of autoimmunity. Our results point to possible autoimmune and neurodegenerative mechanisms that cause NMOSD, and can be used to investigate potential NMOSD drug targets. abstract_id: PUBMED:9278633 Multiple sclerosis: comparison of the human T-cell response to S100 beta and myelin basic protein reveals parallels to rat experimental autoimmune panencephalitis. The adoptive transfer of autoreactive S100 beta-specific T cells induces experimental autoimmune panencephalomyelitis and uveoretinitis in the Lewis rat, mimicking the distribution of lesions seen in a subset of patients with multiple sclerosis. We studied the frequency and functional properties of the human T-cell response to S100 beta in eight patients (two relapsing-remitting multiple sclerosis, one chronic-progressive multiple sclerosis, two with multiple sclerosis and uveitis, two neuromyelitis optica, one panuveitis) and in seven healthy individuals, using bovine S100 beta for T-cell stimulation. Both in patients and controls, the frequency of S100 beta-specific T-cell responses was half of that obtained for myelin basic protein (MBP), and only 10% of that obtained using purified protein derivative (PPD). The stimulation indices obtained in response to S100 beta were also less than half those obtained with either MBP or PPD. However, four long-term S100 beta-specific T-cell lines were established and studied in more detail. The four T-cell lines all exhibited a CD4+, CD8-, T-cell receptor alpha beta + surface phenotype and secreted tumour necrosis factor-alpha, interferon-gamma, interleukin-10 and interleukin-4 upon antigenic stimulation, but they were heterogeneous with respect to T-cell receptor usage; two T-cell lines expressed V beta 2, one V beta 6.7 and one V beta 13. Antigen-specificity was confirmed using bovine S100 beta beta and alpha beta-isoforms, as well as a recombinant rat S100 beta preparation. The response to S100 beta was shown to be HLA (human leukocyte antigen)-DR-restricted for two of the S100 beta-specific T-cell lines. Human S100 beta-specific T-cell lines were cytotoxic, although to a lesser extent than MBP-specific T-cell lines derived from the same donors. The phenotypic and functional properties of human S100 beta-specific T-cell lines raise the possibility that these T cells are pathogenic, as they are in the rat. The low frequency and proliferative index of S100 beta-specific, as opposed to MBP-specific, T-cell responses suggest that the T-cell response to this widely expressed calcium-binding protein is under more efficient regulatory control. abstract_id: PUBMED:32036261 Neuromyelitis optica spectrum disorder coexisting with ankylosing spondylitis: A case report. Neuromyelitis optica spectrum disorder (NMOSD) is a relapsing, demyelinating, inflammatory disease associated with aquaporin-4 (AQP4) antibody in the central nervous system.
Ankylosing spondylitis (AS), which is closely associated with human leukocyte antigen B27 (HLA-B27), is a chronic inflammatory disease that primarily affects the axial skeleton. We describe the case of a 46-year-old Chinese female with a one-year history of optic neuritis who was admitted to our hospital complaining of numbness of the bilateral lower limbs and perineum for one month. AQP4-IgG antibodies were positive in both the serum and cerebrospinal fluid of the patient. Additionally, HLA-B27 was positive in the serum. Contrast-enhanced computed tomography scan revealed bilateral sacroiliitis. Magnetic resonance imaging of the spinal cord showed T2 hyperintense lesions in the conus medullaris and at the C8-T1 and T11 levels of the spinal cord. The patient's symptoms improved after two courses of methylprednisolone and mycophenolate mofetil treatment. To the best of our knowledge, this is the first reported case of concomitant AQP4-positive NMOSD and HLA-B27-positive AS. Systematic studies on the association of NMOSD and AS are needed to clarify whether this coincidence is merely a chance phenomenon or whether it points to a yet undiscovered link. abstract_id: PUBMED:30053751 Neuromyelitis optica spectrum disorder and multiple sclerosis in a Sardinian family. The coexistence of multiple sclerosis (MS) and neuromyelitis optica spectrum disorder (NMOSD) in the same family is a rare event. We report a familial case originating from Sardinia of two siblings: one with NMOSD and one with MS. Human leukocyte antigen (HLA) typing showed that the two affected siblings were HLA-identical, sharing risk-increasing alleles, while a younger unaffected sister was haploidentical to her siblings but also carried protective alleles. Our findings confirm the role of HLA in raising the risk of developing CNS inflammatory diseases and provide further knowledge on the relationship between NMOSD and MS. abstract_id: PUBMED:16955140 Spontaneous opticospinal encephalomyelitis in a double-transgenic mouse model of autoimmune T cell/B cell cooperation. We describe a double-transgenic mouse strain (opticospinal EAE [OSE] mouse) that spontaneously develops an EAE-like neurological syndrome closely resembling a human variant of multiple sclerosis, Devic disease (also called neuromyelitis optica). As in Devic disease, the inflammatory, demyelinating lesions were located in the optic nerve and spinal cord, sparing brain and cerebellum, and the murine lesions showed histological similarity with their human correlates. OSE mice have recombination-competent immune cells expressing a TCR-alphabeta specific for myelin oligodendrocyte glycoprotein (MOG) aa 35-55 peptide in the context of I-Ab along with an Ig J region replaced by the recombined heavy chain of a monoclonal antibody binding to a conformational epitope on MOG. OSE mouse B cells bound even high dilutions of recombinant MOG, but not MOG peptide, and processed and presented it to autologous T cells. In addition, in OSE mice, but not in single-transgenic parental mice, anti-MOG antibodies were switched from IgM to IgG1. abstract_id: PUBMED:23038741 Distinct genetic and infectious profiles in Japanese neuromyelitis optica patients according to anti-aquaporin 4 antibody status. Objective: To clarify whether genetic and common infectious backgrounds are distinct, according to anti-aquaporin 4 (AQP4) antibody status in Japanese patients with neuromyelitis optica (NMO).
Methods: We analysed human leucocyte antigen (HLA)-DRB1 and HLA-DPB1 alleles, and IgG antibodies against Helicobacter pylori, Chlamydia pneumoniae, varicella zoster virus and Epstein-Barr virus nuclear antigen (EBNA) in 116 patients with NMO, including 39 patients with neuromyelitis optica spectrum disorder (NMOSD), 145 multiple sclerosis (MS) patients and 367 unrelated healthy controls. 77 NMO/NMOSD patients were seropositive for AQP4 antibody while 39 were seronegative. Results: Compared with healthy controls, NMO/NMOSD patients showed a significantly lower frequency of DRB1*0901 and significantly higher frequencies of DRB1*1602 and DPB1*0501, which conferred susceptibility to anti-AQP4 antibody positive NMO/NMOSD, but not antibody negative NMO/NMOSD. DRB1*0901 was a common protective allele, irrespective of the presence or absence of anti-AQP4 antibody. Anti-H pylori and anti-C pneumoniae antibodies were more commonly detected in anti-AQP4 antibody positive NMO/NMOSD patients than healthy controls. Antibody negative NMO/NMOSD patients did not differ from healthy controls regarding the presence of these antibodies. The presence or absence of antibodies against varicella zoster virus and EBNA did not vary among the groups. The frequencies of antibodies against these four pathogens were not significantly different between MS patients and healthy controls. Conclusions: Our results suggest that HLA-DRB1*1602 and DPB1*0501 alleles and H pylori and Chlamydia pneumoniae infection are risk factors only for anti-AQP4 antibody positive NMO/NMOSD but not for anti-AQP4 antibody negative NMO/NMOSD. abstract_id: PUBMED:37706008 Phenotypic characteristics of myelin oligodendrocyte glycoprotein antibody-associated disease in children: a single-center, retrospective study. Objective: To analyze the clinical characteristics and follow-up data of children with different clinical phenotypes of myelin oligodendrocyte glycoprotein antibody-associated disease (MOGAD). Methods: The basic demographic and clinical features, laboratory and imaging examination results, and follow-up data of 74 Chinese children with different phenotypes of MOGAD were retrospectively reviewed and analyzed. Results: The male-to-female ratio in this cohort was 1:1.39. The clinical phenotypes of MOGAD included acute disseminated encephalomyelitis (ADEM; n = 37), encephalitis (n = 11), optic neuritis (ON, n = 9), neuromyelitis optica spectrum disorder (NMOSD; n = 9), transverse myelitis (TM; n = 6), leukodystrophy-like manifestations (n = 1), and meningitis (n = 1). The mean age of disease onset was 86 months. The number of leukocytes in the cerebrospinal fluid of patients with ADEM was significantly higher than that in patients with ON but lower than that in patients with TM (p < 0.05). The pathogen detection rate among all patients was 36.5%. Recurrence occurred in 17 patients (23%), with the highest recurrence rate in patients with NMOSD and TM. Patients with recurrence had a significantly higher median age than those without any recurrence (109.00 vs 82.44 months, p < 0.05). The male-to-female ratio in patients with recurrence was 1:4.67, which differed significantly from that at first onset (p < 0.05). Conclusion: The most common clinical phenotypes of MOGAD in this cohort were ADEM and encephalitis. Recurrence of MOGAD may be related to age and sex, with a higher recurrence rate observed in females. These findings provide a basis for further exploration of the characteristics of different MOGAD phenotypes.
abstract_id: PUBMED:19140826 Association of the HLA-DPB1*0501 allele with anti-aquaporin-4 antibody positivity in Japanese patients with idiopathic central nervous system demyelinating disorders. There are two subtypes of multiple sclerosis (MS) in Asians: the opticospinal (OSMS) form that shows a selective involvement of the optic nerve and the spinal cord and the conventional (CMS) form that has disseminated lesions in the central nervous system including the cerebrum, cerebellum and brainstem. Both show distinct human leukocyte antigen (HLA) class II associations. OSMS has similar features to the relapsing form of neuromyelitis optica (NMO) in Western populations. Recently, it was shown that antibodies to aquaporin-4 (AQP4) are specifically detected in NMO patients and in some Japanese patients with OSMS or recurrent optic neuritis or myelitis. To clarify the immunogenetic background of anti-AQP4 antibody production, we studied HLA-DRB1 and -DPB1 gene polymorphisms in anti-AQP4 antibody-positive and -negative patients with idiopathic demyelinating diseases, such as MS, recurrent optic neuritis and recurrent myelitis. The phenotypic frequency of the HLA-DPB1*0501 allele was significantly increased in anti-AQP4 antibody-positive patients (89.5%, odds ratio = 4.8; 95% confidence interval = 1.6-14.3, n = 38, P(corr) = 0.032) compared with controls (64.0%, n = 125) but not in either anti-AQP4 antibody-negative OSMS (75.0%, n = 32) or CMS (69.2%, n = 52) patients. There was no significant correlation between any HLA-DRB1 allele and the existence of anti-AQP4 antibody. These findings suggest that the emergence of anti-AQP4 antibody is reinforced by the presence of the HLA-DPB1*0501 allele in Japanese. Answer: Yes, neuromyelitis optica (NMO) is associated with human leukocyte antigen (HLA). Studies have found various HLA associations with NMO, although these associations can differ by ethnicity and the presence of NMO-IgG antibodies. In Caucasian patients, no association was found between the DPB1*0501 allele and NMO, but there was a suggestion that NMO-IgG-positive patients could represent a distinct NMO group in terms of their genetic susceptibility, with a different HLA II pattern compared to NMO-IgG-negative patients (PUBMED:19299434). In contrast, in Japanese patients, the HLA-DPB1*0501 allele was significantly increased in anti-AQP4 antibody-positive patients, suggesting that the emergence of anti-AQP4 antibody is reinforced by the presence of this allele (PUBMED:19140826). Additionally, the HLA-DPB1*0201 allele was found to be associated with susceptibility to atopic myelitis, a condition distinct from NMO and multiple sclerosis, in Japanese patients (PUBMED:22884298). A study in Serbian patients identified two HLA haplotypes, HLA-A*01, B*08, DRB1*03 and HLA-A*01, B*08, C*07, DRB1*03, DQB1*02, that were more prevalent in NMOSD patients than in unaffected individuals (PUBMED:36268024). Moreover, in Japanese patients, HLA-DRB1*1602 and DPB1*0501 alleles were risk factors for anti-AQP4 antibody-positive NMO/NMOSD, but not for anti-AQP4 antibody-negative NMO/NMOSD (PUBMED:23038741). In a case of NMOSD coexisting with ankylosing spondylitis, the patient was positive for both AQP4-IgG and HLA-B27, although it was suggested that systematic studies are needed to clarify whether this coincidence indicates a link between the two conditions (PUBMED:32036261). 
Additionally, a familial case report from Sardinia described two siblings with NMOSD and MS who were HLA-identical, sharing risk-increasing alleles, which confirms the role of HLA in raising the risk of developing CNS inflammatory diseases (PUBMED:30053751).
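To illustrate the kind of case-control allele comparison behind these conclusions, the sketch below re-derives the DPB1*0501 association reported in PUBMED:19140826 (carriage in 89.5% of 38 anti-AQP4 antibody-positive patients versus 64.0% of 125 controls, odds ratio 4.8). The carrier counts are reconstructed from those reported frequencies; the number of alleles used for the Bonferroni correction is an assumption made only for this example.

```python
from scipy.stats import fisher_exact

# Carrier / non-carrier counts reconstructed from the reported frequencies:
# 34/38 anti-AQP4-positive patients vs. 80/125 healthy controls.
table = [[34, 4], [80, 45]]

odds_ratio, p = fisher_exact(table, alternative="two-sided")
n_alleles_tested = 8                     # hypothetical multiple-testing burden
p_corr = min(1.0, p * n_alleles_tested)  # Bonferroni correction
print(f"OR = {odds_ratio:.1f}, corrected P = {p_corr:.4f}")
```

Which multiple-testing correction the original authors applied is not specified in the abstract, so the Bonferroni step here is only one plausible choice.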
Instruction: Do changes in sex steroid hormones precede or follow increases in body weight during the menopause transition? Abstracts: abstract_id: PUBMED:22723312 Do changes in sex steroid hormones precede or follow increases in body weight during the menopause transition? Results from the Study of Women's Health Across the Nation. Context: Whether menopause-related changes in sex steroids account for midlife weight gain in women or whether weight drives changes in sex steroids remains unanswered. Objective: The objective of the study was to characterize the potential reciprocal nature of the associations between sex hormones and their binding protein with waist circumference in midlife women. Design, Setting, And Participants: The study included 1528 women (mean age 46 yr) with 9 yr of follow-up across the menopause transition from the observational Study of Women's Health Across the Nation. Main Outcome Measures: Waist circumference, SHBG, testosterone, FSH, and estradiol were measured. Results: Current waist circumference predicted future SHBG, testosterone, and FSH but not vice versa. For each SD higher current waist circumference, at the subsequent visit SHBG was lower by 0.04-0.15 SD, testosterone was higher by 0.08-0.13 SD, and log(2) FSH was lower by 0.15-0.26 SD. Estradiol results were distinct from those above, changing direction across the menopause transition. Estradiol and waist circumference were negatively associated in early menopausal transition stages and positively associated in later transition stages (for each SD higher current waist circumference, future estradiol was lower by 0.15 SD in pre- and early perimenopause and higher by 0.38 SD in late peri- and postmenopause; P for interaction <0.001). In addition, they appeared to be reciprocal, with current waist circumference associated with future estradiol and current estradiol associated with future waist circumference. However, associations in the direction of current waist circumference predicting future estradiol levels were of considerably larger magnitude than the reverse. Conclusions: These Study of Women's Health Across the Nation data suggest that the predominant temporal sequence is that weight gain leads to changes in sex steroids rather than vice versa. abstract_id: PUBMED:9183303 Interrelationships between weight loss, body fat distribution and sex hormones in pre- and postmenopausal obese women. Objectives: Relationships between regional body fat distribution and sex hormones as well as changes in sex hormones after weight loss were evaluated. Setting: All subjects were hospitalized in the Institute of Internal Medicine of the University of Verona. Subjects: Twenty-six premenopausal (age 33.7 +/- 10.2 years) and 15 postmenopausal (age 57.9 +/- 5.9 years) obese women. Interventions: Body weight, body-mass index, waist and hip circumferences, visceral fat by computed tomography and sex hormones were evaluated before and after 4 weeks on a very low energy diet (VLED). Results: Body-mass index was higher in pre- than in postmenopausal women, although the difference was not significant. Total and free testosterone were significantly higher in the pre- than in the postmenopausal group (P < 0.001). Significant negative correlations were found between age and total testosterone (r = -0.65; P < 0.001), free testosterone (r = -0.54; P < 0.001), androstenedione (r = -0.46; P < 0.01) and urinary cortisol excretion (r = -0.50; P < 0.01).
A negative correlation was found between visceral fat and total testosterone (r = -0.41; P < 0.01). After adjusting for age, the negative correlation between total testosterone and visceral fat encountered both in the subject group as a whole and in premenopausal women was no longer significant, whilst a significant negative association between visceral fat and sex hormone binding globulin (SHBG) (r = -0.56; P < 0.001) was always found. When step-down regression analysis was used to evaluate the joint effect of age, menopausal status, and anthropometric and metabolic variables on sex hormones, age was the most powerful independent variable for predicting total testosterone, free testosterone and androstenedione levels, whilst menopausal status was the most powerful predictor of FSH and LH levels. Changes in hormones after VLED were analysed separately in pre- and postmenopausal women. None of the hormones changed significantly after VLED in the postmenopausal group, except for FSH values. LH, free testosterone and urinary cortisol excretion values decreased significantly after VLED in the premenopausal group. Conclusions: Our data show that age, to a greater extent than visceral fat, seems to be negatively associated with steroid sex hormones. Weight loss seems to be associated with changes in sex hormones only in premenopausal women. abstract_id: PUBMED:27013284 Body fatness and endogenous sex hormones in the menopausal transition. Background: Age at the final menstrual period is of clinical and public health interest because the age at which natural menopause occurs may be a marker of ageing and health, and in general the menopausal transition increases the risk of many diseases; for example, redistribution of adiposity during the menopausal transition may increase the risk of metabolic disease. The purpose of this research was to study the relationship between menopausal status and body fatness. Subjects And Methods: A random sample of 1932 Hungarian women was studied. Body composition was estimated by body impedance analysis. In a subsample, free estradiol and progesterone levels in saliva were quantified. Results: Body fat mass increased until the late 50s and then decreased through senescence. Premenopausal women who were much older than the median age at menopause had a higher amount of fat than their postmenopausal age-peers, while postmenopausal women, whose menopause occurred much earlier than the median age at menopause, had less fat than their premenopausal age-peers. The body fat mass in premenopausal women with low levels of sex hormones was always below the age-median value of the menopausal status subgroups, while the body fat mass of postmenopausal women with high sex hormone levels was above the age-median values. Conclusions: The analysis of body fatness in the menopausal transition revealed that (1) the rate of reproductive ageing and the body fat pattern were significantly related, and (2) body fat mass of women with unexpected levels of sex hormones was related more to their hormonal levels than to their menopausal status or their age. Thus future epidemiological screenings of women exposed to higher levels of menopause-related health risks should be expanded beyond the estimation of menopausal status based only on menstrual history to include sex hormone level assessment, as well as body composition analysis. abstract_id: PUBMED:19812718 The influence of sex steroid hormones on gingiva of women.
Steroid sex hormones have a significant effect on different organ systems. As far as gingiva are concerned, they can influence the cellular proliferation, differentiation and growth of keratinocytes and fibroblasts. Estrogen is mainly responsible for alterations in blood vessels and progesterone stimulates the production of inflammatory mediators. In addition, some micro-organisms found in the human mouth synthesize enzymes needed for steroid synthesis and catabolism. In women, during puberty, ovulation and pregnancy, there is an increase in the production of sex steroid hormones which results in increased gingival inflammation, characterized by gingival enlargement, increased gingival bleeding and crevicular fluid flow and microbial changes. abstract_id: PUBMED:36816502 Network segregation in aging females and evaluation of the impact of sex steroid hormones. Males and females show differential patterns in connectivity in resting-state networks (RSNs) during normal aging, from early adulthood to late middle age. Age-related differences in network integration (effectiveness of specialized communication at the global network level) and segregation (functional specialization at the local level of specific brain regions) may also differ by sex. These differences may be due at least in part to endogenous hormonal fluctuation, such as that which occurs in females during midlife with the transition to menopause when levels of estrogens and progesterone drop markedly. Only a limited number of studies have investigated sex differences in the action of steroid hormones in brain networks. Here we investigated how sex steroid hormones relate to age-network relationships in both males and females, with a focus on network segregation. Females displayed a significant quadratic relationship between age and network segregation for the cerebellar-basal ganglia and salience networks. In both cases, segregation was still increasing through adulthood, was highest in midlife, and showed a downturn thereafter. However, there were no significant relationships between sex steroid hormone levels and network segregation levels in females, and they did not exhibit significant associations between progesterone or 17β-estradiol and network segregation. Patterns of connectivity between the cerebellum and basal ganglia have been associated with cognitive performance and self-reported balance confidence in older adults. Together, these findings suggest that network segregation patterns with age in females vary by network, and that sex steroid hormones are not associated with this measure of connectivity in this cross-sectional analysis. Though this is a null effect, it remains critical for understanding the extent to which hormones relate to brain network architecture. abstract_id: PUBMED:22554955 Gender, sex steroid hormones, and Alzheimer's disease. Age-related loss of sex steroid hormones is an established risk factor for the development of Alzheimer's disease (AD) in women and men. While the relationships between the sex steroid hormones and AD are not fully understood, findings from both human and experimental paradigms indicate that depletion of estrogens in women and androgens in men increases vulnerability of the aging brain to AD pathogenesis. We review evidence of a wide range of beneficial neural actions of sex steroid hormones that may contribute to their hypothesized protective roles against AD.
Both estrogens and androgens exert general neuroprotective actions relevant to several neurodegenerative conditions, some in a sex-specific manner, including protection from neuron death and promotion of select aspects of neural plasticity. In addition, estrogens and androgens regulate key processes implicated in AD pathogenesis, in particular the accumulation of β-amyloid protein. We discuss evidence of hormone-specific mechanisms related to the regulation of the production and clearance of β-amyloid as critical protective pathways. Continued elucidation of these pathways promises to yield effective hormone-based strategies to delay development of AD. abstract_id: PUBMED:11246820 Sex steroid hormones, sex hormone-binding globulin, and obesity in men and women. Sex steroid hormones in both males and females have been closely related to the regulation of adiposity, either through direct or indirect physiological mechanisms. Evidence also suggests a direct relationship between sex hormones and risk factors for cardiovascular disease. In the present review article, we will discuss recent studies that have examined the complex interrelationships between sex hormones, SHBG, obesity and risk factors for cardiovascular disease. Male obesity and excess abdominal adipose tissue accumulation are associated with reductions in gonadal androgen and low adrenal C19 steroid concentrations. Reduced C19 steroids are also related to an altered metabolic risk factor profile including glucose intolerance and an atherogenic dyslipidemic state. However, the concomitant visceral obese state appears as a major correlate in these associations. In women, menopause-induced estrogen deficiency and increased androgenicity are associated with increased abdominal obesity and with the concomitant alterations in the metabolic risk profile. The accelerated accretion of adipose tissue in the intra-abdominal region coincident with the onset of menopause may explain part of the increased risk of cardiovascular disease in postmenopausal women. In both men and women, plasma levels of sex hormone-binding globulin are strong correlates of obesity and risk factors for cardiovascular disease, and more importantly, the relationships between low SHBG and altered plasma lipid levels appear to be independent of the concomitant increased levels of visceral adipose tissue. SHBG concentration may, therefore, represent the most important and reliable marker of the sex hormone profile in the examination of the complex interrelation of sex steroid hormones, obesity, and cardiovascular disease risk. abstract_id: PUBMED:33706960 Sex hormones and cognition in aging. Hormones of the hypothalamic-pituitary-gonadal axis that regulate reproductive function are also potent neurosteroids that have multiple effects on the development, maintenance and function of the brain. There is a growing body of evidence linking sex hormones to cognitive functioning across the lifespan. Both subjective and objective cognitive changes can occur with aging. For women, cognitive complaints are commonly associated with the menopause transition, a time of significant hormone flux. Sex differences in neurodegenerative conditions associated with cognitive dysfunction, such as Alzheimer's disease and Parkinson's disease, suggest a potential link between sex hormones and cognitive decline. Evidence for the effects of hormone therapy on cognition is growing, but remains inconclusive.
This chapter provides an overview of sex hormones and cognition in association with healthy aging, including a focus on the menopause transition, as well as a review of findings linking sex hormones to cognitive decline associated with Alzheimer's disease and Parkinson's disease. An overview of hormone therapy and cognition is also provided. abstract_id: PUBMED:36767197 Association between Body Mass Index and Diabetes Mellitus Are Mediated through Endogenous Serum Sex Hormones among Menopause Transition Women: A Longitudinal Cohort Study. Objective: To explore whether and to what extent endogenous sex hormones mediate the association between overweight and diabetes risk in menopausal transition women. Methods: Premenopausal women were from the Study of Women's Health Across the Nation, with measurements of serum sex hormones including sex hormone binding globulin (SHBG), testosterone (T), estradiol (E2), follicle-stimulating hormone (FSH), and dehydroepiandrosterone sulfate (DHAS) at the first postmenopausal follow-up. At the last postmenopausal follow-up, hyperglycemia status was confirmed. The partial least squares (PLS) regression method was used to extract hormonal signals associated with body mass index (BMI). Hyperglycemia was defined as prediabetes or diabetes; overweight was defined as BMI ≥ 25 kg/m2. Causal mediation analysis was used to examine the mediation effect on the association between perimenopause overweight and post-menopause hyperglycemia through the PLS score and individual sex hormones. Results: The longitudinal study included 1438 women with normal glucose, with a baseline mean age (SD) of 46.5 (2.6) years and a mean follow-up period of 9.9 years. During the follow-up period, 145 (10.1%) cases of hyperglycemia occurred. Compared with normal-weight participants, overweight women had a higher hyperglycemia risk during the transition period (OR = 4.06, 95% CI: 2.52 to 6.80). Overweight women had higher T and E2 and lower SHBG, FSH, and DHAS concentrations (β = 0.26, 0.38, -0.52, -0.52, and -0.13, p < 0.05 for all). After adjusting for overweight and covariates, lower SHBG and FSH levels were associated with higher hyperglycemia risk (OR = 0.70 and 0.69, all p < 0.05). As a linear combination of sex hormones, the PLS score was positively associated with T and E2, and negatively with SHBG, FSH, and DHAS. The PLS score explained 36.50% (p < 0.001) of the overweight-hyperglycemia association. Considering individual sex hormones, the mediation proportions of SHBG and FSH were 21.38% (p < 0.001) and 24.08% (p < 0.001). Conclusions: Sex hormones mediated the association of overweight and diabetes risk in menopause transition women. SHBG and FSH have the dominant mediation effect. abstract_id: PUBMED:31610152 Racial differences in body composition and cardiometabolic risk during the menopause transition: a prospective, observational cohort study. Background: Obesity disproportionately affects women compared with men. The loss of ovarian function during the menopause transition coincides with weight gain, increases in abdominal adiposity, and impaired metabolic health. Racial differences in obesity prevalence that result from the menopause transition are not well understood. Objective: The purpose of the study was to assess longitudinal changes in body composition and cardiometabolic risk among black and white women during the menopausal transition.
Study Design: In a secondary analysis of a prospective, observational cohort study (the Healthy Transitions study), 161 women ≥43 years old with a body mass index of 20-40 kg/m2 who had not yet transitioned through menopause were enrolled at Pennington Biomedical Research Center. Women were seen annually for body composition by dual-energy X-ray absorptiometry, for abdominal adipose tissue distribution by computed tomography, for sex steroid hormones, and for cardiometabolic risk factors including fasting glucose, insulin, and lipids. Surrogate measures of insulin sensitivity were also calculated. Results: Ninety-four women (25 black, 69 white) transitioned through menopause and were included in the analyses. At menopause onset, black women weighed more (77.8±3.0 vs 70.8±1.8 kg) and had a higher systolic (125±16 vs 118±14 mm Hg) and diastolic (80±8 vs 74±7 mm Hg) blood pressure compared with white women (all P≤.05). No other differences in body composition, sex steroid hormones, or cardiometabolic risk factors were observed at menopause onset. Before menopause, white women gained significant weight (3 kg), total body adiposity (6% in percent body fat, 9% in fat mass, 12% in trunk fat mass) and abdominal adipose tissue (19% in subcutaneous fat, 15% in visceral fat, 19% in total adipose tissue), which coincided with significant decreases in estradiol, sex hormone-binding globulin, and estrone sulfate and increases in follicle-stimulating hormone, total cholesterol, and low-density lipoprotein cholesterol. Conversely, black women had more abdominal adipose tissue before menopause, which was maintained across the menopause transition. Black women also had significant decreases in estrone sulfate and total testosterone and increases in follicle-stimulating hormone before menopause. In the postmenopausal years, abdominal subcutaneous adipose tissue, total adipose tissue, follicle-stimulating hormone, total cholesterol, and low-density and high-density lipoprotein cholesterol increased only in white women. Conclusion: White women gained more abdominal adiposity during the menopause transition compared with black women, which, in part, may be due to differences in the pattern of sex steroid hormone changes between women of different racial backgrounds. The gains in abdominal adiposity in white women were observed in tandem with increased cardiometabolic risk factors. Future studies should consider comprehensive lifestyle approaches to target these gains in abdominal adiposity (ie, nutrition and physical activity coaching), while taking into account the potential interactions of race, body adiposity, sex steroid hormones, and their influence on cardiometabolic risk. Answer: Changes in sex steroid hormones appear to follow increases in body weight during the menopause transition rather than precede them. According to the Study of Women's Health Across the Nation, the data suggest that the predominant temporal sequence is that weight gain leads to changes in sex steroids rather than the other way around (PUBMED:22723312). This study found that current waist circumference predicted future levels of sex hormone-binding globulin (SHBG), testosterone, and follicle-stimulating hormone (FSH), but not vice versa. Additionally, the study observed that for each standard deviation (SD) higher current waist circumference, future levels of SHBG were lower, testosterone was higher, and FSH was lower.
Estradiol showed a distinct pattern, with negative associations in early menopausal transition stages and positive associations in later stages, suggesting a reciprocal relationship with waist circumference. Even so, the influence of waist circumference on future estradiol levels was of considerably larger magnitude than the reverse. These findings are consistent with other research showing that age, rather than visceral fat, seems to be negatively associated with steroid sex hormones, and that weight loss is associated with changes in sex hormones only in premenopausal women (PUBMED:9183303). Furthermore, the association between body mass index (BMI) and diabetes mellitus among menopause transition women is mediated through endogenous serum sex hormones, with SHBG and FSH having the dominant mediation effect (PUBMED:36767197). In summary, the evidence suggests that increases in body weight during the menopause transition are likely to lead to changes in sex steroid hormones, rather than changes in these hormones being the cause of weight gain.
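The cross-lagged logic behind this answer (current waist circumference predicting future hormone levels, adjusted for current hormone levels) can be sketched in a few lines of Python. This is an illustration only: the file name and all column names below are hypothetical, and the SWAN analysis itself used more elaborate longitudinal models.

```python
# Minimal sketch of a cross-lagged regression: does current waist
# circumference predict *future* SHBG beyond current SHBG?
# "hormone_visits.csv" and every column name here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hormone_visits.csv")

# Standardize so the waist coefficient reads as "change in future SHBG
# per 1 SD higher current waist", matching the phrasing in the answer.
for col in ["waist_t0", "shbg_t0", "shbg_t1"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

model = smf.ols("shbg_t1 ~ waist_t0 + shbg_t0", data=df).fit()
print(model.params["waist_t0"])  # negative if higher waist predicts lower future SHBG
```

Running the mirror-image model (current SHBG predicting future waist) and finding a much weaker coefficient is what supports the "weight gain leads hormones" reading.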
Instruction: Do Obstetrical Providers Counsel Women About Postpartum Pelvic Floor Dysfunction? Abstracts: abstract_id: PUBMED:26126305 Do Obstetrical Providers Counsel Women About Postpartum Pelvic Floor Dysfunction? Objective: To assess prenatal counseling practices of obstetrics providers related to postpartum pelvic floor dysfunction at centers with integrated urogynecology services. Study Design: A cross-sectional survey was distributed to obstetrical providers through urogynecology colleagues. The survey included questions about level of training as well as counseling practices related to common postpartum pelvic floor symptoms. All statistical tests were two-sided, and p values < 0.05 were considered statistically significant. Results: A total of 192 surveys were received; 19 respondents did not perform their own prenatal counseling and were excluded. Among the remaining 173 respondents, 94 (56.3%) of those who answered the question reported never discussing postpartum urinary incontinence, and 73.7% reported never discussing postpartum fecal incontinence during prenatal counseling. Obstetrics and gynecology residents were significantly less likely than attending physicians to report discussing various pelvic floor dysfunction topics in prenatal counseling. Among those who reported not counseling women regarding pelvic floor dysfunction, the most common reason cited was lack of time (39.9%) followed by lack of sufficient information (30.1%). Conclusion: Prenatal counseling of pelvic floor dysfunction risk is lacking at all levels of obstetrical training. Limitations of time and information are the obstacles most often cited by providers. abstract_id: PUBMED:37288437 Pelvic floor dysfunction and electrophysiology in postpartum women at 6-8 weeks. Objective: To investigate the incidence of pelvic floor dysfunction (PFD) and electrophysiological indicators in postpartum women at 6-8 weeks and explore the influence of demographic characteristics and obstetric factors. Methods: A survey questionnaire collected information about the conditions of women during their pregnancy and puerperal period and their demographic characteristics; pelvic organ prolapse quantitation (POP-Q) and pelvic floor muscle electrophysiology (EP) examinations were conducted in postpartum women at 6-8 weeks. Results: Vaginal delivery was a risk factor for anterior pelvic organ prolapse (POP) (OR 7.850, 95% confidence interval (CI) 5.804-10.617), posterior POP (OR 5.990, 95% CI 3.953-9.077), anterior and posterior stage II POP (OR 6.636, 95% CI 3.662-15.919), and postpartum urinary incontinence (UI) (OR 6.046, 95% CI 3.894-9.387); parity was a risk factor for anterior POP (OR 1.397, 95% CI 0.889-2.198) and anterior and posterior stage II POP (OR 4.162, 95% CI 2.125-8.152); age was a risk factor for anterior POP (OR 1.056, 95% CI 1.007-1.108) and postpartum UI (OR 1.066, 95% CI 1.014-1.120); body mass index (BMI) was a risk factor for postpartum UI (OR 1.117, 95% CI 1.060-1.177); fetal birth weight was a risk factor for posterior POP (OR 1.465, 95% CI 1.041-2.062); and the frequency of pregnancy loss was a risk factor for apical POP (OR 1.853, 95% CI 1.060-3.237). Conclusion: Pelvic floor muscle EP is a sensitive index of early pelvic floor injury. Changes in muscle strength and fatigue degree coexist in different types of postpartum PFD, and each has its own characteristics.
Pelvic floor dysfunction is defined as abnormal function of the pelvic floor and includes conditions that can have significant adverse impacts on a woman's quality of life, including urinary incontinence (stress, urge, and mixed), fecal incontinence, pelvic organ prolapse, sexual dysfunction, diastasis recti abdominis, pelvic girdle pain, and chronic pain syndromes. Women's health care providers can screen for, identify, and treat pelvic floor dysfunction. This article examines the case of a woman with multiple pelvic-floor-related problems and presents the evidence for the use of pelvic floor physical therapy (PFPT) for pregnancy-related pelvic floor dysfunction. PFPT is an evidence-based, low-risk, and minimally invasive intervention, and women's health care providers can counsel women about the role that PFPT may play in the prevention, treatment, and/or management of pelvic floor dysfunction. abstract_id: PUBMED:38236284 Efficacy of Warm Acupuncture Therapy Combined with Kegel Exercise on Postpartum Pelvic Floor Dysfunction in Women. Introduction And Hypothesis: The objective was to observe the clinical efficacy of warm acupuncture combined with Kegel exercise in treating postpartum pelvic floor dysfunction in women. Methods: A total of 70 primiparous women with postpartum pelvic floor muscle (PFM) injury were randomly divided into a combination group (n = 35, receiving warm acupuncture at the Zhibian (BL54) acupoint and Kegel exercise) and a sham control group (n = 35, receiving sham warm acupuncture and Kegel exercise). Both groups were treated three times a week for 4 consecutive weeks. The recovery of PFM strength; changes in Urethral Rotation Angle (URA), Bladder Neck Descent (BND), and Retrovesical Angle (RVA) in pelvic floor ultrasound reports; the scores of pelvic floor dysfunction-related questionnaires; and the efficacy of urinary incontinence treatment in the two groups were compared before and after treatment. Results: After treatment, the recovery rates of type I and II PFM strength, pelvic floor ultrasound parameters, pelvic floor dysfunction-related scale scores, and urinary incontinence treatment efficacy in the combination group were significantly better than those in the sham control group (p < 0.05). Conclusion: Warm acupuncture combined with Kegel exercise can significantly improve PFM strength and promote the recovery of postpartum pelvic floor function in women. abstract_id: PUBMED:32019378 Device-assisted pelvic floor muscle postpartum exercise programme for the management of pelvic floor dysfunction after delivery. Background: Pelvic floor dysfunction (PFD) is a multifactorial condition that clinically manifests as pelvic prolapse, urinary and/or rectal incontinence, and sexual dysfunction. Aim: We aimed to evaluate the efficacy of two pelvic floor trainers for the prevention of PFD in women during the postpartum period. Materials And Methods: This was a prospective, randomized, open-label study in 70 women in the postpartum period. Participants were randomized to complete a daily, 20-min set of pelvic floor muscle exercises using the EmbaGYN (UK; Group 1, n = 40) or the Magic Kegel Master device (China; Group 2, n = 40) for 4 weeks. All participants anonymously completed the PFDI-20 questionnaire and FSFI form at baseline and the last visit. Pelvic floor muscle strength was measured using the XFT-0010 device.
Results: After the completion of the 4-week pelvic floor muscle exercise program, there was a significant decrease in the rates of all PFD symptoms, including pelvic organ prolapse and urinary and/or fecal incontinence, in both groups. The rates of sexual dysfunction after the exercise program decreased significantly only in Group 2 (69.4 versus 25.0%; p = .001). After the program, Group 1 showed a significant reduction in the number of women with symptoms of urgent urinary incontinence versus baseline (35.3 versus 8.8%, p = .009). Similarly, the rates of urine loss associated with coughing, sneezing or laughing in Group 1 decreased from 41.2% at baseline to 11.8% after the program (p = .006), and the rates of urine leakage independent of physical activity decreased from 23.5 to 5.9% (p = .040). Conclusion: The 4-week postpartum pelvic floor muscle exercise program utilizing the EmbaGYN or Magic Kegel Master device significantly increased pelvic floor muscle strength and decreased the symptoms of pelvic organ prolapse and urinary and fecal incontinence. The use of the Magic Kegel Master device significantly reduced the symptoms of sexual dysfunction. The use of the EmbaGYN device was effective in addressing the individual symptoms of urinary incontinence. abstract_id: PUBMED:33923810 Pelvic-Floor Dysfunction Prevention in Prepartum and Postpartum Periods. Every woman needs to know about the importance of pelvic-floor muscle function and pelvic organ prolapse prevention, especially pregnant women, because parity and labor are the factors with the greatest influence on developing pelvic organ prolapse in the future. In this article, we searched for methods of training and rehabilitation in the prepartum and postpartum periods and their effectiveness. The search for publications in English was conducted in two databases, the Cochrane Library and PubMed, from August 2020 to October 2020. A total of 77 articles remained after selection: 9 systematic reviews and 68 clinical trials. Existing full-text papers were reviewed after this selection. Unfinished randomized clinical trials, those designed as strategies for national health systems, and those not specific to pelvic-floor muscle training were excluded after this step. Most trials had a high to moderate overall risk of bias. Many of the reviews had low-quality evidence. Despite clinical heterogeneity among the clinical trials, pelvic-floor muscle training shows promising results. Most of the studies demonstrate a positive effect of pelvic-floor muscle training in the prepartum and postpartum periods on pelvic-floor dysfunction prevention, in particular on urinary incontinence symptoms. However, more high-quality, standardized studies with long follow-up periods are needed. abstract_id: PUBMED:34778951 Prevalence and severity of pelvic floor disorders in pregnant and postpartum women. Objective: This multicenter study aimed to evaluate risk factors, prevalence and severity of pelvic floor disorders (PFDs), as well as their consequences for women's emotional well-being, using a questionnaire validated specifically for pregnancy and postpartum. Methods: Prospective study conducted in eight teaching hospitals in Italy and Italian-speaking Switzerland. Pregnant and postpartum women completed the Italian Pelvic Floor Questionnaire for Pregnancy and Postpartum anonymously. Prevalence of, severity of, and risk factors for PFDs were evaluated for all four domains considered: bladder, bowel, prolapse, and sexual function.
Results: A total of 2007 women were included; 983 were bothered by at least one kind of PFD, with bladder, bowel, and sexual dysfunction reported most frequently. There were no significant differences in PFD prevalence between pregnancy and postpartum, except for bladder disorders, which were more prevalent in pregnancy. A family history of PFDs, inability to contract the pelvic floor, cigarette smoking, body mass index more than 25 (calculated as weight in kilograms divided by the square of height in meters), and age more than 35 years were confirmed risk factors for the development of PFDs during pregnancy and postpartum. Conclusion: Almost half of the women included in the study suffered from PFD-related symptoms, with important consequences for quality of life. Validated questionnaires are fundamental in early diagnosis and treatment of PFDs. abstract_id: PUBMED:36078788 Effect of Pelvic Floor Workout on Pelvic Floor Muscle Function Recovery of Postpartum Women: Protocol for a Randomized Controlled Trial. Background: There is a risk of pelvic floor dysfunction (PFD) following childbirth. Many clinical guidelines recommend pelvic floor muscle training (PFMT) as the conservative treatment for PFD because pelvic floor muscles (PFMs) play a crucial role in the development of PFD. However, there is disagreement about the method and intensity of PFM training and the relevant measurements. To pilot the study of PFM training, we designed a Pelvic Floor Workout (PEFLOW) for women to train their pelvic floor through whole-body exercises, and we planned a trial to evaluate its effectiveness by comparing outcomes between a group of postpartum women performing PEFLOW at home under professional online guidance and a control group. Methods/design: The randomized controlled trial was planned to run from November 2021 to March 2023. A total of 260 postpartum women would be recruited from the obstetrics departments of the study hospital; women would be eligible for participation and randomized into experimental or control groups (EG/CG) if their PFM strength, graded on the Modified Oxford Scale (MOS), was less than grade 3. Women in the EG would perform a 12-week PEFLOW online under the supervision and guidance of a physiotherapist, while women in the CG would have no intervention. Assessments would be conducted at enrollment, post-intervention (for the EG) or the 18th to 24th week postpartum (for the CG), and 1 year postpartum. Assessment would cover pelvic floor symptoms, including MOS, cough stress test, urinary leakage symptoms, pelvic organ prolapse quantitation (POP-Q), and vaginal relaxation; clinical examinations, including pelvic floor electrophysiological testing, pelvic floor ultrasound, and spine X-ray; overall body tests, including trunk endurance, handgrip, and body composition; and questionnaires, including the International Physical Activity Questionnaire-Short Form (IPAQ-SF), the Pelvic Floor Distress Inventory Questionnaire-20 (PFDI-20), the Pelvic Floor Impact Questionnaire-7 (PFIQ-7), the 6-item Female Sexual Function Index (FSFI-6), and the Pittsburgh Sleep Quality Index (PSQI). Primary analysis will be performed to test our main hypothesis that PEFLOW is effective in improving PFM strength. Discussion: This trial will demonstrate that pelvic floor care is accessible to most women, and clinical practice for PFD may change should this study find that the online PEFLOW approach is effective in improving PFM function.
Trial Registration: ClinicalTrials.gov, NCT05218239. abstract_id: PUBMED:35767025 Swedish validation of the Pelvic Floor Questionnaire for pregnant and postpartum women. Introduction And Hypothesis: The German "Pelvic Floor Questionnaire for pregnant and postpartum women" is a self-administered questionnaire customized for pregnancy and the postpartum period that assesses four domains of pelvic floor function regarding perceived symptoms, suffering, and impact on quality of life: bladder, bowel, prolapse, and sexual function. No similar questionnaire is available in Swedish, despite a high prevalence of pregnancy and postpartum pelvic floor dysfunction. Thus, we aimed to translate the validated German questionnaire into Swedish and test its validity and reliability in a Swedish population. Methods: Translation and cultural adaptation were performed according to guidelines. Of the 248 women who answered the Swedish questionnaire, 57 filled out the questionnaire twice to evaluate test-retest reliability. We also assessed internal consistency and discriminant validity. Results: The Swedish version of the questionnaire showed good face and content validity. Cronbach's alpha was in the acceptable to excellent range (bladder 0.82, bowel 0.78, prolapse 0.91, and sexual 0.83), showing adequate internal consistency. A comparison of means (≥ 1 point) showed that the questionnaire significantly (p < 0.05) distinguished between women who reported suffering and those who did not. Cohen's kappa for all individual items showed fair to almost perfect agreement (0.24-0.87) between test and retest scores. The intraclass correlation coefficients for domain scores (0.92-0.97) were all in an optimal range. Conclusions: The Swedish version of the questionnaire is a reliable and valid instrument for assessing pelvic floor disorders, symptom severity, and impact on quality of life during pregnancy and the postpartum period. abstract_id: PUBMED:31053946 Recovery from pelvic floor dysfunction symptoms in the postpartum is associated with the duration of the second stage of labor. Purpose: Pregnancy and labor are known risk factors for pelvic floor dysfunction (PFD). Yet not much is known regarding recovery from PFD. We hypothesized that recovery from PFD during the postpartum period would be associated with the duration of the second stage of labor (SSL). Methods: We conducted a case-control study of patients who gave birth at the Soroka University Medical Center, Beer-Sheva, Israel. Those who consented completed the Pelvic Floor Distress Inventory-20 (PFDI-20), a questionnaire developed to measure the extent of injury to the pelvic floor, after delivery and 3 months postpartum. The difference between the scores was calculated, representing recovery of symptoms. The duration of the SSL and clinical and obstetrical characteristics were retrieved from the patients' medical records. The Wilcoxon rank test was used to assess the significance of the recovery. The association between the degree of recovery and the duration of the SSL was tested using the Mann-Whitney test. Results: A total of 92 patients completed the PFDI-20 after delivery and 3 months postpartum. We found a significant difference between PFD symptoms during pregnancy and 3 months postpartum (P < 0.001). This difference remained consistent in all components of the PFDI-20. In addition, a more profound recovery of colorectal and anal dysfunction (CRAD) symptoms was associated with a shorter duration of the SSL (P = 0.03).
Conclusions: There is a statistically significant recovery of PFD symptoms in the postpartum period. Furthermore, greater recovery from CRAD symptoms is associated with a shorter duration of the SSL. Answer: The survey conducted among obstetrical providers revealed that prenatal counseling practices related to postpartum pelvic floor dysfunction are lacking. Specifically, 56.3% of respondents reported never discussing postpartum urinary incontinence, and 73.7% reported never discussing postpartum fecal incontinence during prenatal counseling. The most common reasons cited for not counseling women regarding pelvic floor dysfunction were lack of time (39.9%) and lack of sufficient information (30.1%) (PUBMED:26126305).
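The odds ratios with 95% confidence intervals quoted in the abstracts above (e.g., OR 7.850, 95% CI 5.804-10.617 for vaginal delivery and anterior POP) are conventionally derived from 2x2 tables via the standard error of the log odds ratio. A minimal sketch, with made-up cell counts since the abstracts do not report the raw tables:

```python
# Odds ratio and Wald 95% CI from a 2x2 table.
# Cell counts are hypothetical, chosen only to give an OR near 8.
import math

a, b = 240, 60   # exposed (e.g., vaginal delivery): outcome yes / no
c, d = 90, 180   # unexposed (e.g., cesarean): outcome yes / no

or_hat = (a * d) / (b * c)                 # 8.0 with these counts
se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se)
hi = math.exp(math.log(or_hat) + 1.96 * se)
print(f"OR = {or_hat:.3f}, 95% CI {lo:.3f}-{hi:.3f}")
```

A CI that includes 1.0 (as with parity and anterior POP, OR 1.397, 95% CI 0.889-2.198) means the association is not statistically significant at the 5% level.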
Instruction: The Victorian CPAP program: is there a need for additional education and support? Abstracts: abstract_id: PUBMED:12412935 The Victorian CPAP program: is there a need for additional education and support? Background: The Victorian Continuous Positive Airways Pressure (CPAP) Program provides CPAP services to financially disadvantaged individuals with moderate to severe sleep apnoea. Aims: To evaluate health outcomes in patients referred to the pilot program in order to: (i) assess the magnitude of health benefit from treatment in this highly selected population and (ii) identify patient characteristics or factors related to service provision that may influence outcome. Methods: We adopted a simple before-after research design. Patients who were referred to the program were recruited from five sleep centres. Questionnaires were administered at baseline and 1 and 3 months after commencing CPAP. Generic and disease-specific quality of life were assessed using the MOS 36-Item Short-form Health Survey and the Sleep Apnoea Quality-of-life Index, respectively. Subjective daytime sleepiness was measured using the Epworth Sleepiness Scale and the Sleep-Wake Activity Inventory. Results: Of the 68 subjects enrolled in the study, 59 were available for follow-up. There were significant improvements in daytime sleepiness (P < 0.0005). Treatment-related symptoms had a negative impact on overall disease-specific quality of life; however, there were significant improvements in all other domains of disease-specific quality of life (P < 0.0005). Improvements in generic quality of life were small but statistically significant (P < 0.05). Hospital, disease severity, baseline sleepiness, gender and CPAP-machine type were not predictors of outcome (P > 0.05). Conclusion: This review of the Victorian CPAP Program identified significant improvements in subjective daytime sleepiness and quality of life, despite the negative impact of treatment-related symptoms. Future research should explore whether services can be modified to help reduce the impact of treatment-related side-effects. abstract_id: PUBMED:35047948 The LeVe CPAP System for Oxygen-Efficient CPAP Respiratory Support: Development and Pilot Evaluation. Background: The COVID-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has placed a significant demand on healthcare providers (HCPs) to provide respiratory support for patients with moderate to severe symptoms. Continuous Positive Airway Pressure (CPAP) non-invasive ventilation can help patients with moderate symptoms to avoid the need for invasive ventilation in intensive care. However, existing CPAP systems can be complex (and thus expensive) or require high levels of oxygen, limiting their use in resource-stretched environments. Technical Development + Testing: The LeVe ("Light") CPAP system was developed using principles of frugal innovation to produce a solution of low complexity and high resource efficiency. The LeVe system exploits the air flow dynamics of electric fan blowers, which are inherently suited to delivery of positive pressure at appropriate flow rates for CPAP. Laboratory evaluation demonstrated that the performance of the LeVe system was equivalent to other commercially available systems used to deliver CPAP, achieving a 10 cm H2O target pressure within 2.4% RMS error and 50-70% FiO2 when supplied with 10 L/min oxygen from a commercial concentrator.
Pilot Evaluation: The LeVe CPAP system was tested to evaluate safety and acceptability in a group of ten healthy volunteers at Mengo Hospital in Kampala, Uganda. The study demonstrated that the system can be used safely without inducing hypoxia or hypercapnia and that its use was well-tolerated by users, with no adverse events reported. Conclusions: To provide respiratory support for the high patient numbers associated with the COVID-19 pandemic, healthcare providers require resource-efficient solutions. We have shown that this can be achieved through frugal engineering of a CPAP ventilation system, in a system which is safe for use and well-tolerated in healthy volunteers. This approach may also benefit other respiratory conditions which often go unaddressed in Low and Middle Income Countries (LMICs) for want of context-appropriate technology designed for the limited oxygen resources available. abstract_id: PUBMED:37491072 Noninvasive Ventilation or CPAP for Postextubation Support in Small Infants. Background: Infants with a high risk of extubation failure are often treated with noninvasive ventilation (NIV) or CPAP, but data on the role of these support modalities following extubation are sparse. This report describes our experience using NIV or CPAP to support infants following extubation in our pediatric ICUs (PICUs). Methods: We performed a retrospective study of children < 10 kg receiving postextubation NIV or CPAP in our PICUs. Data on demographics, medical history, type of support, vital signs, pulse oximetry, near-infrared spectroscopy (NIRS), gas exchange, support settings, and re-intubation were extracted from the electronic medical record. Support was classified as prophylactic if planned before extubation and rescue if initiated within 24 h of extubation. We compared successfully extubated and re-intubated subjects using the chi-square test for categorical variables and the Mann-Whitney test for continuous variables. Results: We studied 51 subjects, median age 44 (interquartile range 0.5-242) d and weight 3.7 (3-4.9) kg. There were no demographic differences between groups, except those re-intubated were more likely to have had cardiac surgery prior to admission (0% vs 14%, P = .040). NIV was used in 31 (61%) and CPAP in 20 (39%) subjects. Prophylactic support was initiated in 25 subjects (49%), whereas rescue support was needed in 26 subjects (51%). Twenty-two subjects (43%) required re-intubation. The re-intubation rate was higher for rescue support (58% vs 28%, P = .032). Subjects with a pH < 7.35 (4.3% vs 42.0%, P = .003) and lower somatic NIRS (39 [24-56] vs 62 [46-72], P = .02) were more likely to be re-intubated. The inspiratory positive airway pressure, expiratory positive airway pressure, and FIO2 were higher in subjects who required re-intubation. Conclusions: NIV or CPAP use was associated with a re-intubation rate of 43% in a heterogeneous sample of high-risk infants. Acidosis, cardiac surgery, higher FIO2, lower somatic NIRS, higher support settings, and application of rescue support were associated with the need for re-intubation.
Objective: This study evaluates three different educational approaches and their effect on therapy adherence. Method: This single-center, retrospective study compared three groups of 100 consecutive, CPAP-naive patients with moderate to severe OSA who were started on CPAP therapy. Groups 1 and 2 received the same individual structured education on two consecutive days, with an extra phone call 7 to 10 days after CPAP start in group 2. Group 3 received individual structured education on the first day and participated in a group education using a slide presentation open for discussion on the second day. Re-evaluation was performed after 24 weeks. Results: Baseline characteristics did not differ significantly between groups. During the 24-week follow-up there was a drop-out rate of 16% (group 1), 12% (group 2) and 5% (group 3). In the patients still on CPAP after 24 weeks, the mean nightly CPAP usage was, respectively, 4.7 ± 1.8, 5.2 ± 2.3 and 5.7 ± 2.1 h/night. In group 3, both the drop-out rate and mean CPAP usage were significantly different from group 1 (P < 0.05 and P < 0.01, respectively). Conclusion: Improving CPAP adherence is an ongoing challenge. This study shows that a multi-modality approach, combining individual education with group education based on a slide presentation open for discussion, resulted in improved therapy adherence. abstract_id: PUBMED:19540131 Assessing advanced life support (ALS) competence: Victorian practices. Advanced life support (ALS) assessments are performed to assess nurses' abilities to recognize cardiac arrest events and appropriately manage patients according to resuscitation guidelines. Although there is evidence for conducting assessments after initial ALS education, there is little evidence to guide educators about ongoing assessments in terms of methods, format and frequency. The aim of this study was to determine methods used by educators to assess ALS skills and knowledge for nurses in Victorian intensive care units. This descriptive study used telephone interviews to collect data. Data were analysed using content analysis. Twenty intensive care educators participated in this study. Thirteen educators (65%) were employed in public hospitals, and 7 educators (35%) worked in private hospitals, across 12 Level 3 (60%) and 8 Level 2 (40%) intensive care units. Results showed all educators used scenarios to assess ALS skills, with 12 educators (60%) including an additional theoretical test. There was variability in ALS assessment frequency, assessment timing in relation to initial/ongoing education, the person performing the assessment, and the assessor/participant ratio. Nineteen educators (95%) reported ALS skill competency assessments occurred annually; 1 educator (5%) reported assessments occurred every 2 years. Assessments were conducted during a designated month (n=10), numerous times throughout the year (n=8), or on nurses' employment anniversaries (n=2). All educators reported many nurses avoided undertaking assessments. Variability in ongoing ALS assessment methods was evident in Victorian intensive care units, with some units applying evidence-based practices. Consideration should be given to the purposes and methods of conducting annual ALS assessments to ensure resources and strategies are directed appropriately. To encourage nurses to retain ALS skills and knowledge, regular practices are recommended as an alternative to assessments. However, further research is required to support this notion.
abstract_id: PUBMED:9149582 Group education sessions and compliance with nasal CPAP therapy. Study Objectives: To determine an effective means of improving compliance with nasal continuous positive airway pressure (CPAP) for obstructive sleep apnea (OSA). Design: Retrospective chart review. Setting: An outpatient clinic at a Veterans Affairs Medical Center. Patients: Seventy-three patients with OSA. Interventions: Hour meters on CPAP machines provided documentation of nightly machine use. A 2-h group CPAP clinic, scheduled every 6 months, provided education, support, symptom treatment, and equipment monitoring for all CPAP patients. Results: Twenty-five patients had hour meter readings taken at their first CPAP clinic. In these patients, nightly CPAP use increased from 5.2 +/- 0.6 to 6.3 +/- 0.6 h per night after attendance at one CPAP clinic (p < 0.05). CPAP use increased from 5.2 +/- 0.5 before CPAP clinic to 6.3 +/- 0.6 h per night after attendance at all subsequent CPAP clinics for 34 patients (p < 0.05), an improvement that was sustained over 605 +/- 34 days. Twenty-nine percent of patients increased nightly CPAP use by at least 2 h, while only 6% decreased by ≥2 h (p < 0.025). Patients receiving supplemental oxygen had higher CPAP use prior to CPAP clinic compared to patients not receiving oxygen (p < 0.05). Conclusions: Attendance in a group clinic designed to encourage patient compliance with CPAP therapy provided a simple and effective means of improving treatment of OSA. abstract_id: PUBMED:36648976 The Effectiveness of NIV and CPAP Training on the Job in COVID-19 Acute Care Wards: A Nurses' Self-Assessment of Skills. Background: Noninvasive ventilation (NIV) in COVID-19 patients outside of intensive care unit (ICU) settings was a feasible support during the pandemic outbreak. The aim of this study was to assess the effectiveness of an "on the job" NIV training program provided to 66 nurses working in 3 COVID-19 wards in an Italian university hospital. Methods: A quasi-experimental longitudinal before-after study was designed. The NIV Team education program, provided by expert ICU nurses, included 3-h on-the-job training sessions during work shifts on the management of helmet continuous positive airway pressure (CPAP) Venturi systems and of NIV with oronasal and full-face masks. An eleven-item "brief skills self-report tool" was administered before and after the program to explore the perception of NIV education program attendees about their level of skills. Results: In total, 59 nurses responded to the questionnaire. There was an improvement in skill levels in the management of helmet CPAP (median before training 2, inter-quartile range (IQR) 0-6; median after training 8, IQR 3-9; p < 0.0001) and mask NIV (median before training 2, IQR 0-6; median after training 8, IQR 3-9; p < 0.0001). Conclusions: Training on the job performed by expert ICU nurses can be a valuable and fast means to implement new helmet CPAP and mask NIV skills outside of ICUs. abstract_id: PUBMED:34926350 Feasibility and Effect of Physiological-Based CPAP in Preterm Infants at Birth. Background: Preterm infants are commonly supported with 5-8 cmH2O CPAP. However, animal studies demonstrate that high initial CPAP levels (12-15 cmH2O) that are then reduced (termed physiological-based CPAP, PB-CPAP) improve lung aeration without adversely affecting cardiovascular function. We investigated the feasibility of PB-CPAP and its effect in preterm infants at birth.
Methods: Preterm infants (24-30 weeks gestation) were randomized to PB-CPAP or 5-8 cmH2O CPAP for the first 10 min after birth. PB-CPAP consisted of 15 cmH2O CPAP that was decreased, once infants were stabilized (heart rate ≥100 bpm, SpO2 ≥85%, FiO2 ≤ 0.4, spontaneous breathing), to 8 cmH2O in steps of ~2/3 cmH2O/min. Primary outcomes were feasibility and SpO2 in the first 5 min after birth. Secondary outcomes included physiological and breathing parameters and short-term neonatal outcomes. Planned enrollment was 42 infants. Results: The trial was stopped after enrolling 31 infants due to a low inclusion rate and recent changes in the local resuscitation guideline that conflicted with the study protocol. Measurements were available for analysis in 28 infants (PB-CPAP n = 8, 5-8 cmH2O n = 20). Protocol deviations in the PB-CPAP group included one infant receiving 3 inflations with 15 cmH2O PEEP and two infants in which CPAP levels were decreased faster than described in the study protocol. In the 5-8 cmH2O CPAP group, three infants received 4, 10, and 12 cmH2O CPAP. During evaluations, caregivers indicated that the current PB-CPAP protocol was difficult to execute. The SpO2 in the first 5 min after birth was not different [61 (49-70) vs. 64 (47-74), p = 0.973]. However, infants receiving PB-CPAP achieved higher heart rates [121 (111-130) vs. 97 (82-119) bpm, p = 0.016], and the duration of mask ventilation was shorter [0:42 (0:34-2:22) vs. 2:58 (1:36-6:03) min, p = 0.020]. Infants in the PB-CPAP group required 6:36 (5:49-11:03) min to stabilize, compared to 9:57 (6:58-15:06) min in the 5-8 cmH2O CPAP group (p = 0.256). There were no differences in short-term outcomes. Conclusion: Stabilization of preterm infants with PB-CPAP is feasible, but tailoring CPAP appeared challenging. PB-CPAP did not lead to higher SpO2 but increased heart rate and shortened the duration of mask ventilation, which may reflect faster lung aeration. abstract_id: PUBMED:32765155 CPAP Therapeutic Options for Obstructive Sleep Apnea. Introduction: There are many options available to patients who are placed on continuous positive airway pressure (CPAP) for obstructive sleep apnea. Despite the success of CPAP in correcting apnea, a significant number of patients have difficulty with the therapy. A large number of those patients who have difficulty stop therapy and are often labeled as "CPAP Failure". Non-sleep specialists may view CPAP therapy as a singular course of treatment, but there are many ways CPAP may be ordered for a patient. Each patient experiences a unique set of options that constitute a unique order set. Methods: In order to demonstrate the magnitude of the possible options, estimates of the number of unique order sets were calculated. The author chose individual order options and the number of selections possible within each option. The calculated sets included a "Generous, Limited and Minimal" number of selections for each option. Calculations were done separately for standard CPAP and for auto-adjusting CPAP. Additional calculations were performed using the number of commercially available masks in the United States. Results: The maximum number of unique order sets was seen using standard CPAP combined with commercially available masks: 49,152 unique order sets. The smallest number of unique order sets was seen with auto-adjusting CPAP and the "Minimal" selections: 288 unique order sets. Discussion: There are a large number of unique CPAP orders that a patient may experience.
CPAP treatment is not a singular or simple therapy. When evaluating obstructive sleep apnea patients with histories of CPAP failure or prior difficulty with CPAP, paying close attention to the patient's treatment experiences may help explain a significant number of those patients' CPAP therapy problems. abstract_id: PUBMED:31888148 Novel Aspects of CPAP Treatment and Interventions to Improve CPAP Adherence. Continuous positive airway pressure (CPAP) is an effective treatment for obstructive sleep apnea. However, the success of this treatment is hampered by nonadherence in half of the treated patients. Moreover, in clinical trials, poor adherence reduces the exposure required to determine its true effect. There is growing evidence that behavioral interventions, in addition to education, are a promising approach to improving adherence. Behavioral interventions include the use of cognitive behavioral therapy and motivational enhancement therapy designed to elevate a patient's self-efficacy. The abundance of data obtained by CPAP tracking systems enables daily surveillance of use, and this telemonitoring, along with telehealth, allows the provider to quickly intervene when nightly CPAP use falls below thresholds or mask leaks are present. Telehealth reaches a large number of patients who may not be able to regularly attend a clinic, providing support and reinforcement. Peer support may also be useful in improving adherence. Not all obstructive sleep apnea patients present with the same phenotype; patients can therefore be clustered into several groupings. Which intervention is most successful with a given phenotype or cluster remains unexplored. Comprehensive adherence management requires a team approach with the unique contribution of different professionals. Answer: The Victorian CPAP program, which provides CPAP services to financially disadvantaged individuals with moderate to severe sleep apnoea, has shown significant improvements in subjective daytime sleepiness and quality of life for patients. However, the program also identified that treatment-related symptoms had a negative impact on overall disease-specific quality of life, despite the improvements in other domains (PUBMED:12412935). This suggests that while the program is beneficial, there may be a need for additional education and support to help reduce the impact of treatment-related side effects. Supporting this notion, studies have shown that different educational approaches can have a significant effect on CPAP therapy adherence. For instance, a study comparing three educational approaches found that a multi-modality approach, combining individual and group education, resulted in improved therapy adherence (PUBMED:27148863). Another study demonstrated that attendance in a group clinic designed to encourage patient compliance with CPAP therapy provided a simple and effective means of improving treatment of OSA (PUBMED:9149582). Furthermore, the effectiveness of "on the job" training for nurses in managing noninvasive ventilation (NIV) and CPAP in COVID-19 wards showed that training by expert ICU nurses can quickly implement new skills outside of ICUs (PUBMED:36648976). This indicates that targeted education and training can enhance the skills and confidence of healthcare providers, which could translate into better support for patients using CPAP.
In summary, while the Victorian CPAP program has been successful in improving health outcomes for patients, the evidence suggests that there is a need for additional education and support to address treatment-related symptoms and to enhance therapy adherence. This could involve a combination of individual and group education sessions, as well as targeted training for healthcare providers involved in the management of CPAP therapy.
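The 49,152 and 288 figures in the order-set abstract above follow from the multiplication principle: the number of unique order sets is the product of the number of selections available for each order option. A sketch under invented assumptions; the paper reports only the totals, not the per-option counts:

```python
# Counting unique CPAP order sets by multiplying per-option choices.
# The option names and counts below are hypothetical.
import math

options = {
    "mask_model": 48,     # e.g., commercially available masks
    "humidifier": 2,      # on / off
    "heated_tubing": 2,
    "ramp": 2,
    "pressure_relief": 2,
}
print(math.prod(options.values()))  # 48 * 2**4 = 768 unique order sets
```

Adding more options multiplies the total, which is how seemingly simple CPAP prescriptions reach tens of thousands of distinct configurations.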
Instruction: Does preeclampsia involve the pancreas? Abstracts: abstract_id: PUBMED:24939629 Does preeclampsia involve the pancreas? A report of original research. Objective: To analyze the effect, if any, of pregnancy-related hypertension on the pancreas. Methods: A database of pregnant women with hyperbilirubinemia was reviewed for cases with serum amylase values. These cases were linked to a computerized obstetric database, and women were analyzed according to the diagnosis of preeclampsia. Results: A total of 292 consecutive women with hyperbilirubinemia during pregnancy delivered at our institution between 2005 and 2011. Of these, 52 (18%) were diagnosed with preeclampsia, and in only nine was serum amylase abnormally elevated. Conclusion: Using serum analytes as surrogates for organ injury, it appears preeclampsia infrequently affects the pancreas. abstract_id: PUBMED:32714442 Two pregnancies after simultaneous pancreas-kidney transplantation: A case report. Simultaneous kidney-pancreas transplantation can successfully restore fertility in women with type 1 diabetes mellitus and kidney failure. There have been few cases of pregnancy after simultaneous kidney-pancreas transplantation, and a second pregnancy is rare. We report a case of two live births in rapid succession in a simultaneous kidney-pancreas transplantation recipient, complicated by pre-eclampsia but with excellent fetal and graft outcomes. The first pregnancy was achieved with IVF, while the second was achieved naturally. Simultaneous kidney-pancreas transplantation with stable pre-pregnancy graft function enabled this patient to achieve successful pregnancy. However, both pregnancies were complicated by transient renal graft dysfunction and pre-eclampsia precipitating pre-term birth. abstract_id: PUBMED:34292634 Outcomes of pregnancy in simultaneous pancreas and kidney transplant recipients: A single-center retrospective study. Simultaneous pancreas and kidney (SPK) transplantation, in uremic women with insulin-dependent diabetes, increases the chance of a successful pregnancy and minimizes the risk to infants. The aim of this study was to document pregnancy and explore the challenges in this cohort of women. A retrospective analysis of women who underwent pancreas transplantation between January 1, 1998, and January 8, 2019, was conducted. Seventeen pregnancies were identified in 13 women. Mean transplant-to-pregnancy interval was 4.6 years (range, 1.1-10.2 years). Eleven pregnancies resulted in live birth (65%), and six (35%) ended in miscarriage/fetal loss at a median gestational age of 8.5 weeks. Mean gestational age at delivery was 34.9 weeks (SD ±3 weeks). Preeclampsia and C-section rates were 77% and 67%, respectively. Adverse fetal and graft outcomes were observed in 100% of unplanned pregnancies, compared to 10% of planned pregnancies (P < .001). One kidney allograft was lost during pregnancy; one pancreas and two kidney allografts were lost within 3 years of pregnancy. This is a high-risk group for grafts and offspring. Pre-pregnancy planning is vital. A multidisciplinary approach by obstetric and transplant teams is important pre-pregnancy, antenatally, and peripartum. This is the largest published series of pregnancies in SPK recipients from a single center. abstract_id: PUBMED:29037563 Management of pregnancy in pancreas alone transplant recipient complicated with stage-4 chronic renal insufficiency and superimposed pre-eclampsia: Case report and literature review.
Objective: With the prolonged life expectancy of solid organ transplant recipients, their quality of life and desire for fertility become of particular concern. Pregnancy after pancreas-alone transplantation, although rare and complicated to manage, is no longer impossible. We report such a case here, with a literature review, to address this issue. Case Report: A 29-year-old primigravida patient with underlying stage 4 chronic renal insufficiency and type 1 diabetes mellitus, who had undergone pancreas-alone transplantation 5 years prior to her initial visit, consulted our service. Antepartum care with intensive monitoring of blood pressure, renal function, and tacrolimus serum concentration was given. Successful maternal and fetal outcomes are presented here. Conclusion: Child-bearing in solid organ transplantation recipients has become more promising, even for a difficult case such as a pancreas-alone transplant recipient complicated with chronic renal insufficiency and superimposed pre-eclampsia. Thorough antepartum counseling and cautious monitoring of maternal, fetal and graft conditions by multidisciplinary specialties are key to favorable pregnancy outcomes. abstract_id: PUBMED:2444246 The effect of severe pre-eclampsia on the pancreas: changes in the serum cationic trypsinogen and pancreatic amylase. In 13 patients with severe pre-eclampsia, mean serum concentrations of cationic trypsinogen and amylase (64 ng/ml and 1.6 μmol/l, respectively) were statistically significantly higher than the corresponding mean serum concentrations in 30 normal pregnancies (22 ng/ml and 1.1 μmol/l). These rises exceeded the increases expected from the slight reduction of renal function in the pre-eclamptic patients and therefore indicate a concomitant injury of the pancreas. abstract_id: PUBMED:36582264 Vitreous hemorrhage in a pregnant woman with a history of simultaneous pancreas and kidney transplantation: A case report. Pregnancy after simultaneous pancreas and kidney transplantation (SPKT) carries a high risk of maternal and fetal complications. We report the case of a 39-year-old woman with three consecutive pregnancies with favorable outcomes after SPKT. Within the first year after SPKT, the patient had a spontaneous pregnancy. At 32 weeks of gestation, she underwent an emergency cesarean section (CS) due to severe preeclampsia and HELLP syndrome. The infant was of average birth weight and was transferred to the neonatal intensive care unit for further management. A second unplanned pregnancy occurred almost nine months after the first. The antenatal assessments for fetal growth, blood glucose, and blood pressure were normal throughout follow-up. Early in her pregnancy, the patient developed an uneventful retinopathy of the left eye. At 37 weeks of gestation, she underwent an elective CS due to a short inter-pregnancy interval and delivered a healthy baby with an average birth weight. At the age of 39 years, the patient had a third unplanned pregnancy. She was diagnosed with seronegative antiphospholipid syndrome. She suffered from bilateral vitreous hemorrhage and was managed successfully with a minimally invasive laser treatment combined with an intravitreal injection of anti-vascular endothelial growth factor during her third trimester. At 35 weeks of gestation, the patient presented with labor pain, underwent an emergency CS, and delivered a healthy baby with an average birth weight. Pregnancy after SPKT requires a multidisciplinary approach with a careful workup.
abstract_id: PUBMED:15077034 Pregnancy after kidney and kidney-pancreas transplantation under tacrolimus: a single center's experience. Background: Chronic renal failure leads to amenorrhea, and successful pregnancy is rare. The aim of the present report is to examine the outcome of pregnancies under tacrolimus after kidney transplantation (KTx) and simultaneous kidney-pancreas transplantation (SPKTx). Method: All pregnancies under tacrolimus after KTx or SPKTx from 1993 to April 2002 were retrospectively examined. Renal function and the mother's survival were followed until December 2002. Results: Thirteen mothers after KTx delivered 19 babies, and 2 mothers after SPKTx delivered 3 babies. All mothers survived the pregnancy and retained allograft function. One mother had a stillborn baby from an unrecognized amniotic fluid leak and a small ischemic placenta. The mean gestational period was 34.4 +/- 5.1 weeks. Mean birth weight was 2373 +/- 1001 g. Birth-weight percentile for gestational period was 40 +/- 28. None of the mothers experienced rejection during the pregnancy. Three pregnancies in mothers with KTx were complicated by toxemia of pregnancy, and one mother with SPKTx developed pre-eclampsia during both pregnancies. Five mothers (6 deliveries, 27.3%) required caesarean section. During the follow-up period, one mother died from a cerebrovascular accident. Another five mothers returned to dialysis 55.6 +/- 32.4 months after the last delivery and 99.4 +/- 28.5 months after the last KTx. Both SPKTx mothers have maintained normal renal and pancreatic allograft function 42 and 62 months postdelivery. Conclusion: All mothers survived the pregnancy. One baby was stillborn. Forty-one percent of babies were either preterm or premature, and 27% of babies were delivered by caesarean section. Toxemia of pregnancy or pre-eclampsia was observed in 23% of pregnancies post-KTx and post-SPKTx. None of the mothers experienced rejection during their pregnancy. abstract_id: PUBMED:24536631 The pregnancy toxemias; the role of the trophoblast and the pancreas. N/A abstract_id: PUBMED:14203422 THE PANCREAS AND PREGNANCY. II N/A abstract_id: PUBMED:2806849 Pregnancies in females following kidney and pancreas transplantation. Uraemic patients are in general infertile. Ovarian function is, however, restored after successful renal transplantation, thus making conception possible. We followed up 14 patients after renal transplantation involving 16 pregnancies. Two patients became pregnant twice, one with twins and the other following renal and pancreatic transplantation, the first such case recorded in the world. Caesarean section was performed in all patients, due to increasing serum creatinine levels, to avoid pre-eclampsia, or because of premature rupture of membranes. Both mother and child progressed without complications in all cases, although these pregnancies are associated with high risk for both. Therefore, close co-operation between the mother, the nephrologist, the transplantation centre, the gynaecologist and the paediatrician is a prerequisite for a favourable course. Answer: The relationship between preeclampsia and pancreatic involvement is not straightforward. One study analyzing the effect of pregnancy-related hypertension on the pancreas found that preeclampsia infrequently affects the pancreas, as indicated by the low incidence of abnormally elevated serum amylase among women with preeclampsia and hyperbilirubinemia (PUBMED:24939629).
However, another study reported statistically significantly higher mean serum concentrations of cationic trypsinogen and amylase in patients with severe preeclampsia compared to normal pregnancies, suggesting a concomitant injury of the pancreas in severe preeclampsia (PUBMED:2444246). While these findings suggest some degree of pancreatic involvement in cases of severe preeclampsia, the overall evidence is not conclusive. It is important to note that preeclampsia is a complex condition that can have systemic effects, and while there may be some impact on the pancreas, it does not appear to be a common or primary concern in preeclampsia cases based on the available data. Additionally, reports of pregnancies in women with pancreas transplants, such as simultaneous pancreas-kidney transplantation, indicate that these pregnancies can be complicated by preeclampsia, but the focus is often on the management of the pregnancy and the health of the mother and fetus, rather than on the pancreas itself (PUBMED:32714442, PUBMED:34292634, PUBMED:29037563, PUBMED:36582264, PUBMED:15077034, PUBMED:2806849). In summary, while there is some evidence of pancreatic involvement in severe cases of preeclampsia, it does not appear to be a frequent or primary concern in the majority of preeclampsia cases. Further research would be needed to fully understand the relationship between preeclampsia and pancreatic function.
Instruction: Placental leptin: an important new growth factor in intrauterine and neonatal development? Abstracts: abstract_id: PUBMED:9200375 Placental leptin: an important new growth factor in intrauterine and neonatal development? Background: Leptin, the protein product of the ob gene, is produced by the adipocyte and seems to function as a link between adiposity, satiety, and activity. Leptin has also been found to be necessary for pubertal development, conception, and pregnancy in mice, and is increased in prepubertal children, independent of adiposity, suggesting a role in childhood growth and development. This study investigated 100 mother/newborn pairs to determine the role of leptin in neonatal development. Placental tissue was assayed for leptin mRNA to evaluate it as a source of leptin production in utero. Methods: One hundred mother/newborn pairs were enrolled in this study. Radioimmunoassay was performed for leptin on maternal venous and newborn cord blood. Leptin concentrations were measured in 43 children in Tanner stages 1 and 2 as a control group. Placental tissue was obtained from five mothers and assayed for leptin mRNA by reverse transcription/polymerase chain reaction (RT/PCR). Human placental cell lines JAR and JEG-3 were also assayed for leptin mRNA expression. Results: Leptin was present in all newborns studied at a mean concentration of 8.8 ng/mL (+/-9.6 standard deviations). Leptin concentrations in cord blood correlated with newborn weight (r = .51), body mass index (BMI) (r = .48), and arm fat (r = .42). There was no correlation between leptin and insulin. When statistically covarying for adiposity for newborns and Tanner stages 1 and 2 children, newborns had greater concentrations of leptin (mean, 10.57 ng/mL) than children (mean, 3.04 ng/mL). Leptin was present in all mothers at a mean value of 28.8 ng/mL (+/-22.2 standard deviations). Leptin concentration correlated with prepregnancy BMI (r = .56), BMI at time of delivery (r = .74), and arm fat (r = .73). Maternal leptin correlated with serum insulin (r = .49). There was no correlation between maternal and newborn leptin concentrations. Thirteen percent of newborns had higher leptin concentrations than their mothers. Placental tissue from five separate placentas expressed leptin mRNA at levels comparable to or greater than adipose tissue. Two human trophoblastic placental cell lines, JAR and JEG-3, also expressed leptin mRNA. Conclusions: The correlation between leptin and adiposity found in children and adults was also found in newborns. Serum leptin concentrations in newborns were increased more than three-fold compared with children in Tanner stages 1 and 2 when controlling for adiposity, suggesting that leptin concentrations in the newborn are not explained by adiposity alone. Maternal leptin concentrations correlated with measures of adiposity at delivery but did not correlate with newborn adiposity or leptin. Leptin mRNA was expressed both in placental tissue and in two human placental cell lines. These data suggest that leptin has a role in intrauterine and neonatal development and that the placenta provides a source of leptin for the growing fetus. abstract_id: PUBMED:28651476 Molecular mechanisms of intrauterine growth restriction. Intrauterine growth restriction (IUGR) is a pregnancy-specific disease characterized by a fetal growth rate below the normal growth potential for a given gestational age. It is currently a leading cause of fetal and neonatal morbidity and mortality.
In the last decade, experimental studies from several laboratories have provided compelling evidence for understanding the molecular basis of IUGR. Atypical expression of enzymes governed by TGFβ causes placental apoptosis, and altered expression of TGFβ due to hyperalimentation causes impairment of lung function. Crosstalk of cAMP with protein kinases plays a prominent role in the regulation of cortisol levels. Increasing levels of NOD1 proteins lead to the development of IUGR by increasing the levels of inflammatory mediators. An increase in leptin synthesis in placental trophoblast cells is associated with IUGR. In this review, we emphasize the regulatory mechanisms of IUGR and its associated diseases. Understanding them may help improve in-utero fetal growth and provide better therapeutic interventions for the prevention and treatment of IUGR. abstract_id: PUBMED:15369650 Markers for presymptomatic prediction of preeclampsia and intrauterine growth restriction. Preeclampsia and intrauterine growth restriction are both characterized by placental malfunction. The pathological processes of abnormal trophoblast invasion, partial absence of maternal spiral artery modification, increased apoptosis of trophoblast cells, and placental ischemia are all associated with the release of specific molecules. These proteins, as well as cell-free fetal DNA and RNA, might be detected in the maternal peripheral circulation, quantified, and used for early identification and prediction of preeclampsia and intrauterine growth restriction, prior to the appearance of the clinical symptoms. As preeclampsia and intrauterine growth restriction are associated with increased maternal, perinatal, and neonatal morbidity and mortality, early identification of these pregnancy-associated complications will permit the design of appropriate preventive measures. In this review, a variety of factors reported to be useful as potential markers for early detection of pregnancies at increased risk will be discussed. Molecules associated with the establishment of the placenta and essential in fetal-maternal interactions, like interleukin 2-receptor, insulinlike growth factor-1, and insulinlike growth factor binding protein-1, placenta growth factor, hepatocyte growth factor, inhibin A, activin A, and human chorionic gonadotrophin, seem to be the most likely candidates for presymptomatic markers for preeclampsia and/or intrauterine growth restriction. Detection and discrimination of these molecules through a strategy based on placental RNA in maternal plasma has become a realistic option. abstract_id: PUBMED:11780443 Placental leptin correlates with intrauterine fetal growth and development. Objective: To study the role of placental leptin in intrauterine cord leptin production and its relationship with neonatal anthropometry. Methods: Forty women and their babies (40) were enrolled in this study. Placental tissues were assayed for leptin mRNA by reverse transcription/polymerase chain reaction (RT/PCR), and assayed for the obese gene protein leptin by Western-blot and immunohistochemistry. Blood was taken from the umbilical cord of the babies at delivery. Serum leptin was measured by radio-immunoassay. Neonatal anthropometric measurements were recorded within 48 hours after delivery. Linear regression analysis was used to explore the relationship between placental leptin, cord leptin and neonatal anthropometric measures.
Results: The obese gene was expressed in placental tissue at levels comparable to or greater than those in adipose tissue. The placentas of the small for gestational age (SGA) neonates expressed leptin mRNA and protein at significantly lower levels than those of the appropriate for gestational age (AGA) neonates (P = 0.0034 and 0.0076), while the placentas of the large for gestational age (LGA) neonates expressed leptin mRNA and protein at significantly higher levels than those of the AGA neonates (P = 0.043 and 0.021). Linear regression analysis showed placental ob gene transcription and leptin translation correlated significantly with cord leptin (r = 0.39 and 0.43), and neonatal Ponderal Index (r = 0.66 and 0.69). Conclusion: The placenta provides a source of leptin for the growing fetus, and this placental leptin might be a growth factor in intrauterine fetal development. abstract_id: PUBMED:32010348 DESACYLATED GHRELIN AND LEPTIN IN THE CORD BLOOD OF SMALL-FOR-GESTATIONAL-AGE NEWBORNS WITH INTRAUTERINE GROWTH RESTRICTION. Context: Ghrelin, in both its acylated and desacylated forms, and leptin can modulate fetal energy balance and development. Objective: The aim of our study is to assess desacylated ghrelin (DAG) and leptin values and their influence on intrauterine and postnatal growth in infants with intrauterine growth restriction. Design, Subjects And Methods: We performed a prospective study on 39 infants recruited over five months, 20 appropriate-for-gestational-age (AGA) infants and 19 small-for-gestational-age (SGA) infants, in which we measured DAG and leptin in the umbilical cord blood and we compared their respective values between the two groups, along with auxological parameters at birth and at 10 months of postnatal age. Results: Our results show that both DAG and leptin have lower values in SGA infants and correlate with most of the anthropometrical parameters at birth. Both hormones correlate with weight at 10 months in SGA infants, but this correlation is lacking in AGA infants. Whereas DAG in the cord blood can be considered a predictor for weight at 10 months (β=0.207, p=0.001), the same cannot be stated about leptin (β=0.078, p=0.195). Conclusion: DAG and leptin are involved in both intrauterine and postnatal development, but the extent of their role is still to be determined. abstract_id: PUBMED:11798925 Placental leptin correlates with intrauterine fetal growth and development. Objective: To study the role of placental leptin in intrauterine cord leptin production and its relationship with neonatal anthropometry. Methods: Forty women and their babies were enrolled in this study, which was approved by Xinhua Hospital (Shanghai, China) and Jiangbei Hospital (Nanjing, China). Placental tissue was assayed for leptin mRNA by reverse transcription/polymerase chain reaction (RT/PCR), and assayed for ob gene protein, leptin, by Western-blot and immunohistochemistry. Blood was taken from the umbilical cord of the babies at delivery. Serum leptin was measured by radio-immunoassay. Neonatal anthropometric measurements were recorded within 48 hours after delivery. Linear regression analysis was used to explore the relationship between placental leptin, cord leptin and neonatal anthropometric measures. Results: The ob gene was expressed in placental tissue at levels comparable to or greater than those in adipose tissue.
Comparison of the relative levels of leptin to beta-actin mRNA by multiplex RT/PCR revealed that the placentas of the small for gestational age (SGA) neonates expressed leptin mRNA at significantly lower levels (0.61 +/- 0.15) than those of the appropriate for gestational age (AGA) neonates (0.83 +/- 0.20; P = 0.0034), while the placentas of the large for gestational age (LGA) neonates expressed leptin mRNA at significantly higher levels (1.00 +/- 0.23) than those of the AGA neonates (P = 0.043). Immunohistochemical techniques showed the immunostaining pattern in the cytoplasm of trophoblastic cells. Western-blot showed that the placentas of the SGA neonates expressed leptin at significantly lower levels (0.26 +/- 0.05 ng/mg) than those of the AGA neonates (0.34 +/- 0.09 ng/mg; P = 0.0076), while the placentas of the LGA neonates expressed leptin at significantly higher levels (0.43 +/- 0.10 ng/mg) than those of the AGA neonates (P = 0.021). Linear regression analysis showed placental ob gene transcription and leptin translation correlated significantly with cord leptin (r = 0.39 and 0.43) and neonatal Ponderal Index (r = 0.66 and 0.69). Conclusions: The placenta provides a source of leptin for the growing fetus, and this placental leptin might be a growth factor in intrauterine fetal development. abstract_id: PUBMED:27565941 Relationship between insulin-like growth factor 1, leptin and ghrelin levels and catch-up growth in small for gestational age infants of 27-31 weeks during neonatal intensive care unit admission. Aim: Poor post-natal growth is related to later morbidity and poor cognitive development in preterm infants. We investigated the relationship between plasma insulin-like growth factor 1 (IGF-1), leptin, active ghrelin levels and post-natal growth in preterm infants small for gestational age (SGA). Methods: Plasma IGF-1, leptin and active ghrelin levels were measured at birth and at 2, 4, 6 and 8 weeks after birth in 42 very low birthweight (VLBW) infants (born between 27 and 31 weeks of gestation), including 14 SGA infants with extrauterine growth restriction (EUGR), 6 SGA infants without EUGR and 22 appropriate-for-gestational-age infants. Results: At birth, IGF-1 levels in SGA infants without EUGR did not differ significantly from those in SGA infants with EUGR. However, IGF-1 levels in SGA infants without EUGR were as high as those observed in appropriate-for-gestational-age infants and were significantly different from those in SGA infants with EUGR at 4 and 8 weeks of age. Leptin and ghrelin levels did not differ significantly among the three groups at any time point. Conclusion: IGF-1 is related to catch-up growth in SGA VLBW infants during neonatal intensive care unit admission; however, this does not appear to be the case for leptin and ghrelin. IGF-1 level monitoring may be useful for predicting EUGR in preterm VLBW infants. abstract_id: PUBMED:25843159 The biomarkers of fetal growth in intrauterine growth retardation and large for gestational age cases: from adipocytokines to a metabolomic all-in-one tool. Adipose tissue is no longer considered inert; the literature describes the role it plays in the production of many substances, such as adiponectin, visfatin, ghrelin, S100B, apelin, TNF, IL-6 and leptin. These molecules have specific roles in humans, and their potential as biomarkers useful for identifying alterations related to intrauterine growth retardation and large for gestational age neonates is emerging.
Infants born under such conditions undergo metabolic changes, such as fetal hypo- or hyperinsulinemia, which may lead to the development of dysmetabolic syndrome and other chronic diseases in adulthood. In this review, these biomarkers are analyzed specifically, and how metabolomics may serve as an advantageous tool for the detection, discrimination and prediction of metabolic alterations and diseases is discussed. Thus, a holistic approach such as metabolomics could aid the prevention and early diagnosis of metabolic syndrome. abstract_id: PUBMED:21353474 Leptin administration affects growth and skeletal development in a rat intrauterine growth restriction model: preliminary study. Objective: Skeletal abnormalities are one of the hallmarks of growth delay during gestation. The aim of this study was to determine changes induced by leptin in skeletal growth and development in a rat model of intrauterine growth retardation (IUGR) and to elucidate the possible underlying mechanisms. Methods: Intrauterine growth retardation was induced prepartum, and the effects of leptin administered to mothers prenatally or to offspring postnatally were studied. Radii were harvested and tested mechanically and structurally. Tibias were evaluated for growth-plate morphometry. Results: On day 40 postpartum, total bone length and mineral density and tibial growth-plate width and numbers of cells within its zones of offspring treated with leptin were significantly greater than in the control group. Conclusion: Postnatal leptin administration in an IUGR model improves the structural properties and elongation rate of bone. These findings could pave the way to preventing some phenotypic presentations of IUGR. abstract_id: PUBMED:19095781 Intrauterine growth restriction and adult disease: the role of adipocytokines. Intrauterine growth restriction (IUGR) is the failure of the fetus to achieve his/her intrinsic growth potential, due to anatomical and/or functional disorders and diseases in the feto-placental-maternal unit. IUGR results in significant perinatal and long-term complications, including the development of insulin resistance/metabolic syndrome in adulthood. The thrifty phenotype hypothesis holds that intrauterine malnutrition leads to an adaptive response that alters the fetal metabolic and hormonal milieu designed for intrauterine survival. This fetal programming predisposes to an increased susceptibility for chronic diseases. Although the mechanisms controlling intrauterine growth are poorly understood, adipose tissue may play an important role in linking poor fetal growth to the subsequent development of adult diseases. Adipose tissue secretes a number of hormones, called adipocytokines, important in modulating metabolism and recently involved in intrauterine growth. This review aims to summarize reported findings concerning the role of adipocytokines (leptin, adiponectin, ghrelin, tumor necrosis factor (TNF), interleukin-6 (IL6), visfatin, resistin, apelin) in early life, while attempting to speculate on mechanisms through which differential regulation of adipocytokines in IUGR may influence the risk for development of chronic diseases in later life. Answer: The research presented in the abstracts suggests that placental leptin may indeed be an important growth factor in intrauterine and neonatal development.
Leptin, the protein product of the ob gene, is not only produced by adipocytes but is also expressed in placental tissue and in human placental cell lines, indicating that the placenta is a source of leptin production in utero (PUBMED:9200375). The presence of leptin mRNA in placental tissue at levels comparable to or greater than adipose tissue supports this notion (PUBMED:11780443; PUBMED:11798925). The correlation between leptin and adiposity found in children and adults is also observed in newborns, with serum leptin concentrations in newborns being increased more than three-fold compared to children in Tanner stages 1 and 2 when controlling for adiposity. This suggests that leptin concentrations in newborns are not explained by adiposity alone (PUBMED:9200375). Additionally, placental leptin has been shown to correlate with intrauterine fetal growth and development, with the placentas of small for gestational age (SGA) neonates expressing leptin mRNA and protein at significantly lower levels than those of appropriate for gestational age (AGA) neonates, and the placentas of large for gestational age (LGA) neonates expressing higher levels (PUBMED:11780443; PUBMED:11798925). Furthermore, leptin administration has been found to affect growth and skeletal development in a rat intrauterine growth restriction (IUGR) model, suggesting that leptin may play a role in mitigating some effects of IUGR (PUBMED:21353474). The relationship between leptin and IUGR is also highlighted by the finding that increased leptin synthesis in placental trophoblast cells is associated with IUGR (PUBMED:28651476). In summary, the evidence points to a significant role for placental leptin in fetal growth and development, with potential implications for understanding and possibly intervening in conditions such as IUGR and other growth-related disorders (PUBMED:9200375; PUBMED:11780443; PUBMED:11798925; PUBMED:21353474; PUBMED:28651476).
Instruction: Do NSAIDs affect longitudinal changes in knee cartilage volume and knee cartilage defects in older adults? Abstracts: abstract_id: PUBMED:19699379 Do NSAIDs affect longitudinal changes in knee cartilage volume and knee cartilage defects in older adults? Background: The effects of nonsteroidal anti-inflammatory drugs (NSAIDs) on knee osteoarthritis progression are unclear. The aim of this longitudinal study was to determine the associations between use of NSAIDs and changes in knee cartilage volume and knee cartilage defects over 2.9 years in older adults. Methods: T(1)-weighted fat-suppressed magnetic resonance imaging on the right knee was performed in a total of 395 randomly selected subjects (mean age 62 years, range 51-80 years, and 50% female) to assess knee cartilage volume at tibial sites and knee cartilage defects (0-4 scale) at baseline and 2.9 years later. Medication use in the last month was recorded by questionnaire. Results: Compared with nonusers of NSAIDs (n = 334), users of cyclooxygenase (COX)-2 inhibitors (n = 40) had decreased knee cartilage defect development in the medial tibiofemoral compartment (odds ratio [OR] 0.4, 95% confidence interval [CI], 0.2-0.99), whereas users of conventional NSAIDs (n = 21) had increased knee cartilage defect development in both medial (OR 3.1, 95% CI, 1.0-9.1) and lateral (OR 2.6, 95% CI, 1.0-6.7) tibiofemoral compartments. Comparing users of COX-2 inhibitors with users of conventional NSAIDs, the latter had higher knee cartilage volume loss (-5.3% vs -3.1% at medial tibia and -3.6% vs -1.1% at lateral tibia; all P <.05). All associations were adjusted for potential confounders including knee pain and radiographic osteoarthritis. Conclusions: This study suggests that nonselective NSAIDs may have deleterious effects, while selective COX-2 inhibitors might have beneficial effects on knee cartilage. Randomized controlled trials examining knee structure to confirm this finding are warranted. abstract_id: PUBMED:31699372 Association of glucose homeostasis and metabolic syndrome with knee cartilage defects and cartilage volume in young adults. Objective: To describe the associations of glucose homeostasis and metabolic syndrome (MetS) measures with knee cartilage defects and cartilage volume in young adults. Methods: Fasting blood biochemistry, waist circumference and blood pressure measures were collected 4-5 years prior to knee magnetic resonance imaging (MRI) scans. Blood measures included levels of glucose, insulin, triglyceride and high-density lipoprotein cholesterol (HDL-C). Homeostatic model assessment 2-insulin resistance (HOMA2-IR), HOMA2-beta cell function (HOMA2-β), HOMA2-insulin sensitivity (HOMA-S) and MetS were calculated or defined. Knee cartilage defects and cartilage volume were measured from MRI scans. Data were analysed using log binomial or linear regressions. Results: Among 328 participants (47.3% were females, aged 26-36 years at baseline), 40 (12.7%) had hyperglycaemia and 21 (6.7%) had MetS. Glucose homeostasis measures (except fasting glucose) were associated with tibiofemoral cartilage defects (fasting insulin: relative risk (RR) 1.05, 95% confidence interval (CI) 1.01 to 1.08; HOMA2-IR: 1.44, 1.08 to 1.92; HOMA2-β: 2.59, 1.33 to 5.07; HOMA2-S: 0.36, 0.18 to 0.72), but not patellar cartilage defects. There were no associations between glucose homeostasis measures and knee cartilage volume. 
High waist circumference (RR 2.32, 95% CI 1.18 to 4.54) and low HDL-C (RR 1.99, 95% CI 1.08 to 3.69) were associated with tibiofemoral cartilage defects, but no other associations were observed between MetS or its components and cartilage defects or volume. Conclusion: Insulin resistance, high waist circumference and low HDL-C were associated with higher risk of tibiofemoral cartilage defects, suggesting glucose homeostasis and some MetS components may affect early cartilage damage in young adults. abstract_id: PUBMED:17763451 Static knee alignment is associated with the risk of unicompartmental knee cartilage defects. Although knee malalignment is a risk factor for the progression of unicompartmental knee osteoarthritis (OA), it is unclear how this relationship is mediated. Cartilage defects are known to predate cartilage loss and the onset of knee OA, and it may be that knee malalignment increases the risk of unicompartmental knee cartilage defects. Knee radiographs and MRI were performed on a total of 202 subjects, 36.6% of whom had radiographic knee OA, to determine the relationship between static knee alignment and knee cartilage defects. Analyses were performed for the entire cohort, as well as for healthy and OA subgroups. For every 1 degree increase in a valgus direction, there was an associated reduced risk of the presence of cartilage defects in the medial compartment of subjects with knee OA (p = 0.02), healthy subjects (p = 0.002), and the combined group (p < 0.001). Moreover, for every 1 degree increase in a valgus direction, there was an associated increased risk of the presence of lateral cartilage defects in the OA group (p = 0.006), although the relationship between change toward genu valgum and lateral compartment cartilage defects did not persist for the healthy group (p = 0.16). This cross-sectional study has demonstrated that knee alignment is associated with the risk for compartment-specific knee cartilage defects in both healthy and arthritic people. Given that the natural history of cartilage volume reduction appears to be predated by the presence of cartilage defects, whether knee alignment affects the longitudinal progression from cartilage defects to cartilage loss requires further examination. abstract_id: PUBMED:16320339 Association of prevalent and incident knee cartilage defects with loss of tibial and patellar cartilage: a longitudinal study. Objective: To describe the association between prevalent and incident knee cartilage defects and loss of knee cartilage in male and female adults. Methods: A convenience sample of 325 subjects (mean age 45 years; age range 26-61 years) was evaluated at baseline and approximately 2 years later. Knee cartilage volume, cartilage defect scores (0-4 scale), and joint surface area were determined using T1-weighted fat-suppression magnetic resonance imaging techniques. Height, weight, and radiographic evidence of osteoarthritis were measured by standard protocols. Results: Multivariable analysis revealed that baseline cartilage defect scores at the medial tibia, lateral tibia, and patella had a dose-response association with the annual rate of change in knee cartilage volume at the corresponding site (beta = -1.3% to -1.2% per grade; P < 0.05 for all comparisons). In addition, an increase in knee cartilage defect score (change of ≥1) was associated with higher rates of knee cartilage volume loss at all sites (beta = -1.9% to -1.7% per year; P < 0.01 for all comparisons).
Furthermore, a decrease in the knee cartilage defect score (change of ≤ -1) was associated with an increase in knee cartilage volume at all sites (beta = 1.0% to 2.7% per year; P < 0.05 for all comparisons). Conclusion: Prevalent knee cartilage defects are predictive of compartment-specific cartilage loss over 2 years. Both increases and decreases in knee cartilage defects are associated with changes in knee cartilage volume, which implies a potential for reversal of knee cartilage loss. abstract_id: PUBMED:36076236 Osteophytes mediate the associations between cartilage morphology and changes in knee symptoms in patients with knee osteoarthritis. Aims: To investigate whether the associations of cartilage defects and cartilage volume with changes in knee symptoms were mediated by osteophytes. Methods: Data from the Vitamin D Effects on Osteoarthritis (VIDEO) study were analyzed as a cohort. The Western Ontario and McMaster Universities Osteoarthritis Index was used to assess knee symptoms at baseline and follow-up. Osteophytes, cartilage defects, and cartilage volumes were measured using magnetic resonance imaging at baseline. Associations between cartilage morphology and changes in knee symptoms were assessed using linear regression models, and mediation analysis was used to test whether these associations were mediated by osteophytes. Results: A total of 334 participants (aged 50 to 79 years) with symptomatic knee osteoarthritis were included in the analysis. Cartilage defects were significantly associated with change in total knee pain, change in weight-bearing pain, and change in non-weight-bearing pain after adjustment for age, sex, body mass index, and intervention. Cartilage volume was significantly associated with change in weight-bearing pain and change in physical dysfunction after adjustment. Lateral tibiofemoral and patellar osteophytes mediated the associations of cartilage defects with change in total knee pain (49-55%) and change in weight-bearing pain (61-62%), and the association of cartilage volume with change in weight-bearing pain (27-30%) and dysfunction (24-25%). Neither cartilage defects nor cartilage volume had direct effects on change in knee symptoms. Conclusions: The significant associations between cartilage morphology and changes in knee symptoms were indirect and were partly mediated by osteophytes. abstract_id: PUBMED:24833783 A longitudinal study of the association between infrapatellar fat pad maximal area and changes in knee symptoms and structure in older adults. Background: The infrapatellar fat pad (IPFP) is of uncertain significance for knee osteoarthritis. The aim of this study was to describe the longitudinal associations between baseline IPFP maximal area and changes in knee pain, knee cartilage volume and cartilage defects in older adults. Methods: 356 community-dwelling male and female adults aged 50-80 years were measured at baseline and approximately 2.6 years later. T1-weighted or T2-weighted fat-suppressed MRI was used to assess maximal IPFP area, cartilage volume and cartilage defects at baseline and/or follow-up. Knee pain was assessed by the self-administered Western Ontario McMaster Osteoarthritis Index questionnaire.
Results: After adjustment for confounders, IPFP maximal area in women was significantly and negatively associated with changes in knee pain (β: -0.18 to -0.86 for total knee pain, pain at night while in bed, pain when sitting/lying and pain when standing upright, all p<0.05) but not with other knee pain subscales. IPFP maximal area in women was beneficially associated with change in tibial cartilage volume per annum (β: +1.56% per cm2 at medial site; +0.86% per cm2 at lateral site, both p<0.05), but not with change in patellar cartilage volume. Further, it was significantly associated with reduced risks of increases in medial cartilage defects (relative risk: 0.46 at tibial site, relative risk: 0.59 at femoral site; both p<0.05) but not with increases at other sites in women. No significant associations were found in men. Conclusions: While the associations are not fully consistent, IPFP maximal area appears to have a protective role for knee symptoms and cartilage damage in older female adults. abstract_id: PUBMED:28521839 Associations between systemic bone mineral density and early knee cartilage changes in middle-aged adults without clinical knee disease: a prospective cohort study. Background: Osteoarthritis has a high prevalence in people with high bone mineral density (BMD). Nevertheless, whether high systemic BMD predates early structural features of knee osteoarthritis is unclear. This study examined the association between systemic BMD and knee cartilage defect progression and cartilage volume loss in middle-aged people without clinical knee disease. Methods: Adults (n = 153) aged 25-60 years had total body, lumbar spine, and total hip BMD assessed by dual-energy X-ray absorptiometry at baseline (2005-2008), and tibial cartilage volume and tibiofemoral cartilage defects assessed by magnetic resonance imaging at baseline and follow-up (2008-2010). Results: Higher spine BMD was associated with increased risk for progression of medial (OR = 1.45, 95% CI 1.10, 1.91) and lateral (OR = 1.30, 95% CI 1.00, 1.67) tibiofemoral cartilage defects. Total hip BMD was also positively associated with the progression of medial (OR = 1.63, 95% CI 1.10, 2.41) and lateral (OR = 1.53, 95% CI 1.08, 2.18) tibiofemoral cartilage defects. Greater total body, spine, and total hip BMD were associated with increased rate of lateral tibial cartilage volume loss (for every 1 g/10 cm2 increase in total body BMD: B = 0.44%, 95% CI 0.17%, 0.71%; spine BMD: 0.17%, 95% CI 0.04%, 0.30%; total hip BMD: 0.29%, 95% CI 0.13%, 0.45%), with no significant associations for medial tibial cartilage volume loss. Conclusion: In middle-aged people without clinical knee disease, higher systemic BMD was associated with increased early knee cartilage damage. Further work is needed to clarify the effect of systemic BMD at different stages of the pathway from health through to disease in knee osteoarthritis, as new therapies targeting bone are developed for the management of knee osteoarthritis. abstract_id: PUBMED:14963201 Longitudinal study of the relationship between knee angle and tibiofemoral cartilage volume in subjects with knee osteoarthritis. Objectives: There is emerging evidence that knee alignment is associated with progression of osteoarthritis (OA). The aim of this study was to examine the relationship between baseline knee angle and the rate of cartilage loss in subjects with knee OA.
Methods: One hundred and seventeen subjects with knee OA had standing radiographs and MRI on their symptomatic knee at baseline and at the 1.9 +/- 0.2 yr follow-up. Knee cartilage volume was measured at baseline and follow-up. Knee angle was defined as the angle subtended by a line drawn through the mid-shaft of the femur with respect to one drawn through the mid-shaft of the tibia. Results: At baseline, in the medial compartment, as the angle decreased (i.e. was less varus) the tibial and femoral cartilage volume increased. In the lateral compartment, as the angle became more valgus, there was a reduction in tibial and femoral cartilage volume. In the longitudinal study, for every 1 degree increase in baseline varus angulation there was an average annual loss of medial femoral cartilage of 17.7 µl [95% confidence interval (CI) 6.5-28.8]. Although not statistically significant, there was a trend for a similar relationship between loss of medial tibial cartilage volume and baseline knee angle. In the lateral compartment, there was an average loss of tibial cartilage volume of 8.0 µl (95% CI 0.0-16.0) for every 1 degree increase in valgus angle. Conclusions: Baseline knee angle is associated with the rate of cartilage loss in the knee. Further work will be needed to determine whether therapies aimed at modifying the knee angle will reduce the progression of knee OA. abstract_id: PUBMED:19598333 Women have increased rates of cartilage loss and progression of cartilage defects at the knee than men: a gender study of adults without clinical knee osteoarthritis. Objective: Women have an increased risk of knee osteoarthritis (OA). However, little is known about gender differences in cartilage health before the onset of clinical knee OA. The aim of this study was to examine whether there are longitudinal gender differences in knee cartilage in a cohort of healthy, asymptomatic adults with no clinical knee disease. Methods: Two hundred seventy-one participants (169 women) aged between 50 and 79 years with no clinical history of knee pain or pathology were examined using magnetic resonance imaging at baseline and 2.3 years later. From these images, changes in tibial and patella cartilage volume and progression of cartilage defects were determined. Results: In multivariate analyses, after adjustment for potential confounders, the average annual percentage loss of total tibial cartilage volume was significantly greater in women (1.6% [95% CI, 1.1-2.2]) than in men (0.4% [95% CI, -0.4 to 1.2]) (P = 0.05 for difference). Likewise, the female gender was also associated with an increased risk for the progression of tibiofemoral cartilage defects (odds ratio, 3.0; 95% CI, 1.1-8.1; P = 0.03). At the patella, the average annual percentage loss of cartilage volume was significantly greater in women (2.3% [95% CI, 1.7-2.8]) than in men (0.8% [95% CI, 0.1-1.6]) (P = 0.02 for difference). Conclusions: The female predisposition toward knee OA may, at least in part, be due to gender differences in cartilage health, even before the onset of clinical knee disease. Understanding the mechanism for these gender differences may provide a means to reduce the risk of knee OA in women. abstract_id: PUBMED:25550336 Association between MRI-detected knee joint regional effusion-synovitis and structural changes in older adults: a cohort study. Objective: To describe the cross-sectional and longitudinal associations between knee regional effusion-synovitis and structural changes in older adults.
Methods: A total of 977 subjects were randomly selected from the local community (mean 62 years, 50% female) at baseline and 404 were followed up 2.6 years later. T2-weighted MRI was used to assess knee effusion-synovitis in four subregions: suprapatellar pouch, central portion, posterior femoral recess and subpopliteal recess. Knee cartilage defects, cartilage volume and bone marrow lesions (BMLs) were measured using MRI at baseline and follow-up. Results: Cross-sectionally, effusion-synovitis in most subregions was significantly associated with a higher risk of cartilage defects, BMLs and reduced cartilage volume. Longitudinally, suprapatellar pouch effusion-synovitis at baseline predicted an increase in cartilage defects (p<0.01), loss of cartilage volume (p=0.04) and an increase in BMLs (p=0.02) in multivariable analyses. The significant associations of effusion-synovitis with cartilage volume and BMLs disappeared after adjustment for cartilage defects. Effusion-synovitis in the whole knee joint (p<0.01) and the subpopliteal recess (p<0.05) was consistently associated with longitudinal changes in cartilage defects but not in cartilage volume and BMLs. Conclusions: There are independent associations between knee joint effusion-synovitis and knee cartilage defects in both cross-sectional and longitudinal analyses, suggesting a potential causal relationship. The associations of effusion-synovitis with BMLs and cartilage volume were largely dependent on cartilage defects, suggesting potential causal pathways. Answer: Yes, NSAIDs do affect longitudinal changes in knee cartilage volume and knee cartilage defects in older adults. A longitudinal study that aimed to determine the associations between the use of NSAIDs and changes in knee cartilage volume and knee cartilage defects over 2.9 years in older adults found that, compared with nonusers of NSAIDs, users of cyclooxygenase (COX)-2 inhibitors had decreased knee cartilage defect development in the medial tibiofemoral compartment. In contrast, users of conventional NSAIDs had increased knee cartilage defect development in both medial and lateral tibiofemoral compartments. Additionally, users of conventional NSAIDs had higher knee cartilage volume loss compared to users of COX-2 inhibitors. These associations were adjusted for potential confounders including knee pain and radiographic osteoarthritis. The study suggests that nonselective NSAIDs may have deleterious effects, while selective COX-2 inhibitors might have beneficial effects on knee cartilage. However, randomized controlled trials examining knee structure are warranted to confirm this finding (PUBMED:19699379).
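As a side note on the statistics quoted in this answer: odds ratios such as the OR 0.4 (95% CI 0.2-0.99) reported in PUBMED:19699379 are conventionally estimated from a 2x2 exposure-outcome table, with the confidence interval computed on the log-odds scale. The short sketch below illustrates that arithmetic only; the cell counts are hypothetical placeholders, since the abstract does not report the underlying table.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table, where
    a/b = exposed with/without the outcome and
    c/d = unexposed with/without the outcome."""
    or_est = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(or_est) - z * se_log_or)
    upper = math.exp(math.log(or_est) + z * se_log_or)
    return or_est, lower, upper

# Hypothetical counts for illustration only (not taken from the study):
# 8 of 40 COX-2 users vs 100 of 334 nonusers develop new cartilage defects.
print(odds_ratio_wald_ci(8, 32, 100, 234))  # ~ (0.59, 0.26, 1.31)
```

A confidence interval whose upper bound stays below 1 (as in the reported 0.2-0.99) is what allows a protective association to be called statistically significant at the 5% level.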
Instruction: Is the predischarge exercise test valid in patients younger than 40 years old after myocardial infarct for determination of multivascular disease? Abstracts: abstract_id: PUBMED:9304164 Is the predischarge exercise test valid in patients younger than 40 years old after myocardial infarct for determination of multivascular disease? Background And Objectives: Conventional exercise testing before hospital discharge is the most useful procedure for estimating postinfarction prognosis and detecting multivessel coronary disease, which is associated with a poor long-term prognosis. There are no published reports on its use in younger myocardial infarction survivors. The aim of the study was to evaluate the sensitivity, specificity and predictive value of symptom-limited maximal exercise testing for multivessel disease diagnosis in young patients after myocardial infarction. Methods: Myocardial infarction survivors up to the age of 40 performed symptom-limited maximal exercise testing and underwent coronary arteriography before hospital discharge. Results: A total of 100 consecutive patients were included, although exercise tests and coronary angiographic studies were completed in only 83 of them. In this group, multivessel disease was confirmed in 27 patients (15 with positive tests and 12 with normal exercise testing). In the remaining 56 young adults without multivessel involvement, positive tests were only observed in 15 patients and normal tests in 41. Thus, a sensitivity of 56%, specificity of 73%, positive predictive value of 50% and negative predictive value of 77% were found. When patients showed high-risk exercise test criteria, the exercise test positive predictive value increased to 80%. Conclusions: Due to the lower sensitivity of this test in young myocardial infarction survivors for detecting multivessel artery disease, we note the need for complementary predischarge tests such as isotopic imaging, stress echocardiography or coronary angiography. abstract_id: PUBMED:3192838 Exercise testing after myocardial infarction: relative values of the low-level predischarge and the postdischarge exercise test. This study was undertaken to compare the relative values of the low-level predischarge exercise test and the postdischarge (6 weeks) symptom-limited test in 518 consecutive patients admitted with an acute myocardial infarction. Of the patients who did not develop significant ST segment depression or angina during the predischarge test, the symptom-limited test also remained negative in 91.5 and 91.9% of the patients, respectively. Similar results were obtained with ST segment elevation and the systolic blood pressure response during the two exercise tests, with only 2.1 and 11.4% changing from normal to abnormal, respectively. Discriminant function analysis was done to predict the occurrence of coronary events (unstable angina, reinfarction, cardiac failure, cardiac death) with use of the data from the exercise tests together with other clinical and investigational data. The jackknife method correctly classified 71.9 and 71.4% of the patients with the data from the predischarge exercise test and symptom-limited test, respectively. Combining the data from the two tests improved the overall predictive accuracy to only 75.0%. It is concluded that the routine performance of a symptom-limited test 6 to 8 weeks after infarction does not reveal any significant additional information in those patients who have undergone a predischarge low-level exercise test.
Thus, the 6- to 8-week test should be restricted to selected patients after myocardial infarction. abstract_id: PUBMED:6526050 Ineligibility for predischarge exercise testing after myocardial infarction in the elderly: implications for prognosis. This study describes the clinical profile and prognosis of elderly patients not eligible for predischarge exercise testing. The database consisted of 133 patients 55-64 years of age, and 111 patients older than 64 years of age, who survived an acute myocardial infarction. Follow-up was one year. In the younger age group, 24 (18%) patients were unable to perform the test, in contrast to 63 (57%) of the elderly subjects. In these two groups, one-year mortality rates were 13% and 37%, compared with 6% and 4% for the respective patients eligible for stress testing. Clinical profiles and radionuclide ejection fractions were similar between the ineligible patients in the two age groups. Ejection fraction measurement was the best predictor of late mortality in those patients who did not have an exercise test. It is concluded that ineligibility for a predischarge exercise test identifies a high-risk group, especially in patients older than 64 years of age. abstract_id: PUBMED:3492907 Predictive importance of clinical findings and a predischarge exercise test in patients with suspected unstable coronary artery disease. The prognostic information of clinical variables and a predischarge exercise test was studied in 400 patients (282 men, 118 women) admitted to the coronary care unit with suspected unstable coronary artery disease, that is, recurring chest pain of new onset, increasing anginal pain in formerly stable angina pectoris or suspected nontransmural acute myocardial infarction. Forty-nine coronary events occurred in the 276 men who performed the exercise test during the following year, whereas only 5 coronary events occurred among the 118 women. The only variable of prognostic importance in women was nontransmural myocardial infarction. In men, the clinical variables increasing age, duration of angina, ST-segment or T-wave changes on the resting electrocardiogram, and increasing angina or nontransmural myocardial infarction as inclusion criteria were associated with increased occurrence of coronary artery bypass surgery, transmural myocardial infarction or cardiac death. Findings of ST-segment depression, limiting chest pain or low rate-pressure product during the exercise test were of greater value than any clinical variable in predicting coronary artery bypass surgery, transmural myocardial infarction or cardiac death. Within all clinical subgroups of men, the results of the exercise test had an additive predictive value for future coronary events. Combinations of clinical data and exercise test results enabled the best identification of patients with high or low risk for coronary events. abstract_id: PUBMED:2124975 Late recurrent ischemia in infarct patients with a normal predischarge exercise test after thrombolysis. We investigated the prognostic value of a normal predischarge exercise test in 109 patients after myocardial infarction treated with i.v. thrombolysis within 4 h. In 29 of these 109 patients, elective PTCA or bypass surgery was performed for prognostic reasons after coronary angiography; 80 patients were treated conservatively with drug therapy. Recurrent postinfarct angina early after hospital discharge was the reason for PTCA or bypass surgery in 4 of 80 patients.
Twenty-three of the remaining 76 conservatively treated patients developed recurrent ischaemia during long-term follow-up of 12.0 +/- 6.2 months, including one patient with reinfarction. Late recurrent ischaemia during long-term follow-up was observed in one third of the conservatively treated patients with a normal predischarge exercise test, although a high percentage (30%) of patients in this subgroup had been treated with PTCA or bypass surgery mainly for prognostic reasons. The predischarge exercise test is therefore of limited value for the detection of still-viable myocardium at risk of further ischaemic events after acute myocardial infarction and thrombolysis. abstract_id: PUBMED:8421972 Impact of treatment strategy on predischarge exercise test in the Thrombolysis in Myocardial Infarction (TIMI) II Trial. Predischarge supine bicycle ergometry was used to assess persistent myocardial ischemia in postinfarction patients who received thrombolytic therapy and were randomized to an invasive versus conservative strategy in the Thrombolysis in Myocardial Infarction (TIMI) II trial. The frequency of ischemic responses in both strategies and the 1-year prognostic importance of the different exercise test outcomes were examined. At 14 days, the percentage of patients with any adverse outcome (including death, presence of exercise-induced ST-segment depression, or inability to perform the exercise test) was 33.7% of 1,681 randomly assigned to the invasive strategy compared with 34.6% of 1,658 randomly assigned to the conservative strategy (p = 0.57). The 1-year mortality was greater in patients who did not perform the predischarge exercise test (7.7%) than in those who did (1.8%) (p < 0.001); the former were older, and a greater proportion were women, had a more frequent history of myocardial infarction, and more extensive coronary artery disease (p < 0.01 for each comparison). The 1-year mortality in patients with exercise-induced ST-segment depression or chest pain was only 1.4% (3 of 22) among those randomly assigned to the conservative strategy, where coronary angiography and revascularization were recommended if the test result was abnormal (relative risk compared with those without ST-segment depression or chest pain 0.6; 99% confidence interval 0.1 to 2.9). (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:8551170 Ischemia monitoring with on-line vectorcardiography compared with results from a predischarge exercise test in patients with acute ischemic heart disease. Information from 24-hour monitoring with on-line vectorcardiography, starting immediately after admission, was compared with results from a predischarge exercise test 3-13 days after admission. A total of 169 patients with acute myocardial infarction and 73 patients with unstable angina pectoris were investigated. Patients were followed for 487 +/- 135 days. During the follow-up period, 19 patients (8%) died from cardiac causes and 34 (14%) were hospitalized for a myocardial infarction. The QRS vector difference (QRS-VD), ST change vector magnitude (STC-VM), ST vector magnitude (ST-VM), and ST vector leads X, Y, Z were monitored. Patients with ST depression on the exercise test showed higher occurrence of transient, supposedly ischemic, episodes of QRS-VD, STC-VM, and ST-VM than patients without ST depression. The sensitivity and specificity of identifying patients with ST depression at the exercise test were, respectively, 71 and 47% for QRS-VD episodes, 58 and 56% for ST-VM episodes, and 55 and 65% for STC-VM episodes.
The maximum ST depression at the exercise test was related to the maximum ST depression in vector lead X (r = .44, P < .001) and the number of STC-VM (r = .40, P < .001), ST-VM (r = .37, P < .001), and QRS-VD (r = .33, P < .001) episodes on the VCG. In multivariate analysis, maximum ST depression in vector lead X and STC-VM episodes were the best determinants for ST depression at the exercise test. In a Cox regression model, the optimal combination of exercise test data in patients who died from cardiac causes exhibited a global chi-square value of 20.0. The combination of these data and the number of STC-VM episodes increased the global chi-square value to 30.6. This study indicates that in patients with acute ischemic heart disease, early continuous vectorcardiographic monitoring may predict the results from a predischarge exercise test and also contribute independent prognostic information beyond that of exercise test data. abstract_id: PUBMED:2589187 Prognostic utility of predischarge dipyridamole-thallium imaging compared to predischarge submaximal exercise electrocardiography and maximal exercise thallium imaging after uncomplicated acute myocardial infarction. The prognostic value of predischarge dipyridamole-thallium scanning after uncomplicated myocardial infarction was determined by comparison with submaximal exercise electrocardiography and 6-week maximal exercise thallium imaging and by correlation with clinical events. Two endpoints were defined: cardiac events and severe ischemic potential. Of the 40 patients studied, 8 had cardiac events within 6 months (1 died, 3 had myocardial infarction and 4 had unstable angina requiring hospitalization). The finding of any redistribution on dipyridamole-thallium scanning was common (77%) in these patients and had poor specificity (29%). Redistribution outside of the infarct zone, however, had equivalent sensitivity (63%) and better specificity (75%) for events (p < 0.05). Both predischarge dipyridamole-thallium and submaximal exercise electrocardiography identified 5 of the 8 events (p = 0.04 and 0.07, respectively). The negative predictive accuracy for events for both dipyridamole-thallium and submaximal exercise electrocardiography was 88%. In addition to the 8 patients with events, 16 other patients had severe ischemic potential (6 had coronary bypass surgery, 1 had inoperable 3-vessel disease and 9 had markedly abnormal 6-week maximal exercise tests). Predischarge dipyridamole-thallium and submaximal exercise testing also identified 8 and 7 of these 16 patients with severe ischemic potential, respectively. Six of the 8 cardiac events occurred before the 6-week follow-up. A maximal exercise thallium test at 6 weeks correctly identified 1 of the 2 additional events within 6 months. Thallium redistribution after dipyridamole in coronary territories outside the infarct zone is a sensitive and specific predictor of subsequent cardiac events and identifies patients with severe ischemic potential. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:10462933 Analysis of the results from early low-level predischarge and late symptom-limited exercise tests and coronary angiographic examination in patients with uncomplicated myocardial infarction. The aim of our study was to evaluate the prognostic value of early predischarge and late symptom-limited exercise tests as opposed to the evidence from the coronary angiographic examination.
Twenty patients with an uncomplicated myocardial infarction were studied with an early predischarge exercise test (14-21 days) and a subsequent postdischarge late symptom-limited veloergometric test. All the patients underwent an exercise test designed according to the protocol of the International Health Organization. A coronary angiographic examination was carried out in all the studied patients. Thirteen patients had two positive veloergometric tests and significant stenosis on selective coronary angiography. Seven of the patients had two negative tests. Two of them did not have significant coronary stenosis on the angiographic assessment. Overall, 40% of the patients reached a higher workload during the late exercise test. There was no significant difference between the results of the early predischarge and the late symptom-limited veloergometric tests. A higher physical workload reached during the late exercise test did not indicate improved coronary blood flow. Positive early and late exercise tolerance tests indicated significant coronary stenosis. A negative exercise test did not exclude significant coronary stenosis. abstract_id: PUBMED:4055143 Stress testing predischarge and six weeks after myocardial infarction to compare submaximal and maximal exercise predischarge and to assess the reproducibility of induced abnormalities. Submaximal and maximal treadmill exercise tests were performed predischarge in 64 patients after acute myocardial infarction to assess the relative yield of residual ischaemic abnormalities. The reproducibility of individual abnormalities resulting from maximal stress tests performed predischarge and 6 weeks after infarction was also assessed in 55 of these patients. Compared with predischarge submaximal exercise testing, a maximal exercise test identified a significantly greater number of patients with residual myocardial ischaemia (26 vs. 15, P < 0.05), and this was associated with a significantly longer average maximal exercise duration (P < 0.001) and a higher rate-pressure product (P < 0.001). Among the 55 patients who had maximal stress tests both predischarge and 6 weeks after infarction, there was a significant lack of reproducibility in the occurrence of exercise-induced angina (P < 0.01) and an abnormal blood pressure response (P < 0.02). In contrast, exercise-induced ST segment depression and elevation and ventricular arrhythmias were relatively reproducible. More patients had an ischaemic test result (ST depression or angina) at the later test compared to the predischarge test (33 vs. 25 patients), but this increase was not statistically significant. There were, however, significant increases at the later test in mean maximal exercise duration (P < 0.001), mean maximal heart rate (P < 0.001) and heart rate-systolic blood pressure double product (P < 0.001). The majority of patients who had a cardiac event in the period between the two tests had a predischarge test abnormality. We conclude that a significantly greater number of patients with residual reversible myocardial ischaemia after infarction will be identified by symptom-limited exercise testing compared with a submaximal predischarge test. Because ST depression and elevation appear reproducible, patients who develop these abnormalities during a predischarge test do not, for prognostic reasons, need retesting 6 weeks after infarction.
Exercise induced angina pectoris and an abnormal blood pressure response, however, are highly variable, and in these patients a repeat test may be useful. Answer: The predischarge exercise test in patients younger than 40 years old after myocardial infarction has been evaluated for its effectiveness in detecting multivessel disease. The study found that the sensitivity of the predischarge exercise test in detecting multivessel artery disease in young myocardial infarction survivors was 56%, with a specificity of 73%, a positive predictive value of 50%, and a negative predictive value of 77% (PUBMED:9304164). When high-risk exercise test criteria were present, the positive predictive value increased to 80%. However, due to the lower sensitivity of this test in young patients, the study suggests the need for additional predischarge complementary tests, such as isotope imaging, stress echocardiography, or coronary angiography, to improve the detection of multivessel disease (PUBMED:9304164). In conclusion, while the predischarge exercise test can provide some indication of multivessel disease in young patients after myocardial infarction, its lower sensitivity suggests that it should not be solely relied upon for this determination, and additional diagnostic methods should be considered.
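The four test-performance figures quoted in this answer (sensitivity, specificity, PPV, NPV) are all derived from a single 2x2 table, so they can be sanity-checked together. Below is a minimal Python sketch of that arithmetic; the counts are hypothetical, chosen only to give percentages of the same order as those reported, since the study's raw 2x2 table is not given in the abstract.

```python
# Editor's illustration: how sensitivity, specificity, PPV and NPV
# all follow from one 2x2 table. The counts are hypothetical and are
# NOT the actual data behind PUBMED:9304164.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-performance measures from 2x2 counts."""
    sensitivity = tp / (tp + fn)   # positives detected among diseased
    specificity = tn / (tn + fp)   # negatives among non-diseased
    ppv = tp / (tp + fp)           # disease among test-positives
    npv = tn / (tn + fn)           # no disease among test-negatives
    return sensitivity, specificity, ppv, npv

# Hypothetical cohort: 25 with multivessel disease, 45 without.
sens, spec, ppv, npv = diagnostic_metrics(tp=14, fp=12, fn=11, tn=33)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, "
      f"PPV={ppv:.0%}, NPV={npv:.0%}")
# -> 56%, 73%, 54%, 75%: the same order as the reported 56/73/50/77
```

One practical use of such a sketch when editing: if a quoted quadruple of metrics cannot be reproduced by any integer 2x2 table of the stated sample size, at least one of the numbers is likely a typo.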
Instruction: Is the course of panic disorder the same in women and men? Abstracts: abstract_id: PUBMED:9585708 Is the course of panic disorder the same in women and men? Objective: Panic disorder with or without agoraphobia has a chronic relapsing course. Factors associated with poor outcome include early onset of illness and phobic avoidance. Several, but not all, authors have found a worse clinical course for women. Using observational, longitudinal data from the Harvard/Brown Anxiety Disorders Research Program, the authors analyzed remission and symptom recurrence rates in panic patients with respect to sex. Method: Male and female patients (N = 412) in an episode of panic with or without agoraphobia were assessed by structured interview and prospectively followed for up to 5 years. Data on remission, symptom recurrence, and comorbid psychiatric conditions for each sex were compared. Results: There were no significant differences between men and women in panic symptoms or level of severity at baseline. Women were more likely to have panic with agoraphobia (85% versus 75%), while men were more likely to have uncomplicated panic (25% versus 15%). The rates of remission for panic with or without agoraphobia at 5 years were equivalent in men and women (39%). Of the subjects who achieved remission, 25% of the women and 15% of the men reexperienced symptoms by 6 months. Recurrence of panic symptoms continued to be higher in women (82%) than men (51%) during the follow-up period and was not influenced by concurrent agoraphobia. Conclusions: This study extends previous findings by showing that not only are women more likely to have panic with concurrent agoraphobia, but they are more likely than men to suffer a recurrence of panic symptoms after remission of panic. abstract_id: PUBMED:11331798 An eight-year longitudinal comparison of clinical course and characteristics of social phobia among men and women. Objective: Social phobia is a chronic disorder with a higher prevalence among women than men. Data from an eight-year longitudinal study were analyzed to investigate the course of social phobia and to explore potential sex differences in the course and characteristics of the illness. Methods: Data were analyzed from the Harvard/Brown Anxiety Research Program, a naturalistic, observational study begun in 1989 in which patients with social phobia are assessed every six to 12 months. Treatment was observed but not prescribed by the program personnel. Data on comorbidity, remission, and health-related quality of life were collected for 176 patients with social phobia. Results: Only 38 percent of women and 32 percent of men experienced a complete remission during the eight-year study period, a difference that was not significant. A larger proportion of women than men had the generalized form of social phobia, although the difference was not significant. Women were more likely to have concurrent agoraphobia, and men had a higher rate of comorbid substance use disorders. Social phobia had a more chronic course among women who had low Global Assessment of Functioning scores and a history of suicide attempts at baseline than among men who had these characteristics. Health-related quality of life was similar for both men and women, except that women were slightly but significantly more impaired in household functioning. Conclusions: The chronicity of social phobia was striking for both men and women. 
Although remission rates did not differ significantly between men and women, clinicians should be alert to the fact that women with poor baseline functioning and a history of suicide attempts have the greatest chronicity of illness. abstract_id: PUBMED:12768651 Chronicity, relapse, and illness--course of panic disorder, social phobia, and generalized anxiety disorder: findings in men and women from 8 years of follow-up. Anxiety disorders are chronic illnesses that occur more often in women than men. Previously, we found a significant sex difference in the 5-year clinical course of uncomplicated panic disorder that was attributable to a doubling of the illness relapse rate in women compared to men. However, we have not detected a sex difference in the clinical course of panic with agoraphobia, generalized anxiety disorder (GAD), or social phobia (SP), which are conditions generally thought to be more chronic than uncomplicated panic disorder. Given that a longer follow-up period may be required to detect differences in clinical course for more enduring illnesses, we conducted further analyses on this same cohort after a more protracted interval of observation to determine whether sex differences would emerge or be sustained. Data were analyzed from the Harvard/Brown Anxiety Research Program (HARP), a naturalistic, longitudinal study that repeatedly assessed patients at 6 to 12 month intervals over the course of 8 years. Data regarding remission and relapse status were collected from 558 patients and treatment was observed but not prescribed. Cumulative remission rates were equivalent among men and women with all diagnoses. Patients who experienced remission were more likely to improve during the first 2 years of study. Women with GAD continued remitting late into the observation period and experienced fewer overall remission events by 8 years. However, the difference in course failed to reach statistical significance. Relapse rates for women were comparable to those for men who suffered from panic disorder with agoraphobia, GAD, and SP. Again, initial relapse events were more likely to occur within the first 2 years of observation. However, relapse events for uncomplicated panic in women were less restricted to the first 2 years of observation, and by 8 years the relapse rate for uncomplicated panic was 3-fold higher in women compared with men. Anxiety disorders are chronic in the majority of men and women, although uncomplicated panic is characterized by frequent remission and relapse events. Short interval follow-up shows sex differences in the remission and relapse rates for some but not all anxiety disorders. These findings suggest important differences in the clinical course among the various anxiety disorders and support nosological distinctions among the various types of anxiety. It may be that sex differences in the clinical course of anxiety disorders hold prognostic implications for patients with these illnesses. abstract_id: PUBMED:22461133 Clinical features of bipolar disorder comorbid with anxiety disorders differ between men and women. Background: Anxiety disorders are commonly comorbid with bipolar disorder (BP) and may worsen the course of illness, but the differential impact of specific anxiety disorders in men and women remains unknown.
Methods: We measured the impact of comorbid panic disorder (PD), social phobia, specific phobia, and obsessive-compulsive disorder (OCD) in 460 women and 276 men with Bipolar I Disorder (BPI) or schizoaffective disorder, bipolar type, from the National Institute of Mental Health Bipolar Genetics Initiative. We compared clinical characteristics in BP with and without each anxiety disorder in men and women separately, correcting for family relatedness. Results: Comorbid PD, OCD, and specific phobia were more common in women with BP than men. Comorbid social phobia correlated with increased risk of alcohol abuse in BP women, but not men. Women with comorbid PD attended fewer years of school. Comorbidity with OCD was associated with earlier age at the onset of BP for both genders. Comorbid PD, OCD, and specific phobia were associated with more antidepressant trials in BP, across both genders, compared to BP patients without these anxiety disorders. Conclusion: In BP, comorbid anxiety disorders are associated with increased risk for functional impairment, and women had different associated risks than men. Clinicians should be aware of an increased risk for comorbid PD, OCD, and specific phobia in women with BP, and an increased risk of alcohol abuse in women with BP and comorbid social phobia. abstract_id: PUBMED:24326900 Psychosocial benefits of cardiac rehabilitation among women compared with men. Purpose: Cardiac rehabilitation (CR) has been shown to reduce cardiac risk and improve the psychosocial functioning of participants. This study examines gender differences on several psychosocial indicators across the course of CR. Methods: Patients (N = 380; 67.9% men and 32.1% women) referred from local inpatient and outpatient settings at a southeastern US academic medical facility were assessed on reported levels of depression, anxiety, panic, anger, and relationship satisfaction, using the Burns Brief Mood Survey, at the start and conclusion of a CR program. Medical variables were also assessed but are not the focus of this report. Statistical analyses included 1-way, Kruskal-Wallis, and repeated-measures analysis of variance procedures, as well as χ² analyses. Results: Women reported more psychosocial symptoms at pre-CR than men, and overall, both groups improved across CR. Women with significant depression, anxiety, and panic experienced clinically significant benefit across CR. Although the percentage of men reporting clinically significant levels of anger decreased significantly across CR, clinically significant levels of anger did not significantly change among women. In addition, women did not report benefits in relationship dissatisfaction. Conclusion: This study provides further evidence that CR offers psychosocial benefit for women, as has been reported in several small clinical samples. Some notable gender differences on anger and relationship satisfaction were observed. Clinical attention may be warranted to facilitate improvement for symptoms of anger and relationship concerns among selected women who participate in CR. abstract_id: PUBMED:21714209 Differences in sense of coherence in difficult and aspirin-induced asthma and its links with severity of panic and depressive symptoms in subgroups of women and men. Aim: The author psychiatrically examined a group of 106 patients with difficult asthma and 100 patients with aspirin-induced asthma.
The special interest of the study was the differences in sense of coherence in both groups and its links with severity of panic and depressive symptoms in subgroups of asthmatic women and men. Methods: 106 consecutive adults with confirmed, physician-diagnosed difficult asthma and 100 patients with aspirin-induced asthma underwent psychiatric interview and assessment using M.I.N.I 5.0, Beck Depression Inventory (BDI), Panic And Agoraphobia Scale (PAS) and Sense of Coherence Scale (SOC-29). Psychiatric assessment was performed by an experienced liaison psychiatrist according to ICD-10 and DSM-IV diagnosis. In the difficult asthma group, there were 78 women (74%) and 28 men (26%). The average age was 51.3 (SD = 14.5) for women and 47.5 (SD = 12.7) for men. In the aspirin-induced asthma group there were 66 women (66%) and 34 men (34%). The average age was 52.7 (SD = 12.3) for women and 48.8 (SD = 13.0) for men. Results: In both groups of asthmatic patients women were the majority (74% with difficult asthma and 66% with aspirin-induced asthma) with a much higher level of anxiety and depressive symptoms than men. General coherence and coherence measured in all three sub-scales were lower in the difficult asthma group, especially in women with severe panic and depressive symptoms. Conclusions: It is possible that differences in coherence affect the development, course and severity of anxiety and depressive symptoms in asthmatic women and men. This may play a special role in the development of difficult asthma, especially among women. abstract_id: PUBMED:21614830 Links between defence mechanisms and severity of symptoms of panic disorder and depression in women and men with difficult and aspirin-induced asthma. Aim: The author psychiatrically examined a group of 106 patients with difficult asthma and 100 patients with aspirin-induced asthma. The special interest of the study was the links between defence mechanisms used by women and men from both groups and severity of their panic and depressive symptoms. Methods: 106 consecutive adults with confirmed, physician-diagnosed difficult asthma and 100 patients with aspirin-induced asthma underwent psychiatric interview and assessment using M.I.N.I 5.0, Beck Depression Inventory (BDI), Panic and Agoraphobia Scale (PAS) and the Defence Style Questionnaire. Psychiatric assessment was performed by an experienced liaison psychiatrist according to ICD-10 and DSM-IV diagnosis. In the difficult asthma group there were 78 women (74%) and 28 men (26%). The average age was 51.3 (SD = 14.5) for women and 47.5 (SD = 12.7) for men. In the aspirin-induced asthma group there were 66 women (66%) and 34 men (34%). The average age was 52.7 (SD = 12.3) for women and 48.8 (SD = 13.0) for men. Results: In both groups of asthmatic patients women were the majority (74% with difficult asthma and 66% with aspirin-induced asthma) with a higher level of anxiety and depressive symptoms than men. In general, neurotic and immature defence mechanisms were used significantly more often in the difficult asthma group than in the aspirin-induced asthma group. In both groups, in both women and men, there was a strong tendency to decrease the use of mature defence mechanisms and increase the use of neurotic and immature defence mechanisms in association with increasing severity of panic and depressive symptoms. Conclusions: It is possible that differences in defence mechanisms used by women and men with difficult asthma affect the development, course and severity of their anxiety and depressive symptoms.
This may play a special role in the development of the difficult asthma phenomenon. abstract_id: PUBMED:9750973 Characteristics of agoraphobia in women and men with panic disorder with agoraphobia. We compared female and male patients with panic disorder with agoraphobia (PDA) in terms of characteristics of agoraphobia (AG). Ninety-five patients (73 women and 22 men) with the SCID-based diagnosis of PDA were administered the National Institute of Mental Health Panic Questionnaire (NIMH PQ), and women and men were compared on the items of the NIMH PQ that pertain to AG and symptoms of panic attacks. Male and female patients did not differ significantly with respect to demographic characteristics, age of onset of panic disorder and AG, duration of PDA, and severity and frequency of symptoms experienced during panic attacks. Women avoided more situations than did men, but this difference was not statistically significant. Women avoided buses and being in unfamiliar places alone significantly more often. The only situation that was avoided more often by men, although not significantly, was staying at home alone. Women were significantly more likely to stay at home to avoid agoraphobic situations and significantly less likely to go outside of the home alone. When going outside, women required a companion significantly more often. There were significantly more married women than married men who required a spouse as a companion, and significantly more women with children than men with children who required a child as a companion. Women thought that AG had affected the overall quality of their lives significantly more adversely. Whereas the overall "profile" of agoraphobic situations does not seem to distinguish between female and male patients with AG, females may be more impaired and appear more dependent than men in terms of requiring companions to move outside of the home. Cultural and psychological factors may be most likely to account for these findings. abstract_id: PUBMED:36271705 Religion, women's health, and obstetrician-gynecologists. Religions continue to be a strong moral, even political, force in the world. They are often seen to be in conflict with women's health; we argue that this should not continue to be the case. The conflict can be traced back to when religions had their birth and early development in patriarchal communities in which women were marginalized to the edges of society. In addition, religious leadership has traditionally been dominated by men and exclusive of women. The recent introduction of new scientific technologies, which has empowered women to regulate and control their fertility, challenged traditional norms and raised a religion-inspired moral panic. However, a recent initiative has been gaining momentum. An enlightened religious leadership and a new generation of feminist religious activists are calling for a review of the original texts and a reinterpretation in a sociocultural context that is different from when they were first revealed. Obstetrician-gynecologists, while having to practice in a socioreligious context and continuing to face challenges in providing health care in religiously diverse societies, have a social responsibility to stand by women and to uphold that religions do not and should not stand in the way of advancing their health and rights. abstract_id: PUBMED:34517242 Men's anxiety: A systematic review.
Aim: Anxiety disorders are amongst the most commonly diagnosed mental illnesses amongst men; however, male-specific anxiety research is lacking. This review explores men's anxiety symptoms and disorders, including help-seeking, coping and the role of masculinity. Method: Four electronic database searches identified 8,333 citations, with 25 studies meeting inclusion criteria. Nineteen studies employed quantitative methods, five studies reported qualitative research, and one utilised mixed methods. Results: Unique profiles of anxiety, including psychosomatic symptoms, were identified and persisted over extended periods of time. Men commonly reported self-reliance over formal help-seeking, and typically managed anxiety symptoms through problem-based coping. Masculinity was related to anxiety in complex ways; adherence to norms of toughness could be protective against anxiety onset, while adherence to emotional restrictiveness and heterosexual presentation norms was positively associated with anxiety. The experience of, and help-seeking for, anxiety transgressed many men's adherence to masculinity norms, resulting in significant social and self-stigmas. Limitations: The anxiety measurement scales utilised were inconsistent across included studies and there was limited scope of research into panic disorders, phobias and agoraphobia. Conclusion: Findings demonstrate the enduring nature of anxiety for men and a potential under-reporting of symptoms, especially amongst younger men. To better tailor clinical care and public health resources to the needs of men with anxiety disorders, targeted research examining men's lived experiences of (and coping strategies for) anxiety is essential. Answer: The course of panic disorder is not the same in women and men. Studies have shown that there are differences in the clinical course and characteristics of panic disorder between the sexes. Women are more likely to have panic with concurrent agoraphobia and are more likely than men to suffer a recurrence of panic symptoms after remission of panic (PUBMED:9585708). Additionally, an eight-year longitudinal study found that while remission rates did not differ significantly between men and women with social phobia, women with poor baseline functioning and a history of suicide attempts had the greatest chronicity of illness (PUBMED:11331798). Further analysis from the Harvard/Brown Anxiety Research Program (HARP) after eight years of observation revealed that cumulative remission rates were equivalent among men and women with all diagnoses, but women with generalized anxiety disorder (GAD) continued remitting late into the observation period and experienced fewer overall remission events by 8 years. However, this difference in course did not reach statistical significance. Relapse rates for women were comparable to those for men who suffered from panic disorder with agoraphobia, GAD, and social phobia. Nevertheless, relapse events for uncomplicated panic in women were less restricted to the first 2 years of observation, and by 8 years the relapse rate for uncomplicated panic was 3-fold higher in women compared with men (PUBMED:12768651). In the context of bipolar disorder comorbid with anxiety disorders, comorbid panic disorder (PD), social phobia, specific phobia, and obsessive-compulsive disorder (OCD) were more common in women with bipolar disorder than men. Comorbid social phobia correlated with increased risk of alcohol abuse in bipolar women, but not men.
Women with comorbid PD attended fewer years of school, and comorbidity with OCD was associated with earlier age at the onset of bipolar disorder for both genders (PUBMED:22461133). These findings suggest that there are important differences in the clinical course of panic disorder and other anxiety disorders between women and men, which may have prognostic implications for patients with these illnesses.
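Several of the abstracts in this set summarize sex differences as odds ratios with 95% confidence intervals. As a reference for how such figures are computed from raw counts, here is a short Python sketch using the standard Wald interval on the log odds ratio. The 2x2 counts are hypothetical, loosely patterned on the reported recurrence proportions (82% of women versus 51% of men), and are not the studies' actual data.

```python
# Editor's illustration of an odds ratio with a Wald 95% CI from a
# 2x2 table. Counts are hypothetical, not taken from the cited studies.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = events/non-events in group 1; c, d = same in group 2."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - z * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lo, hi

# Hypothetical counts: 41/50 women vs 26/51 men relapse after remission.
or_, lo, hi = odds_ratio_ci(a=41, b=9, c=26, d=25)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The Wald interval is only one of several options (exact and profile-likelihood intervals behave better with small cell counts), so published CIs such as those quoted above need not match this formula exactly.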
Instruction: Is higher body temperature beneficial in ischemic stroke patients with normal admission CT angiography of the cerebral arteries? Abstracts: abstract_id: PUBMED:24482573 Is higher body temperature beneficial in ischemic stroke patients with normal admission CT angiography of the cerebral arteries? Background: Low body temperature is considered beneficial in ischemic stroke due to neuroprotective mechanisms, yet some studies suggest that higher temperatures may improve clot lysis and outcomes in stroke patients treated with tissue plasminogen activator (tPA). The effect of increased body temperature in stroke patients treated with tPA and with normal computed tomography angiography (CTA) on admission is unknown. We hypothesized a beneficial effect of higher body temperature in the absence of visible clots on CTA, possibly due to enhanced lysis of small, peripheral clots. Methods: Patients with ischemic stroke admitted to our Stroke Unit between February 2006 and April 2013 were prospectively registered in a database (Bergen NORSTROKE Registry). Ischemic stroke patients treated with tPA with normal CTA of the cerebral arteries were included. Outcomes were assessed by the modified Rankin Scale (mRS) after 1 week. An excellent outcome was defined as mRS=0, and a favorable outcome as mRS=0-1. Results: A total of 172 patients were included, of whom 48 (27.9%) had an admission body temperature ≥37.0°C, and 124 (72.1%) had a body temperature <37.0°C. Body temperature ≥37.0°C was independently associated with excellent outcomes (odds ratio [OR]: 2.8; 95% confidence interval [CI]: 1.24-6.46; P=0.014) and favorable outcomes (OR: 2.8; 95% CI: 1.13-4.98; P=0.015) when adjusted for confounders. Conclusion: We found an association between higher admission body temperature and improved outcome in tPA-treated stroke patients with normal admission CTA of the cerebral arteries. This may suggest a beneficial effect of higher body temperature on clot lysis in the absence of visible clots on CTA. abstract_id: PUBMED:23807851 Persistent middle cerebral artery occlusion associated with lower body temperature on admission. Background: Low body temperature is considered neuroprotective in ischemic stroke, yet some studies suggest that low body temperature may also inhibit clot lysis and recanalization. We hypothesized that low body temperature was associated with persistent proximal middle cerebral artery (MCA) occlusion in patients with acute ischemic stroke presenting with symptoms of proximal MCA occlusion, suggesting a possible detrimental effect of low body temperature on recanalization. Methods: All patients with acute ischemic stroke admitted to our Stroke Unit between February 2006 and August 2012 were prospectively registered in a database. Computed tomography (CT) angiography was performed in patients admitted <6 hours after stroke onset. Based on presenting symptoms, patients were classified according to the Oxford Community Stroke Project classification (OCSP). Patients with symptomatic proximal MCA occlusion were compared to patients with total anterior circulation infarct (TACI) without MCA occlusion on CT angiography. Results: During the study period, 384 patients with acute ischemic stroke were examined with CT angiography. A total of 79 patients had proximal MCA occlusion and 31 patients had TACI without MCA occlusion. Median admission body temperatures were lower in patients with MCA occlusion compared to patients without occlusion (36.3°C versus 36.7°C, P = 0.027).
Admission body temperature <36.5°C was independently associated with persistent MCA occlusion when adjusted for confounders in multivariate analyses (odds ratio 3.7, P = 0.007). Conclusion: Our study showed that low body temperature on admission was associated with persistent proximal MCA occlusion. These results may support a possible detrimental effect of low body temperature on clot lysis and recanalization. abstract_id: PUBMED:2512785 Correlation of angiographic and sequential CT findings in patients with evolving cerebral infarction. The usefulness of CT and angiography for predicting the final ischemic brain damage resulting from supratentorial ischemic stroke was evaluated in 36 patients. CT was performed within 4 hr and angiography within 6 hr after the onset of symptoms. CT was used to assess the site and size of parenchymal brain damage and angiography was used to evaluate the cerebral circulation. A 3-month follow-up CT study was used to determine the site and size of final ischemic damage. Angiography was normal in six patients and showed complete occlusion in 30. Angiographic findings in patients with arterial occlusion were classified as either internal carotid artery occlusion or middle cerebral artery (MCA) occlusion. MCA occlusions were subdivided into occlusion before the origin of internal lenticulostriate arteries (type 1), occlusion beyond the origin of these branches (type 2), occlusion at the bifurcation of the main trunk (type 3), and occlusion of the peripheral branches (type 4). Collateral blood supply was also studied. Early CT findings were positive in 25 of 36 patients; the lentiform nucleus alone, the lentiform nucleus and the cortex, or only the cortex were involved. In all patients with positive early CT findings, angiography showed an arterial occlusion, often located in the main trunk of the MCA. Involvement of the lentiform nucleus on early CT was always seen in patients with internal carotid artery or type 1 MCA occlusion.(ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:34489106 A review of the anatomy of the middle cerebral artery for the era of thrombectomy: a radiologic tool based on CT angiography and perfusion CT. The middle cerebral artery (MCA) is one of the principal intracranial vessels. It is also the one that is most often involved in ischemic cerebrovascular disease, which accounts for a significant volume of brain imaging tests. With recent updates in the management of ischemic stroke (including intravascular treatment in increasingly distal vessels and in an increasingly widening spectrum of patients), it is becoming more important to know the details of the cerebral vascular anatomy to reach accurate diagnoses quickly and thereby improve patients' prognoses. For these reasons, we present this anatomic review of the MCA, reviewing its segments and anatomic limits, its branching patterns, and its anatomic variants. We also provide a radiologic tool based on correlations between CT angiography and perfusion CT to facilitate the identification of the points of occlusion within the branches of the MCA, taking into account its different variants. abstract_id: PUBMED:15359101 Clinical implication of cerebral artery calcification on brain CT. Background And Purpose: Arterial calcification is a process of atherosclerosis, usually taking place within advanced atheromatous plaque. In the coronary arteries, calcification seen on computed tomography (CT) often indicates severe stenosis, and is noted in almost all patients with coronary symptoms. 
Calcification of the cerebral arteries is also frequently observed on brain CT, but its clinical significance has yet to be clarified. To evaluate the clinical significance of cerebral arterial calcification on brain CT, we investigated the angiographic and transcranial Doppler ultrasonography (TCD) findings of calcified cerebral arteries. Methods: The subjects were 57 patients with ischemic stroke, all of whom underwent brain CT, digital subtraction cerebral angiography and TCD. Stroke subtypes were defined according to TOAST classification. Two clinicians, blinded with respect to the patients' clinical history and to each other's interpretation, determined the presence of cerebral arterial calcification and the degree of angiographic stenosis. Results: Cerebral artery calcification was noted in 23 patients (40%), involving a total of 46 arteries, consisting of 23 internal carotid, 15 vertebral, 5 basilar, 2 middle cerebral and 1 anterior cerebral artery. The patients with calcification were significantly older and more hypertensive than those without calcification. All the patients with calcification showed either large artery atherosclerosis or lacunar stroke. Hemodynamic alterations measured by TCD were rarely observed in calcified arteries. The severity and extent of the atherosclerotic changes were significantly greater in the patients with calcification than those without calcification. Conclusion: These results suggest that cerebral arterial calcification does not directly indicate hemodynamically significant atherosclerotic stenosis involving calcified segments, but indicates the underlying etiology of either large artery atherosclerosis or lacunar stroke, and raises the possibility of a diffuse atherosclerotic process being present in the cerebral circulation. abstract_id: PUBMED:10029258 Conventional cerebral angiography in children with ischemic stroke. A retrospective review of conventional cerebral angiograms of 46 children with ischemic stroke was undertaken. Comparison was made with findings on magnetic resonance imaging and magnetic resonance angiography where available. Thirty-six children (78%) underwent magnetic resonance angiography in addition to conventional cerebral angiography. Seven patients had normal cerebral angiograms. Magnetic resonance angiography was diagnostic in 25 of 28 patients with large vessel occlusion, stenosis, or moyamoya syndrome. Conventional angiography was abnormal in four of nine patients with a normal magnetic resonance angiography. All patients with normal conventional angiograms also had normal magnetic resonance angiograms. Conventional angiography, either diagnostic or yielding further information, altered management in five patients with arterial dissection, one patient with large vessel occlusion, one patient with large vessel stenosis, and four patients with arteritis. On the basis of this experience, a clinical algorithm for the use of conventional cerebral angiography in the investigation of ischemic stroke in children is proposed. abstract_id: PUBMED:33741772 Correlation of orthostatic hypotension with extracranial carotid and intracranial cerebral arteries atherosclerosis in patients with acute ischemic stroke under cerebral angiography. Objective: The clinical management of patients with acute ischemic stroke (AIS) is complicated by orthostatic hypotension, which might have a close relationship with the atherosclerosis of cerebral arteries.
The primary objectives were to evaluate the relationship of orthostatic hypotension with extracranial carotid arteries atherosclerosis (ECAS) and intracranial atherosclerosis (ICAS) in AIS patients. Methods: This study was a prospective cohort analysis of consecutive AIS patients under cerebrovascular angiography. A total of 289 patients were included. Orthostatic hypotension was defined as a systolic BP decline ≥20 mmHg or a diastolic BP decline ≥10 mmHg within 3 min of standing. Univariate and multivariate analyses were performed to investigate the association of the clinical variables with orthostatic hypotension. Results: Orthostatic hypotension was identified in 80 (27.7%) of all patients. ECAS (≥70%) and ICAS (≥50%) were found in 39 (13.5%) and 71 (24.6%) patients, respectively. In multivariate analysis, only diabetes mellitus (odds ratio = 2.00, 95% confidence interval, 1.12-3.58, P = 0.019) and ECAS (odds ratio = 1.97, 95% confidence interval, 1.54-2.51, P < 0.001) were independent risk factors for orthostatic hypotension. Conclusions: Orthostatic hypotension is a relatively common finding among patients with AIS. AIS patients should be screened for orthostatic hypotension, especially those with severe ECAS and diabetes mellitus. abstract_id: PUBMED:36036137 Dissection of cerebral arteries as a cause of ischemic stroke. Dissection of cerebral arteries is a rare cause of stroke in the general population. The article describes 5 clinical cases of patients with ischemic stroke after dissection and discusses the issues of etiology, diagnosis and treatment tactics. The possibility of using high-tech methods of treatment for this subtype of stroke is emphasized. The diagnostic method of choice is neuroimaging, including cerebral angiography. abstract_id: PUBMED:34801183 A review of the anatomy of the middle cerebral artery for the era of thrombectomy: A radiologic tool based on CT angiography and perfusion CT. The middle cerebral artery (MCA) is one of the principal intracranial vessels. It is also the one that is most often involved in ischemic cerebrovascular disease, which accounts for a significant volume of brain imaging tests. With recent updates in the management of ischemic stroke (including intravascular treatment in increasingly distal vessels and in an increasingly widening spectrum of patients), it is becoming more important to know the details of the cerebral vascular anatomy to reach accurate diagnoses quickly and thereby improve patients' prognoses. For these reasons, we present this anatomic review of the MCA, reviewing its segments and anatomic limits, its branching patterns, and its anatomic variants. We also provide a radiologic tool based on correlations between CT angiography and perfusion CT to facilitate the identification of the points of occlusion within the branches of the MCA, taking into account its different variants. abstract_id: PUBMED:25981225 CT perfusion cerebral blood volume does not always predict infarct core in acute ischemic stroke. We investigated the practical clinical utility of the CT perfusion (CTP) cerebral blood volume (CBV) parameter for differentiating salvageable from non-salvageable tissue in acute ischemic stroke (AIS). Fifty-five patients with AIS were imaged within 6 h from onset using CTP. Admission CBV defect (CBVD) volume was outlined using previously established gray and white matter CBV thresholds for infarct core. Admission cerebral blood flow (CBF) hypoperfusion and CBF/CBV mismatch were visually evaluated.
Truncation of the ischemic time-density curve (ITDC) and hypervolemia status at admission, recanalization at 24-h CT angiography, hemorrhagic transformation (HT) at 24 h and/or 7-day non-contrast CT (NCCT), final infarct volume as indicated by 3-month NCCT defect (NCCTD) and 3-month modified Rankin Score were determined. Patients with recanalization and no truncation had the highest correlation (R = 0.81) and regression slope (0.80) between CBVD and NCCTD. Regression slopes were close to zero for patients with admission hypervolemia with/without recanalization. Hypervolemia underestimated (p = 0.02), while recanalization and ITDC truncation overestimated (p = 0.03), the NCCTD. Among patients with confirmed recanalization at 24 h, 38% had an admission CBF/CBV mismatch within normal-appearing areas on the respective NCCT. Of these patients, 83% developed infarction in admission hypervolemic CBF/CBV mismatch tissue. A reduction in CBV is a valuable predictor of infarct core when the acquisition of ITDC data is complete and hypervolemia is absent within the tissue destined to infarct. Raised or normal CBV is not always indicative of salvageable tissue, contrary to the current definition of penumbra. Answer: According to the study reported in abstract PUBMED:24482573, higher body temperature on admission was found to be independently associated with excellent outcomes (odds ratio [OR]: 2.8; 95% confidence interval [CI]: 1.24-6.46; P=0.014) and favorable outcomes (OR: 2.8; 95% CI: 1.13-4.98; P=0.015) in ischemic stroke patients treated with tissue plasminogen activator (tPA) who had normal computed tomography angiography (CTA) of the cerebral arteries. The authors suggest that a higher body temperature may have a beneficial effect on clot lysis in the absence of visible clots on CTA. This implies that in the specific context of ischemic stroke patients with normal admission CTA who are treated with tPA, a higher body temperature could be beneficial.
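A quick editorial cross-check on reported odds ratios: a Wald-type 95% CI is symmetric on the log scale, so the OR should sit near the geometric mean of the CI limits, and an approximate two-sided p-value can be recovered from the interval alone. The Python sketch below applies this to the first OR quoted in the answer above (2.8, 95% CI 1.24-6.46, P=0.014); the Wald-interval assumption is ours, since the abstract does not state how its CIs were computed.

```python
# Editor's illustration: recovering an approximate p-value from a
# reported OR and 95% CI, assuming a Wald interval (symmetric on the
# log scale). This is a consistency check, not the paper's method.
import math
from statistics import NormalDist

def p_from_or_ci(odds_ratio, lo, hi, z=1.96):
    se = (math.log(hi) - math.log(lo)) / (2 * z)      # SE of log(OR)
    z_stat = math.log(odds_ratio) / se
    p = 2 * (1 - NormalDist().cdf(abs(z_stat)))       # two-sided p
    geo_mean = math.sqrt(lo * hi)                     # ~OR if Wald CI
    return p, geo_mean

p, gm = p_from_or_ci(2.8, 1.24, 6.46)
print(f"recovered p ~ {p:.3f}, geometric mean of CI = {gm:.2f}")
# -> p ~ 0.014 and gm ~ 2.83, both consistent with the reported values
```

Applied to the second quoted interval (OR 2.8, 95% CI 1.13-4.98), the same check gives a geometric mean of about 2.37 rather than 2.8, which is exactly the kind of internal inconsistency this test is designed to flag for verification against the source.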
Instruction: Patient-initiated delay at a genitourinary medicine clinic: are there public health consequences? Abstracts: abstract_id: PUBMED:18812392 Patient-initiated delay at a genitourinary medicine clinic: are there public health consequences? Objectives: To assess the public health consequence of patients electing not to be seen within 48 hours in a genitourinary medicine (GUM) clinic. Methods: A 3-month retrospective case notes review was carried out for 310 new and re-book patients who chose to wait for more than 48 hours to be seen. Results: Altogether, 10% (310/3110) of patients opted to be seen beyond 48 h. Their median wait was 6 days including weekends and 4 days excluding weekends. Demographic details did not vary except for the male to female ratio of 1:1.7 (1:1 in patients seen within 48 h). We found that no symptomatic patients or asymptomatic contacts of those with known sexually transmitted infections (STIs) reported sex with a new partner after booking their appointment. No patient reported sex with a recently treated partner who consequently required re-treatment, and none suffered a complication of an STI. In addition, there were no cases of new HIV infection in this group, and the rates of STIs were similar compared with patients seen within 48 hours of contacting the unit. Conclusions: Despite 10% of patients choosing to delay attendance beyond 48 h, no adverse public health outcomes were demonstrated. abstract_id: PUBMED:19148310 The delay in diagnosis of tuberculosis in the Monteregie region of Quebec, Canada. Introduction: Despite being more prevalent in developing countries, tuberculosis (TB) remains an important health problem in Canada. Long diagnosis delays of respiratory tuberculosis are associated with adverse consequences not only for the patient but also for the community. From a public health perspective, identification of factors associated with long delays of diagnosis could help reduce these delays. Objectives: 1) To describe diagnosis delays of respiratory tuberculosis in Monteregie; 2) to identify the characteristics of patients and factors associated with longer diagnosis delays; 3) to identify consequences of these delays. Methods: The study is descriptive and cross-sectional. Data were obtained from notifiable diseases files of the Public Health Department of the Health and Social Services Agency of Monteregie. The diagnosis delay was calculated using the first symptomatic date and the date of diagnosis. For continuous variable analyses, Student t tests and an ANOVA test were done. For categorical variables, Pearson's chi-squared test and a Mann-Whitney test were done. Results: The average delay of diagnosis for the 115 cases studied was 92.2 days (CI 80.6-103.8). Weight loss and/or non-specific general malaise were associated with a longer diagnosis delay. No association was found between the diagnosis delay and possible consequences of longer delays. Discussion And Conclusion: Most patients had a diagnosis delay longer than two months. A larger study that would divide the total diagnosis delay into a patient delay and a suspicion delay (health care system delay) could permit a better identification of factors that favour long delays. abstract_id: PUBMED:24810216 Do genitourinary physicians report notifiable diseases? A survey in South East England. Genitourinary medicine work requires public health actions. Notifiable infections may be seen in genitourinary medicine, but concerns over confidentiality could delay public health actions and outbreak management.
To assess genitourinary medicine clinicians' awareness of notification of infectious disease, reporting practices and liaison with Health Protection Units, we sent postal surveys to 140 genitourinary medicine clinicians (SE HPA region) that explored prior public health training, Health Protection Unit liaison and management of possible clinical scenarios. Fifty-seven respondents reported median genitourinary medicine experience of 12 years; 29% had prior public health training, nine on the British Association for Sexual Health and HIV course. A total of 90% had heard of Health Protection Units and understood their role. Approximately one-third would not report key diseases at all, most reporting only on laboratory confirmation. In all, 83% would only notify acute hepatitis on lab confirmation; 50% would report suspected measles immediately (44% awaiting lab confirmation) and 40% would not pass on any patient details without consent. Clinicians have good knowledge of notification of infectious disease conditions, but responses suggest this knowledge is not always applied in a clinical context. Reporting delays occur while waiting for lab confirmation, and liaison with local Health Protection Units may be hindered by confidentiality concerns, potentially delaying public health action. Doctors with prior public health training are more likely to report appropriately. abstract_id: PUBMED:30124453 Precision Medicine in Parkinson's Disease - Exploring Patient-Initiated Self-Tracking. Background: Individually tailored healthcare, in the form of precision medicine, holds substantial potential for the future of medicine, especially for a complex disorder like Parkinson's disease (PD). Patient self-tracking is an under-researched area in PD. Objective: This study aimed to explore patient-initiated self-tracking in PD and discuss it in the context of precision medicine. Methods: The first author used a smartphone app to capture finger-tapping data and also noted times for medication intakes. Results: Data were collected during four consecutive days. Only data from the first two days were complete enough to analyze, leading to the realization that the collection of data over a period of time can pose a significant burden to patients. From the first two days of data, a dip in finger function was observed around the time for the second medication dose of the day. Conclusions: Patient-initiated self-tracking enabled the first author to glean important insights about how her PD symptoms varied over the course of the day. Symptom tracking holds great potential in precision medicine and can, if shared in a clinical encounter, contribute to the learning of both patient and clinician. More work is needed to develop this field, and extra focus needs to be given to balancing the burden of tracking for the patient against any expected benefit. abstract_id: PUBMED:24501232 Screening for alcohol use disorders in a genitourinary medicine and contraception clinic: a service evaluation. Summary: National Institute for Health and Clinical Excellence guidance advises that all patients routinely undergo alcohol screening. Our service evaluation in a sexual health clinic assessed the acceptability of alcohol screening, the prevalence of hazardous drinking and the uptake of referral to a specialist alcohol service. Three services were evaluated: a genitourinary medicine service, a contraception service and an integrated sexual health service. AUDIT was used to screen 276 patients; those scoring ≥8 were classified as hazardous drinkers.
Screening uptake was 58% overall, 58% for the contraception service, 71% for the genitourinary medicine service and 100% for the integrated sexual health service. Overall, 28% had AUDIT scores ≥8. Hazardous drinking rates were higher in the genitourinary medicine (32%) and integrated sexual health services (52%) compared to the contraception service (21%); 7% of hazardous drinkers accepted referral. High rates of hazardous drinking were identified within all three groups, but uptake of referral was low. Alternative strategies to specialist alcohol service referral are discussed. abstract_id: PUBMED:24100285 Genitourinary Medicine trainees' experience and training needs in the management of patients disclosing sexual violence. The British Association for Sexual Health and HIV (BASHH) Sexual Violence group assessed the level of confidence of Genitourinary Medicine (GUM) trainees in managing patients disclosing sexual violence using an online survey. Twenty-eight percent of current UK GUM trainees responded. The results demonstrated wide variation in trainees' experience and confidence in managing these patients, which was dependent on the patient type, as well as the gender of the trainee and the number of years' experience the trainee had in the specialty. There were also differences in the reported availability of training in this specialist area. Regular accessible training in identification and management of patients disclosing sexual violence is recommended for GUM trainees. abstract_id: PUBMED:35854651 Haemodialysis patient's adherence to treatment: Relationships among nurse-patient-initiated participation and nurse's attitude towards patient participation. Aims And Objectives: To evaluate the relationship between nurse-patient-initiated participation, nurses' attitudes towards patient's participation, and patients' adherence to treatment. Specifically, to (1) explore nurse-patient participation during haemodialysis and quantify the information into measurable indices; (2) determine the haemodialysis patient's adherence to treatment; (3) describe nurses' attitudes towards patient participation; and (4) establish the relationships between nurse-patient-initiated participation, nurses' attitudes towards patient participation and patients' adherence to treatment. Background: To improve haemodialysis patients' health, it is crucial to identify nurses' and patients' factors facilitating adherence to treatment. Design: An exploratory-sequential mixed-methods (quantitative and qualitative) design. Methods: All nurses working at a dialysis ward (n = 30) and their randomly selected patients (n = 102) participated. Qualitative data on nurse-patient-initiated participation were derived from transcribed nurse-patient conversations and quantified for further analyses. Nurses' attitudes towards patient participation were collected via questionnaire, and adherence to treatment via observed reduction in prescribed haemodialysis time. [CONSORT-SPI guidelines]. Results: Content analysis of the conversations indicated that nurse-initiated participation focused on patient's medical condition, treatment plan and education; while patients initiated more small talk. Non-adherence to treatment was significant (Mean = 0.19 h; SD = 0.33). Regression analyses indicated that nurses' attitude towards participation was negatively linked to patient adherence, while patient-nurse-initiated participation was unrelated. 
Nurses' attitudes towards patient participation moderated the relationship between nurse-patient-initiated participation and patient adherence: the more positive the attitude towards inclusion, the more negative the link between patient- or nurse-initiated participation and patient adherence. Conclusions: The findings provided paradoxical insights: nurses' positive attitudes towards participation led them to accept the patient's position of shortening haemodialysis treatment, so that adherence to care decreased. Relevance To Clinical Practice: Nurses require education on negotiating methods to help achieve patient adherence while respecting the patient's opinion. Patients should be educated on how to approach nurses, seeking the information they need. abstract_id: PUBMED:38014480 Clinical and ocular abnormalities in DEGCAGS syndrome-Developmental delay with gastrointestinal, cardiovascular, genitourinary, and skeletal abnormalities. Purpose: To describe clinical and ocular abnormalities in a case of Developmental Delay with Gastrointestinal, Cardiovascular, Genitourinary, and Skeletal Abnormalities (DEGCAGS syndrome). Methods: A clinical report. Case Description: An infant born to a consanguineous Middle Eastern family was delivered by cesarean section because of in utero growth restriction, premature labor, and breech presentation. Post-partum medical problems included hypotension, generalized hypotonia, bradycardia, apnea requiring resuscitation and positive pressure ventilation, facial dysmorphia, skeletal malformations, and disorders of the gastrointestinal, immune, urinary, respiratory, cardiac, and visual systems. The family reported that a previous child had severe hypotonia at birth and was given the diagnosis of hypoxic ischemic encephalopathy; that child remains on a ventilator in a chronic care facility. Our patient was found to be homozygous for a novel pathogenic missense variant in the ZNF699 zinc finger gene on chromosome 19p13, causing a syndrome known as Developmental Delay with Gastrointestinal, Cardiovascular, Genitourinary, and Skeletal Abnormalities (DEGCAGS syndrome). We review this variable syndrome, including abnormalities of the visual system not described previously. Conclusions: We describe what is presumably the 15th child identified with DEGCAGS syndrome and the first individual with homozygous missense variants in the ZNF699 gene who had a complete clinical examination and detailed retinal imaging. abstract_id: PUBMED:35075673 Causes and consequences of diagnostic delay in Guillain-Barré syndrome in a UK tertiary center. Introduction/aims: Understanding the potential causes and consequences of diagnostic delay in Guillain-Barré syndrome (GBS) could improve quality of care and outcomes. We aimed to determine these causes and consequences in our cohort of patients with GBS. Methods: We retrospectively reviewed records of subjects with GBS admitted to our center at University Hospitals Birmingham, UK, between January 2005 and December 2020. We evaluated time to diagnosis from presentation, factors associated with diagnostic delay, and its potential consequences. Results: We included 119 consecutive subjects. Diagnostic delay of at least 5 days from first presentation occurred in 27 of 119 patients (22.7%).
Diagnostic delay was associated with age >60 years (odds ratio [OR], 3.58; 95% confidence interval [CI], 1.44-8.85), pre-existing cardiac/respiratory disease (OR, 4.10; 95% CI, 1.46-11.54), pre-existing diabetes (OR, 10.38; 95% CI, 2.47-43.69), documented normal initial neurological examination (OR, 2.49; 95% CI, 1.03-6.02), initial assessment by primary care (OR, 3.33; 95% CI, 1.22-9.10) and at least one visit for medical attention (OR, 10.29; 95% CI, 3.81-27.77). Diagnostic delay was not associated with length of inpatient stay, intensive care unit admission, ventilation, ability to walk at discharge, or inpatient mortality. Independent associations with diagnostic delay were observed for at least one visit for medical attention (OR, 10.15; 95% CI, 3.64-28.32) and pre-existing cardiac/respiratory disease (OR, 3.98; 95% CI, 1.19-13.28). An association of diagnostic delay with inpatient mortality was ascertained specifically in subjects with classic GBS (OR, 5.33; 95% CI, 1.1-25.87). Discussion: Diagnostic delay in GBS results from patient-specific factors and patient pathways. A high index of suspicion is appropriate for certain patient groups. Prospective studies are needed to further investigate this topic. abstract_id: PUBMED:36636790 Type of cancer and complementary and alternative medicine are determinant factors for the patient delay experienced by children with cancer: A study in West Java, Indonesia. Introduction: Most pediatric cancer patients in developing countries present at an advanced stage due to delayed diagnosis, which is an important barrier to effective care. The objective of this study was to evaluate the factors associated with patient delay and to explore significant parental practices as risk factors for patient delay. Methods: This was a sequential mixed methodology, utilizing data from the Indonesian Pediatric Cancer Registry for clinical variables and completed interviews with parents using structured questionnaires to obtain their sociodemographic data. A binary logistic regression analysis model was fitted to identify factors associated with patient delay. Additional semi-structured interviews related to parental practice of using complementary and alternative medicine (CAM) were administered to 30 parents. Thematic framework analysis was performed on qualitative data to explore determinant factors of the parental practice of using CAM. Results: We interviewed 356 parents of children with cancer. The median patient delay was 14 days (interquartile range [IQR]: 6-46.5 days). The most extended delay was in patients with malignant bone tumors (median 66 days, IQR: 14-126). In multivariable logistic regression analysis, solid cancer (odds ratio [OR] = 5.22, 95% confidence interval [CI]: 2.79-9.77, p < .001) and use of CAM (OR = 1.86, 95% CI: 1.13-3.08, p = .015) were associated with patient delay. Qualitative interviews highlighted key issues related to the parental factors determining CAM use, including vague initial childhood cancer symptoms, parental health-seeking behavior, CAM availability and accessibility, as well as barriers within healthcare facilities. Conclusion: Type of cancer and use of CAM are essential factors that cause patient delay. These should be addressed in future childhood cancer awareness efforts and the childhood cancer diagnosis pathway.
Answer: According to the retrospective case notes review carried out over three months for 310 new and re-book patients who chose to wait for more than 48 hours to be seen at a genitourinary medicine (GUM) clinic, no adverse public health outcomes were demonstrated despite 10% of patients choosing to delay attendance beyond 48 hours. The study found that no symptomatic patients or asymptomatic contacts of those with known sexually transmitted infections (STIs) reported sex with a new partner after booking their appointment. Additionally, no patient reported sex with a recently treated partner who consequently required re-treatment, and none suffered a complication of an STI. There were also no cases of new HIV infection in this group, and the rates of STIs were similar compared with patients seen within 48 hours of contacting the unit (PUBMED:18812392).
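One of the abstracts above (PUBMED:19148310) reports a mean diagnosis delay of 92.2 days with a 95% CI of 80.6-103.8 over 115 cases. If the interval is a normal-approximation interval (mean ± 1.96 × SE), which is an assumption on our part since the abstract does not say, the standard error and standard deviation can be back-calculated from those three numbers:

```python
# Editor's illustration: back-calculating SE and SD from a reported
# mean and 95% CI, assuming a normal-approximation interval
# (mean +/- 1.96 * SE). The assumption is ours, not the abstract's.
import math

def se_sd_from_ci(lo, hi, n, z=1.96):
    se = (hi - lo) / (2 * z)       # half-width of the CI over z
    sd = se * math.sqrt(n)         # SE = SD / sqrt(n)
    return se, sd

se, sd = se_sd_from_ci(lo=80.6, hi=103.8, n=115)
print(f"SE ~ {se:.1f} days, SD ~ {sd:.1f} days")
# -> SE ~ 5.9 days, SD ~ 63.5 days around the reported 92.2-day mean
```

An implied SD of roughly 63 days on a 92-day mean suggests a wide, likely right-skewed delay distribution, which fits the abstract's note that most patients waited longer than two months.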
Instruction: Pericardium closure after heart operations: a safety option? Abstracts: abstract_id: PUBMED:21103745 Pericardium closure after heart operations: a safety option? Objectives: Primary pericardium closure may reduce the risk of cardiac injury during chest reopening, especially injury to the right ventricle, aorta and coronary bypass grafts. Nevertheless, concern about adverse hemodynamic effects prevents most heart surgeons from closing the pericardium. Methods: We evaluated 48 consecutive patients undergoing open heart surgery: in 30 patients (group A) the pericardium was closed, while in 18 patients, serving as a control group (group B), the pericardium was left open. All patients underwent posteroanterior and lateral chest roentgenograms before surgery and one week postoperatively. Postoperative evaluation also included echocardiograms, ECG and postoperative enzyme analysis. Results: There were no deaths or complications in either group (acute myocardial infarction, stroke, bleeding or cardiac tamponade). Statistically significant differences were observed between the two groups, especially in echocardiographic parameters and cardiothoracic ratio, but without clinical impact. Conclusion: Pericardium closure is a simple method to facilitate resternotomy during subsequent reoperative procedures. However, cardiac surgeons should be aware of the transient deterioration in hemodynamics associated with it, even though there was no clinical significance in this study. abstract_id: PUBMED:795444 Closure of pericardium after open heart surgery. A way to prevent postoperative cardiac tamponade. Between July 1968 and December 1975, 821 patients underwent open heart operations. In 596 cases the pericardium was left open and in 225 the pericardium was closed. Forty-one patients in the open pericardium group required reoperation and 23 of these had tamponade. Four patients in the closed pericardium group had reoperation but there was not a single case of tamponade. In most cases that required reoperation the bleeding was from extrapericardial sources. Absence of tamponade in the closed pericardium group can be explained by the fact that blood from extrapericardial sources of bleeding cannot collect round the heart because the pericardium is closed. Thus closure of the pericardium helps to prevent tamponade. Reoperations some months or years after the original operation are technically easier and less hazardous if the pericardium has been closed, because the closed pericardium prevents the heart from becoming adherent to the back of the sternum and also because there are fewer adhesions in the pericardial cavity. abstract_id: PUBMED:1152493 Influence of primary closure of the pericardium after open-heart surgery on the frequency of tamponade, postcardiotomy syndrome, and pulmonary complications. Experiences with primary closure of the pericardium in a series of 100 patients undergoing open-heart operations are described. The pericardium was kept under tension during the operation to minimize shrinkage and permit closure at the end of the procedure. In 28 patients one pleural space was opened for drainage, whereas in 72 patients intra- and extrapericardial sumps alone were used for drainage. Measurements of sump drainage revealed that most postoperative bleeding originates from outside the pericardium. There were no instances of cardiac tamponade although 19 patients lost more than 1 L of blood after operation and 5 required reoperation for hemorrhage.
Transpleural drainage tubes were shown to be ineffective and, in addition, were associated with a fourfold increase in postcardiotomy syndrome and a significantly greater frequency of pleural effusion and atelectasis when compared with the use of mediastinal sump drainage alone. We have concluded that closing the pericardium and using mediastinal sump drainage minimizes the risk of cardiac tamponade and allows early localization of the site of postoperative bleeding. Another advantage of pericardial closure and drainage is that postoperative adhesions and postcardiotomy syndrome will be less significant. As a consequence, the danger of injuring the heart in a subsequent operation is lessened. abstract_id: PUBMED:3284493 The use of autologous pericardium for ventricular aneurysm closure. Closure of the ventriculotomy following ventricular aneurysm resection usually requires buttressing material to provide strength and hemostasis. Although Teflon felt has usually been used, this material is bulky, noncompliant, and prone to infection. Autologous pericardium appears to offer an ideal substitute without the disadvantages of artificial material. A simple technique is described to apply pericardial tissue as a natural buttressing agent for ventriculotomy closure. abstract_id: PUBMED:21477771 The heart and pericardium. This article describes the normal anatomy of the heart and pericardium. Included is a detailed description of the pericardium, mediastinal nerves, cardiac chambers, valves, coronary arteries and veins, and the conduction tissues. As cardiac and thoracic surgery continue to become more specialized and procedures become less invasive, it is essential for the cardiothoracic surgeon to have a thorough working knowledge of cardiothoracic anatomy. abstract_id: PUBMED:34545589 Comparison on the properties of bovine pericardium and porcine pericardium used as leaflet materials of transcatheter heart valve. Background: To obtain a smaller delivery diameter, porcine pericardium has been used as a substitute for bovine pericardium as the leaflet material of transcatheter heart valves (THV). However, the differences between the two had not been fully studied. Therefore, this study compared in detail the microstructure, biochemical and mechanical properties of the two materials and the hydrodynamics of THVs made from them. Methods: Firstly, the microstructure of the pericardium was analyzed by staining and scanning electron microscopy; secondly, the biochemical properties of the pericardium after different processes were compared by heat shrinkage temperature testing, free amino and carboxyl concentration testing, enzyme degradation testing, and subcutaneous implantation calcification analysis in rats; finally, the mechanical properties were evaluated by uniaxial tensile testing before and after the pericardium was crimped, and the hydrodynamics of the THV were studied according to the ISO 5840 standard. Results: Compared with bovine pericardium after the same processing, porcine pericardium showed a looser and finer fiber bundle structure, a similar free carboxyl concentration, lower resistance to enzyme degradation, and significantly lower calcification, bearing capacity and damage after being crimped, with better hydrodynamics and adaptation to lower cardiac output and deformation of the implantation position. Meanwhile, the dehydration process preserved almost all the biochemical advantages of both materials.
Conclusion: In this study, porcine and bovine pericardium showed some significant differences in biochemical and mechanical properties and in hydrodynamics. Based on the results, it is presumed that the thinner porcine pericardium might be more suitable for THVs of the right heart system, while more attention should be paid to the calcification of THVs made from bovine pericardium. abstract_id: PUBMED:15798062 Anatomic pitfalls of the heart and pericardium. Recent technologic advances have led to more frequent dedicated cross-sectional imaging of the heart. Faster scanning techniques, cardiac gating, and advanced postprocessing software allow improved visualization of finer anatomic details of the heart and pericardium compared with older techniques and software. Use of thin-section computed tomography (CT) or image reformatting in nonaxial planes may be helpful in some cases. The cardiac and pericardial structures are usually readily demonstrated with CT, even if chest CT is performed for evaluation of noncardiac structures. However, radiologists are expected to evaluate all structures on an image, and incidental findings are common. Radiologists must first be familiar with the normal anatomic structures of the heart and pericardium (eg, atria, ventricles, cardiac valves, pericardial recesses, paracardiac structures) to avoid mistaking them for pathologic processes. abstract_id: PUBMED:23098147 A review of the current status of pericardial closure following cardiac surgery. Some cardiac surgeons prefer to close the pericardium whenever possible following surgery, others specifically avoid this practice, and still others believe that neither alternative has any meaningful influence on clinical outcomes. Unfortunately, scientific evidence supporting either approach is scarce, making a consensus regarding best practice impossible. In this article, the known functions of the native intact pericardium are summarized, and the arguments for and against pericardial closure after surgery are examined. In addition, the techniques and materials that have been utilized for pericardial closure previously, as well as those that are currently being developed, are assessed. abstract_id: PUBMED:35788523 Comparative study on Sichuan yak pericardium and Australian cattle pericardium. Currently, as the key raw material of artificial biological heart valves, bovine pericardium is mainly dependent on imports and has become a "bottleneck" challenge, greatly limiting the development of domestic biological heart valves. Therefore, the localization of bovine pericardium production is extremely urgent. In this study, the pericardium of Sichuan yak was compared with that of Australian cattle in terms of fundamental properties and anti-calcification performance. The results demonstrated that the appearance and thickness of yak pericardium were more advantageous than the Australian one. Sichuan yak pericardium and Australian cattle pericardium had comparable performance in shrinkage temperature, mechanical testing and anti-calcification testing. This study preliminarily verifies the feasibility of substituting Sichuan yak pericardium for Australian cattle pericardium and provides data support for the localization of bovine pericardium production. abstract_id: PUBMED:2722891 Modelling of the heart and pericardium at end-diastole. Herein we present a refined version of Vito's two-sphere static model of the heart with pericardium and discuss its possible applications.
The improvements we make on Vito's model are: (i) Vito assumed that the elastic materials which constitute the model 'heart' and 'pericardium' are isotropic; we relax this assumption to that of transverse-isotropy. (ii) Our analysis, which does not assume the existence of stored-energy functions, links the model directly to empirical stress-strain relations of suitable biaxial uniform-extension tests; two such stress-strain relations (one for the pericardium, one for the myocardium, both of which may be described by the same equation except for difference in the values of response parameters) now define the model completely, so we avoid altogether the difficult task of determining full-fledged constitutive equations for the pericardium and myocardium. As for applications, we contend that the concentric spheres in static equilibrium can be taken as a model of the left ventricle and pericardium at end-diastole. We show that the model when equipped with suitable stress-strain relations does give good fit to the pressure-volume data which Spotnitz et al. (1966, Circulation Res., 18, 49-66) obtained from excised canine left ventricles and to the pericardium data which Pegram et al. (1975, Circulation Res., 9, 707-714) obtained from closed chest, anaesthetized dogs. Three different empirical formulae were tried in the data-fitting as the equation that describes the requisite stress-strain relations. The 'exponential law' gave the best results. Answer: The closure of the pericardium after heart operations is a topic of debate among cardiac surgeons, with some considering it a safety option while others are concerned about potential adverse effects. One study evaluated 48 patients undergoing open heart surgery, with the pericardium closed in 30 patients (group A) and left open in 18 patients (group B). The results showed no deaths or complications such as acute myocardial infarction, stroke, bleeding, or cardiac tamponade in either group. Although there were statistical differences in echocardiogram parameters and cardiothoracic ratio between the groups, these did not have clinical impact. The study concluded that pericardium closure can facilitate resternotomy during subsequent reoperative procedures, but cardiac surgeons should be aware of the transient deterioration in hemodynamics associated with it (PUBMED:21103745). Another study from 1968 to 1975 involving 821 patients found that closure of the pericardium helped prevent tamponade, as blood from extrapericardial sources could not collect around the heart. Reoperations were also technically easier and less hazardous due to fewer adhesions in the pericardial cavity (PUBMED:795444). A further study with 100 patients undergoing open-heart operations described the benefits of primary closure of the pericardium, including minimizing the risk of cardiac tamponade, allowing early localization of postoperative bleeding, and reducing the significance of postoperative adhesions and postcardiotomy syndrome (PUBMED:1152493). In summary, while there are concerns about potential adverse hemodynamic effects, the closure of the pericardium after heart operations can be a safety option that facilitates subsequent surgeries, prevents cardiac tamponade, and reduces the risk of adhesions. However, the decision to close the pericardium should be made with consideration of the individual patient's condition and the potential risks and benefits (PUBMED:21103745, PUBMED:795444, PUBMED:1152493).
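The end-diastole modelling abstract (PUBMED:2722891) reports that an 'exponential law' fitted the pressure-volume and pericardium data best, but does not give the equation. As a rough sketch of what such a law looks like in practice, the Python snippet below fits a generic Fung-type exponential stress-strain relation to synthetic data; the functional form, parameter values and data are illustrative assumptions, not the authors' published model.

    # Python sketch: fitting an assumed Fung-type exponential stress-strain law,
    # sigma(eps) = A * (exp(B * eps) - 1), to synthetic biaxial-test-style data.
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential_law(eps, A, B):
        # Stress rises exponentially with strain, the behaviour such laws capture
        # in soft tissue; A scales the stress, B controls the stiffening rate.
        return A * (np.exp(B * eps) - 1.0)

    eps = np.linspace(0.0, 0.15, 12)            # synthetic strain values
    sigma = exponential_law(eps, 2.0, 12.0)     # synthetic 'measured' stress (kPa)
    (A_fit, B_fit), _ = curve_fit(exponential_law, eps, sigma, p0=(1.0, 10.0))
    print(f"A = {A_fit:.2f} kPa, B = {B_fit:.2f}")

Under such a law the tangent stiffness A*B*exp(B*eps) grows with strain, which is the progressive stiffening that a linear stress-strain relation misses when fitting ventricular pressure-volume data.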
Instruction: Specialist capacity in public health: are we hitting the target? Abstracts: abstract_id: PUBMED:17010397 Specialist capacity in public health: are we hitting the target? Objectives: The aim of this study was to audit specialist public health capacity in one strategic health authority (SHA), and to compare capacity with the targets suggested by the Faculty of Public Health (FPH). Methods: A census of all public health specialists and specialists in training, working within the geographical boundary of one SHA, was performed in late 2004. The number of whole-time-equivalent (WTE) public health specialists was quantified using a variety of methods, including the public health network database, informal networks, existing written reports, personal knowledge and telephone interviews. The number of specialists was compared with the targets suggested by the FPH and with other regions. Results: There were 12 WTE public health specialists per million population in the SHA, and the FPH's target was 25 WTE per million population. There was a particular shortage of specialists in academic public health. Conclusions: There was a marked shortfall in specialist public health capacity in the SHA compared with the FPH's targets. Comparisons with the FPH's targets were difficult; the FPH used WTE as the metric, while the local public health network database provided information in terms of numbers of specialists, and WTE data were time-consuming to obtain. Comparisons with other regions were of limited use as the workforce data were not comparable. The FPH's targets were found to have little resonance outside the world of specialist public health, and so are unlikely to be helpful in securing local investment in specialist public health capacity. The service needs to be marketed, and new ways of inter-organizational and collaborative working and of expanding the wider public health workforce need to be examined in order to deliver effective public health. abstract_id: PUBMED:15560898 Specialist public health capacity in England: working in the new primary care organizations. Objectives: To determine the capacity and development needs, in relation to key areas of competency and skills, of the specialist public health workforce based in primary care organizations following the 2001 restructuring of the UK National Health Service. Study Design: Questionnaire survey of all consultants and specialists in public health (including directors of public health) based in primary care trusts (PCTs) and strategic health authorities (SHAs) in England. Results: Participants reported a high degree of competency. However, skill gaps were evident in some areas of public health practice, most notably "developing quality and risk management" and in relation to media communication, computing, management and leadership. In general, medically qualified individuals were weaker on community development than non-medically qualified specialists, and non-medically qualified specialists were less able to perform tasks that require epidemiological or clinical expertise than medically qualified specialists. Less than 50% of specialists felt that their links to external organizations, including public health networks, were strong. Twenty-nine percent of respondents felt professionally isolated and 22% reported inadequate team working within their PCT or SHA.
Approximately 21% of respondents expressed concerns that they did not have access to enough expertise to fulfil their tasks and that their skills were not being adequately utilized. Conclusions: Some important skill gaps are evident among the specialist public health workforce although, in general, a high degree of competency was reported. This suggests that the capacity deficit is a problem of numbers of specialists rather than an overall lack of appropriate skills. Professional isolation must be addressed by encouraging greater partnership working across teams. abstract_id: PUBMED:35719502 Deriving Public Innovation Capacity: Evidence From the Korean Public Sector. By actively coping with changes, a government providing public services can also improve the quality of those public services and help citizens improve their quality of life in the face of rapidly changing social structures, environments, and values. Accordingly, this study typologizes public innovation capacity (PIC) at the individual, middle-manager, and organizational levels through mini-round Delphi analysis and exploratory and confirmatory factor analysis of survey data. This capacity is a precondition both of the specificity of the public sector (generating public interest by providing public services) and of the universal value of HR (human resource) research (creating performance). It provides the basic capacity within the public sector to enhance the quality of public services and create positive outcomes. abstract_id: PUBMED:36853890 Evaluating the use of reflective cafés in Specialist Community Practitioner and Specialist Community Public Health Nurse programmes. Background: Specialist Community Practitioner (SCP) and Specialist Community Public Health Nurse (SCPHN) students are required to evidence their competency by the use of reflective practice as part of the NMC proficiencies. A reflective café trilogy comprising three reflective teaching sessions was developed and introduced into a university programme to support and encourage alternative methods for deeper reflection within this student group. Aim: It was important for educators to evaluate whether a reflective café met students' needs and to understand the usefulness of a 'reflective café' as a technique to support the process of reflecting on practice. Methods: Evaluation was undertaken using an online questionnaire. Findings: Students evaluated whether the reflective café was useful for their own development and identified that the number of sessions met their developmental needs. Conclusion: The potential to develop alternative methods to reflect was recognised and the team plan to develop other reflective processes to support students in the future. abstract_id: PUBMED:32451594 Digital public health-leverage for community capacity building in health promotion: Current situation, developmental issues and TEAviisari as a model implementation. In 1986, the Ottawa charter marked a paradigm shift for public health, putting the focus on strengthening community action and on creating supportive environments for health. A key to this is "capacity building" (CB), which we understand as the development and sustainable implementation of structural capacities, e.g.
coordinated data collection, collaboration processes across sectors and reliable provision of basic resources in all areas of local health promotion. Despite many efforts, three and a half decades later we still see infrastructure deficits, scattered public health landscapes and restraints on intersectoral cooperation much too often. While agreement on the theoretical insights on what is needed appears to be broad, translating these insights into practice remains a challenge. In this situation, digital public health (DPH) can contribute to overcoming barriers and making knowledge for action more visible and more accessible. With DPH, data can be integrated, structured and disseminated in novel ways. We discuss why CB at the local level could benefit from technological advances and what DPH might do for the provision of information services on public health capacity. Our focus is on the web-based, interactive representation of public health data for use in information, governance or benchmarking processes. As an example from public health practice, the Finnish tool TEAviisari (National Institute for Health and Welfare, Finland) is presented. The 2020 EU Council Presidency of Germany - with the topics of digitalisation and the common European health data space - offers opportunities to decisively advance the development of CB in health promotion in this country. abstract_id: PUBMED:26769749 Experiments in engagement: Designing public engagement with science and technology for capacity building. Public engagement with science and technology is now widely used in science policy and communication. Though touted as a means of enhancing democratic discussion of science and technology, analyses of public engagement with science and technology have shown that it is often weakly tied to scientific governance. In this article, we suggest that the notion of capacity building might be a way of reframing the democratic potential of public engagement with science and technology activities. Drawing on literatures from public policy and administration, we outline how public engagement with science and technology might build citizen capacity, before using the notion of capacity building to develop five principles for the design of public engagement with science and technology. We demonstrate the use of these principles through a discussion of the development and realization of the pilot for a large-scale public engagement with science and technology activity, the Futurescape City Tours, which was carried out in Arizona in 2012. abstract_id: PUBMED:30537267 Releasing latent compassion through an innovative compassion curriculum for Specialist Community Public Health Nurses. Aims: To evaluate the impact of a curriculum based on the Compassionate Mind Model (CMM) designed to facilitate the expression of compassion in Specialist Community Public Health Nurses. Background: The CMM identifies that fear of compassion creates a barrier to the flow of compassion. There is some evidence linking self-compassion to compassionate care, but no previous research has explored this potential with postregistration specialist community public health nursing students. Design: Prospective, longitudinal design using focus group interviews. Methods: Twenty-six students (81% of the cohort) agreed to participate in a wider evaluation (2014-2015).
For this study, two groups were drawn from those participants (13 students in total) who attended audio-taped group interviews at the course mid- and end-points to explore their perceptions of compassion and compassionate care. Transcripts were analysed thematically. Findings: Several subthemes were identified. "Cultural change in the NHS", "Workload and meeting targets" and "Lack of time" were barriers to compassionate care, as was negative "Role modelling". These were collated under a macro-theme of "A culture lacking in compassion". Secondly, the subthemes "Actualization of compassion" and "Transformation" were collated within a macro-theme: "Realization of compassion". This theme identified a realization of latent compassion from students' previous roles that in some cases transferred into their personal lives, suggesting a transformation beyond professional attitude. Conclusion: The curriculum facilitated a realization of compassion in students over the period of the course by enhancing their capacity to be self-compassionate and by actualization of compassion that had previously been suppressed. abstract_id: PUBMED:25604975 Capacity building in public health nutrition. The aim of the present paper is to review capacity building in public health nutrition (PHN), the need for which has been stressed for many years by a range of academics and national and international organisations. Although great strides have been made worldwide in the science of nutrition, there remain many problems of undernutrition and increasingly of obesity and related chronic diseases. The main emphasis in capacity building has been on the nutrition and health workforce, but the causes of these health problems are multifactorial and require collaboration across sectors in their solution. This means that PHN capacity building has to go beyond basic nutrition and beyond the immediate health workforce to policy makers in other sectors. The present paper provides examples of capacity building activities by various organisations, including universities, industry and international agencies. Examples of web-based courses are given, including an introduction to the e-Nutrition Academy. The scope is international but with a special focus on Africa. In conclusion, there remains a great need for capacity building in PHN, but the advent of the internet has revolutionised the possibilities. abstract_id: PUBMED:29942671 Developing the public health workforce: training and recognizing specialists in public health from backgrounds other than medicine: experience in the UK. Background: There is increasing recognition that improving health and tackling inequalities requires a strong public health workforce capable of delivering key public health functions across systems. The World Health Organization in Europe has identified securing the delivery of the Essential Public Health Operations and strengthening public health capacities within this as a priority. It is acknowledged that current public health capacities and arrangements of public health services vary considerably across the World Health Organization European Region, and investment in a multidisciplinary workforce with new skills is essential if public health services are to be delivered. Case Presentation: This paper describes the current situation in the UK, where there are nationally funded multidisciplinary programmes for training senior public health specialists. Uniquely, the UK provides public health registration for multidisciplinary as well as medical public health specialists.
Conclusion: The transition from a predominantly medical to a multidisciplinary public health specialist workforce over a relatively short timescale is unprecedented globally and was the product of a sustained period of grassroots activism aligned with national policy innovation. The UK experience might provide a model for other countries seeking to develop public health specialist workforce capacity in line with the Essential Public Health Operations. abstract_id: PUBMED:38230728 Estimates of Current Capacity for Diagnosing Alzheimer's Disease in Sweden and the Need to Expand Specialist Numbers. Background: The emergence of disease-modifying Alzheimer's (AD) treatments provides new hope to patients and families, but concerns have been raised about the preparedness of healthcare systems to provide timely access to such treatments because of a combination of a complex diagnostic process and a large prevalent pool. Objectives: We assess the preparedness of Sweden, a high-income country known for its dementia-friendly policies, to diagnose AD patients eligible for treatment within a six-month window, given current capacity for specialist evaluations and biomarker testing. We calculate the investment requirements for Sweden to achieve this target over a timeframe of 20 years. Design: Desk research to identify data for population, mortality, disease burden, cost of services and current capacity; expert consultation to inform assumptions about the patient journey; and use of a Markov model to predict waiting times. The model simulates the patients' journey through different evaluation stages: initial evaluation by a primary care specialist, neurocognitive testing by an AD specialist, and confirmatory biomarker testing with PET scanning or cerebrospinal fluid (CSF) testing. The model assumes specialist appointments and PET scans are capacity-constrained, and patients progress from cognitively normal to MCI and from MCI to dementia in the resulting waiting times. Measurements: Projected waiting times for diagnosis of eligibility for disease-modifying Alzheimer's treatment from 2023 to 2042 under current capacity, assuming 20% of Swedish residents aged 60 years and above would seek an evaluation for cognitive decline; and the investments required to scale up capacity to reach the target of providing a diagnosis within six months on average. Results: Initial average waiting times for AD specialist appointments would be around 21 months in 2023 and remain around 55 months through 2042, as demand would continue to outstrip supply throughout the 20-year model horizon. Waiting times for biomarker testing would be stable at less than four weeks, as patients would be held up in the queue for their first specialist consultations, and use of CSF testing is widely accepted in Sweden. An additional 25% of AD specialists would have to be added above the current growth trend to reduce waiting times to less than 6 months, at an average annual cost of approximately 805 million SEK. The increased volume of biomarker testing would cost about 106 million SEK per year. Conclusions: At current capacity, the Swedish healthcare system is unable to provide timely diagnosis of patients eligible for disease-modifying AD treatment. Although future diagnostic technologies, such as digital cognitive assessments and blood tests for AD pathology, might decrease demand for capacity-constrained services, substantial investments will be required to meet a target of less than six months of waiting time for a diagnosis.
Answer: The available abstracts suggest that specialist capacity in public health is not meeting the targets set by various health authorities and organizations. In one strategic health authority (SHA), there was a marked shortfall in specialist public health capacity compared to the targets suggested by the Faculty of Public Health (FPH), with only 12 whole-time-equivalent (WTE) public health specialists per million population, while the FPH's target was 25 WTE per million population (PUBMED:17010397). Additionally, there was a particular shortage of specialists in academic public health. Another study highlighted skill gaps in the specialist public health workforce in England, particularly in areas such as "developing quality and risk management," media communication, computing, management, and leadership. This suggests that the capacity deficit is more a problem of the number of specialists rather than a lack of appropriate skills (PUBMED:15560898). The abstracts also discuss the importance of capacity building in various contexts, such as public innovation capacity (PUBMED:35719502), reflective practice for Specialist Community Practitioner and Specialist Community Public Health Nurse programmes (PUBMED:36853890), and leveraging digital public health for community capacity building in health promotion (PUBMED:32451594). Moreover, engagement in public health is seen as a means to build citizen capacity and enhance democratic discussion of science and technology (PUBMED:26769749), and there is a call for innovative curricula to release latent compassion in Specialist Community Public Health Nurses (PUBMED:30537267). The need for capacity building in public health nutrition is also stressed, with a focus on the nutrition and health workforce and policy makers in other sectors (PUBMED:25604975). The UK's experience in training and recognizing specialists in public health from backgrounds other than medicine is highlighted as a model that could be replicated in other countries to develop public health specialist workforce capacity (PUBMED:29942671). Lastly, a study assessing the preparedness of Sweden's healthcare system to diagnose Alzheimer's Disease suggests that current capacity is insufficient to provide timely access to disease-modifying treatments, with initial average waiting times for AD specialist appointments around 21 months in 2023 (PUBMED:38230728). In summary, the evidence indicates that specialist capacity in public health is not meeting the desired targets, and there is a need for increased investment in training and capacity building to address this shortfall.
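The capacity arithmetic behind the FPH comparison is straightforward to make concrete; the sketch below turns a per-million target into a headcount gap. The target of 25 WTE per million and the audited level of 12 WTE per million are the figures quoted in PUBMED:17010397; the population size is hypothetical, used only for illustration.

    # Python sketch: WTE shortfall against a per-million specialist target.
    def wte_shortfall(population_millions, current_wte_per_million, target_wte_per_million=25.0):
        # Gap in whole-time-equivalent posts needed to reach the target.
        return (target_wte_per_million - current_wte_per_million) * population_millions

    # Hypothetical SHA of 3 million residents at the audited level of 12 WTE per million:
    print(wte_shortfall(3.0, 12.0))  # -> 39.0 WTE posts short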
Instruction: Should adjuvant radiotherapy to the supraclavicular fossa be routinely given in patients with breast conservative treatment? Abstracts: abstract_id: PUBMED:17443743 Should adjuvant radiotherapy to the supraclavicular fossa be routinely given in patients with breast conservative treatment? Background: To analyze the overall outcome, supraclavicular fossa (SCF) recurrence rate, and pattern of failure in breast cancer patients treated with conservative surgery and adjuvant radiotherapy excluding SCF treatment. Methods: A total of 143 patients were enrolled in the study. Ninety-two percent of patients were stages I and II, and 8% were stage III. The median age was 44 years, and 31% of patients were ≤40 years old. Radiotherapy was delivered to the ipsilateral breast excluding the SCF. Results: The 5-year overall survival rate of the cohort was 95%, and disease-free survival rate was 91%. The cumulative incidence of SCF recurrence was 18% in patients with involved axillary nodes (N) ≥ 4, and 0.8% in patients with N < 4. The 5-year SCF-recurrence-free survival in patients with N ≥ 4 and N < 4 was 80% and 99%, respectively (P < 0.001). N ≥ 4 was the only independent predictor for locoregional control (P = 0.045), disease-free survival (P = 0.001), and overall survival (P = 0.008) in multivariate analysis. Conclusions: Women with N ≥ 4 have a significantly higher risk of SCF recurrence and poorer survival. The SCF might be safely spared in patients with N < 4, but should be routinely included in the radiotherapy design for those with N ≥ 4. abstract_id: PUBMED:24294004 Body mass index and patient CT measurements as a predictor of benefit of intensity-modulated radiotherapy to the supraclavicular fossa. Background: Irradiation of the supraclavicular fossa is commonly used as part of adjuvant breast radiotherapy. Intensity-modulated radiotherapy (IMRT) may be used to target this region accurately, and there are subgroups of patients that may benefit more from IMRT than others. We identify the benefit of IMRT over fixed-depth dose prescription to the supraclavicular fossa in patients of different builds in a clinical setting. Methods: Fifteen patients who received radiotherapy to the left breast and supraclavicular fossa were selected. Computed tomographic planning was used to generate plans for supraclavicular fossa coverage. Dose prescription to 1.5 cm and 3.0 cm depths was compared with IMRT plans. Coverage of the planning target volume and dose to the organs at risk were compared and correlated with patient body mass index (BMI) and computed tomography measurements. Results: Within the IMRT group, increasing depth of the supraclavicular fossa produced significantly better coverage of the planning target volume with IMRT. IMRT resulted in lower mean doses to the brachial plexus (P = 0.00) when compared with 1.5 cm and 3 cm depth dose prescriptions, but higher maximum brachial plexus doses. IMRT was more beneficial in patients with lower BMI because this resulted in a decreased maximum brachial plexus dose (P-values of 0.03 and 0.001 when compared with 1.5 cm and 3.0 cm depth dose prescriptions, respectively). Higher patient BMI resulted in a lower dose contribution of IMRT to the cord (P-values 0.066 and 0.034 when compared with 1.5 cm and 3.0 cm depth dose prescriptions respectively).
Conclusion: IMRT of the supraclavicular fossa results in lower brachial plexus doses for patients with low BMI, while patients with higher BMI benefit from lower mean cord doses. IMRT provides superior coverage of the planning target volume, especially in patients with a deeper supraclavicular fossa. abstract_id: PUBMED:25231194 Patients with N1 breast cancer: who could benefit from supraclavicular fossa radiotherapy? Purpose: To identify a group at high risk of supraclavicular fossa recurrence (SCFR) among patients with N1 breast cancer treated with breast conservative therapy without supraclavicular fossa radiation therapy (SCFRT). Methods And Materials: We conducted a retrospective review of 767 patients with N1 breast cancer. All patients included in this study underwent lumpectomy or quadrantectomy with axillary lymph node dissection, followed by whole breast irradiation. All patients received radiotherapy via two tangential fields to a median dose of 50.4 Gy to the whole breast; an additional boost (10-16 Gy) to the tumor bed was administered. An analysis by the Cox method was performed to identify prognostic factors for SCFR, and a risk grouping for SCFR was built. Results: With a median follow-up of 76 months (12-142 months), 81 patients (10.5%) had SCFR. With the exception of T stage, all other prognostic factors (lymphovascular invasion, extracapsular extension, the number of involved axillary nodes, estrogen receptor status and nuclear grade) maintained statistical significance in the multivariate analysis. The risk grouping consisted of patients with 0-1, with 2, and with 3 or more prognostic factors. In the analysis of 5-year SCFR-free survival, patients with ≥3 factors showed a significantly worse rate than patients with 2 and with 0-1 factors: 44.1%, 91.1% and 97.7%, respectively (p < 0.0001). Conclusions: Extracapsular extension, lymphovascular invasion, high nuclear grade, negative hormone receptor status and the number of involved axillary nodes were important prognostic factors associated with SCFR. abstract_id: PUBMED:7527248 Prophylactic supraclavicular fossa radiotherapy in early breast cancer: is it worthwhile? A total of 291 consecutive patients with early breast cancer who did not receive any prophylactic irradiation of the ipsilateral supraclavicular fossa have been followed for a minimum of 5 years. Isolated relapse in that site occurred in 4.5% of patients and was controlled by radical radiotherapy with a post-relapse 5-year survival of 33%. Relapse with co-existing distant metastases occurred in a further 7% and no patient survived to 3 years. Supraclavicular fossa irradiation contributes to morbidity, does not improve survival and should be abandoned in favour of delayed treatment for proven recurrence. abstract_id: PUBMED:27601954 Comparison of CT-volumed supraclavicular fossa radiotherapy planning and conventional simulator-planned defined by bony landmarks for early breast cancer. Aim: To compare two techniques: CT planning of the supraclavicular fossa and field-based simulation. We highlight CT-planned SCF radiotherapy, which would be useful for a centre introducing the technique. Background: Development of radiotherapy technique includes a move from field-based simulation to CT planning. Materials And Methods: We conducted a retrospective review of the first 50 patients receiving radiotherapy according to the 3D CT planning protocol.
Reproducing the previous field-based technique by virtual simulation on the same 50 patient CT data sets allowed both techniques to be compared for beam energy, field size, planning target volume (PTV) minimum and maximum, mean doses, depth dose normalisation, V40% lung volume and brachial plexus dose. Results: 88% of CT-volumed plans received a mean dose within ICRU recommended limits compared with only 8% using the previous conventional technique. 76% required 10 MV to improve coverage and one patient (2%) an opposed posterior field. The mean normalisation depth was 4.5 cm (range 1.9-7.7 cm) compared with the pre-set 3 cm of the conventional technique. With the CT-volumed technique the whole lung volume exposed to V40%, including the tangential fields, reduced from 10.79% to 9.64% (p < 0.001), but the mean maximum brachial plexus dose increased from 48.9 Gy to 51.6 Gy (p < 0.001). Conclusions: Dose coverage of the SCF PTV was greatly improved for plans produced from 3D CT volumes compared with field-based techniques. abstract_id: PUBMED:26289040 Hypofractionated irradiation of infra-supraclavicular lymph nodes after axillary dissection in patients with breast cancer post-conservative surgery: impact on late toxicity. Background: The aim of the present work was to analyse the impact of mild hypofractionated radiotherapy (RT) of the infra-supraclavicular lymph nodes after axillary dissection on late toxicity. Methods: From 2007 to 2012, 100 females affected by breast cancer (pT1-T4, pN1-3, pMx) were treated with conservative surgery, Axillary Node Dissection (AND) and loco-regional radiotherapy (whole breast plus infra-supraclavicular fossa). Axillary lymph node metastases were confirmed in all women. The median age at diagnosis was 60 years (range 34-83). Tumors were classified according to molecular characteristics: luminal-A 59 pts (59%), luminal-B 24 pts (24%), basal-like 10 pts (10%), Her-2 like 7 pts (7%). 82 pts (82%) received hormonal therapy, 9 pts (9%) neo-adjuvant chemotherapy, and 81 pts (81%) adjuvant chemotherapy. All patients received mild hypofractionated RT: 46 Gy in 20 fractions 4 times a week to the whole breast and infra-supraclavicular fossa, plus an additional weekly dose of 1.2 Gy to the lumpectomy area. Disease control and treatment-related toxicity were analysed at follow-up visits. The extent of lymphedema was analysed by experts in Oncological Rehabilitation. Results: Within a median follow-up of 50 months (range 19-82), 6 pts (6%) died, 1 pt (1%) had local disease progression, 2 pts (2%) developed distant metastasis and 1 subject (1%) presented both. In all patients the acute toxicity was mainly represented by erythema and patchy moist desquamation. At the end of radiotherapy 27 pts (27%) presented lymphedema, but only 10 cases (10%) seemed to be correlated with radiotherapy. None of the patients showed severe damage to the brachial plexus, and the described cases of paresthesias could not definitely be attributed to RT. We did not observe symptomatic pneumonitis. Conclusions: Irradiation of the infra-supraclavicular nodes with a mild hypofractionated schedule can be a safe and effective treatment without evidence of a significant radiotherapy-related increase in lymphedema. abstract_id: PUBMED:16681980 Side effects of adjuvant chemotherapy and radiotherapy concurrently after conservative surgery for breast cancer. Objective: To observe the side effects of adjuvant chemotherapy and radiotherapy given concurrently after breast conservative surgery and to investigate the feasibility of this approach.
Methods: Forty breast cancer patients who underwent conservative surgery were randomly divided into two groups: 20 patients in the study group were given adjuvant chemotherapy and radiotherapy concurrently, and 20 patients in the control group were given adjuvant chemotherapy and radiotherapy sequentially. The side effects in the two groups were observed and the patients were followed for 6 months. Results: Leukopenia within the third chemotherapy cycle and oral mucositis within the fourth chemotherapy cycle were more severe in the study group than in the control group (P < 0.05). The other side effects within each chemotherapy cycle were similar in both groups, including skin reaction in the radiotherapy area, gastrointestinal reaction, alopecia, arthralgia, neurotoxicity and abnormal hepatic function. Over 6 months of follow-up, there were no significant differences between the two groups in hemogram, hepatic and renal function, electrocardiogram or chest X-ray findings. Conclusion: It is feasible and safe to give adjuvant chemotherapy and radiotherapy concurrently to breast cancer patients after conservative surgery. abstract_id: PUBMED:22056674 A new isocentric technique for exact geometric matching in the radiotherapy of the breast and ipsilateral supraclavicular fossa using dual asymmetric jaws. Background: In some clinical situations breast or chest wall radiotherapy for cancer is given in association with supraclavicular fossa irradiation. Often the treatment is delivered by two tangential fields to the breast or chest wall and an anterior field that irradiates the supraclavicular region. The tissue between the breast or chest wall and the supraclavicular region may be under- or overdosed because of the junction between the two tangential fields and the anterior field. Purpose: To present a new isocentric technique for exact geometric matching between the two tangential fields and the anterior field. Methods: Patients are positioned with both arms raised. Using three-dimensional trigonometry, two half-fields, with the isocenter between the breast and the supraclavicular region, are easily matched. The tangential fields have a collimator rotation to protect the lung without additional shielding. The correct gantry, collimator and couch positions are defined for the anterior field to match the tangential fields. Conclusions: A general formula for exact geometric matching in radiotherapy of the breast and supraclavicular fossa is presented. The method does not require additional shielding to eliminate divergence other than the four independent jaws. The result is simple to implement in modern delivery facilities. abstract_id: PUBMED:19540052 Outcome after conservative surgery and breast irradiation in 5,717 patients with breast cancer: implications for supraclavicular nodal irradiation. Purpose: To evaluate the outcome and predictive factors of patients who underwent breast-conserving surgery and adjuvant radiotherapy to the whole breast only, without supraclavicular nodal irradiation. Methods And Materials: A total of 5,717 patients with pT1-T4 breast cancer were treated at the University of Florence. The median age of the patient population was 55 years (range, 30-80 years). All patients were followed for a median of 6.8 years (range, 1-27 years). Adjuvant chemotherapy was recommended in 1,535 patients (26.9%). Tamoxifen was prescribed in 2,951 patients (51.6%).
The patients were split into three groups according to the number of positive axillary nodes (PAN): P1, negative axillary lymph nodes; P2, one to three PAN; P3, more than three PAN. Results: The P3 patients had a higher incidence of supraclavicular fossa recurrence (SFR) compared with P2 and P1 patients. However, the incidence of SFR in P3 patients was low (only 5.5%), whereas the incidence of distant metastases (DM) was 27.2%. Distant metastasis was the only independent prognostic factor for breast cancer survival. Additionally, in the subgroup of patients who developed local recurrence, DM was the most important death predictor. Conclusion: Our series suggests that isolated SFR is infrequent in patients who did not receive supraclavicular radiotherapy, even in those with more than three PAN, and SFR seems not to influence the outcome, which depends on DM occurrence. abstract_id: PUBMED:10942334 An audit of indications and techniques for supraclavicular fossa irradiation in early breast cancer in the United Kingdom. This article describes an audit of the indications and techniques used by clinical oncologists in the United Kingdom (UK) in the delivery of adjuvant radiotherapy to the supraclavicular fossa (SCF) in patients with early breast cancer. A postal questionnaire was sent to one consultant clinical oncologist in each UK radiotherapy centre in November 1999. These were the same individuals listed in the Maher Committee Report as providing breast cancer services. Forty-one out of 51 completed forms were returned. The results show significant variation in the indications for SCF irradiation and for the definition of the radiotherapy target volume. We discuss the possible basis for the variations found in the audit. There was broad agreement on technique, in particular on the need for matching the tangential and SCF fields and maintaining patient position between fields, factors that are likely to minimize serious morbidity, including brachial plexus injury. Answer: Adjuvant radiotherapy to the supraclavicular fossa (SCF) in patients with breast conservative treatment is not routinely recommended for all patients. The decision to include the SCF in the radiotherapy plan should be based on the individual patient's risk factors for recurrence. For instance, patients with four or more involved axillary nodes (N ≥ 4) have a significantly higher risk of SCF recurrence and poorer survival, suggesting that the SCF should be included in the radiotherapy design for these patients (PUBMED:17443743). On the other hand, the SCF might be safely spared in patients with fewer than four involved axillary nodes (N < 4), as they have a much lower risk of SCF recurrence (PUBMED:17443743). Moreover, certain high-risk groups for SCF recurrence in N1 breast cancer have been identified, such as those with extracapsular extension, lymphovascular invasion, high nuclear grade, negative hormone receptor status, and a high number of involved axillary nodes. These prognostic factors are associated with SCF recurrence, and patients with three or more of these factors have a significantly higher recurrence rate (PUBMED:25231194). The use of intensity-modulated radiotherapy (IMRT) to the SCF can result in lower brachial plexus doses for patients with low body mass index (BMI), while patients with higher BMI benefit from lower mean cord doses. IMRT provides superior coverage of the planning target volume, especially in patients with a deeper SCF (PUBMED:24294004).
It is also worth noting that prophylactic SCF radiotherapy in early breast cancer does not improve survival and contributes to morbidity, suggesting that it should be abandoned in favor of delayed treatment for proven recurrence (PUBMED:7527248). In conclusion, adjuvant radiotherapy to the SCF should not be given routinely to all patients with breast conservative treatment. It should be considered on a case-by-case basis, taking into account the number of involved axillary nodes and other high-risk factors for SCF recurrence.
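The N1 risk grouping reported in PUBMED:25231194 amounts to tallying adverse prognostic factors. The sketch below encodes that tally; the factor names and the quoted 5-year SCFR-free survival figures come from the abstract, while the function itself, the dictionary interface, and the treatment of axillary node count as a simple boolean (the abstract states no threshold) are illustrative assumptions, not the authors' published code.

    # Python sketch: SCFR risk grouping by count of adverse prognostic factors
    # (after PUBMED:25231194; illustrative only).
    FACTORS = (
        "extracapsular_extension",
        "lymphovascular_invasion",
        "high_nuclear_grade",
        "negative_hormone_receptor",
        "high_axillary_node_count",   # threshold not given in the abstract
    )

    def scfr_risk_group(patient):
        # patient: dict mapping factor name -> bool.
        # Returns (group label, reported 5-year SCFR-free survival).
        n = sum(bool(patient.get(f, False)) for f in FACTORS)
        if n <= 1:
            return "low (0-1 factors)", 0.977
        if n == 2:
            return "intermediate (2 factors)", 0.911
        return "high (>=3 factors)", 0.441

    print(scfr_risk_group({"lymphovascular_invasion": True, "high_nuclear_grade": True}))
    # -> ('intermediate (2 factors)', 0.911)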
Instruction: Does the prognostic value of dobutamine stress echocardiography differ among different age groups? Abstracts: abstract_id: PUBMED:21473974 Does the prognostic value of dobutamine stress echocardiography differ among different age groups? Background: Age is associated with reduced exercise capacity and greater prevalence of coronary artery disease. Whether the prognostic information obtained from dobutamine stress echocardiography (DSE), a stress test commonly used for patients unable to perform an exercise test, provides differential information based on age is not well known. Methods: We studied 6,655 consecutive patients referred for DSE. Patients were divided into 3 age groups: (1) <60 years (n = 1,389), (2) 60 to 74 years (n = 2,978), and (3) ≥75 years (n = 2,288). Mean follow-up was 5.5 ± 2.8 years. End points included all-cause mortality and cardiac events, including myocardial infarction and late (>3 months) coronary revascularization. Results: Peak stress wall motion score index was an independent predictor of cardiac events in all age groups (<60 years: hazard ratio [HR] 1.14, P = .02; 60-74 years: HR 1.70, P < .0001; ≥75 years: HR 1.10, P = .006). In patients ≥75 years, peak wall motion score index (HR 1.10, P < .0001) and abnormal left ventricular end-systolic volume response (HR 1.25, P = .03) were independent predictors of death. In patients aged 60 to 74 years, abnormal left ventricular end-systolic volume response (HR 1.43, P = .0003) was independently related to death, whereas in patients <60 years, the echocardiographic data assessed during stress were not a predictor. Conclusions: Dobutamine stress echocardiography provided independent information predictive of cardiac events among all age groups and death in patients ≥60 years. However, among patients <60 years, stress-induced echocardiographic abnormalities were not independently associated with mortality. Comorbidities, which have precluded exercise testing, may be most relevant in predicting mortality in patients <60 years undergoing DSE. abstract_id: PUBMED:33191138 Accuracy and Prognostic Value of Physiologist-Led Stress Echocardiography for Coronary Disease. Background: We demonstrated that physiologist-led stress echocardiography (PLSE) is feasible for coronary artery disease (CAD) assessment. We sought to extend our work by assessing its accuracy and prognostic value. Methods: Retrospective study of 898 subjects undergoing PLSE (n=393) or cardiologist-led stress echocardiography (CLSE) (n=505) for CAD assessment using exercise or dobutamine. For accuracy assessment, the primary outcome was the ability of stress echocardiography to identify significant CAD on invasive coronary angiography (ICA). Incidence of 24-month non-fatal MI, total and cardiac mortality, revascularisation and combined major adverse cardiac events (MACE) were assessed. Results: Demographics, comorbidities, CAD predictors, CAD pre-test probability and cardiac medications were matched between the PLSE and CLSE groups. PLSE had high sensitivity, specificity, positive and negative predictive value and accuracy (85%, 74%, 69%, 88%, 78% respectively). PLSE accuracy measures were similar and non-inferior to CLSE. There was a similar incidence of individual and combined outcomes in PLSE and CLSE subjects. Negative stress echocardiography conferred a comparably low incidence of non-fatal MI (PLSE 1.4% vs. CLSE 0.9%, p=0.464), cardiac mortality (0.6% vs. 0.0%, p=0.277) and MACE (6.8% vs. 3.1%, p=0.404).
Conclusion: This is the first study of the accuracy of PLSE CAD assessment against the gold standard of ICA, and of its prognostic value. PLSE demonstrates high and non-inferior accuracy compared with CLSE for CAD assessment. Negative PLSE and CLSE confer a similarly very low incidence of cardiac outcomes, confirming for the first time the important prognostic value of PLSE. abstract_id: PUBMED:31504358 Is 3D Dobutamine stress echocardiography ready for prime time? Diagnostic and prognostic implications. Aims: To compare the diagnostic accuracy and prognostic value of echo contrast-enhanced 2D and 3D Dobutamine stress echocardiography (DSE). Methods And Results: We included 718 patients indicated for DSE. All had standard 2D, and contrast-enhanced left ventricular opacification (LVO) for 2D and 3D acquisitions, at rest and peak stress. A chi-square test was done to assess the relationship between DSE result and early revascularization. Kaplan-Meier plots with logistic regression analysis predicted late major adverse cardiovascular events (MACE) at a maximum follow-up of 84 months. The mean age was 63 ± 13 years (61% males) and follow-up was obtained in 692/718 (96.4%) patients. Only 32% had excellent baseline image quality. The DSE was abnormal in 19.4% of patients on 2D, in 17.1% on 2D-LVO and in 19.1% on 3D-LVO. Early revascularization was performed in 32.8%, 45.8% and 48.5% of stress-positive 2D, 2D-LVO and 3D-LVO studies, respectively. After excluding the 66 patients receiving early revascularization, 68/626 (10.9%) had MACE at a maximum follow-up of 84 months. Kaplan-Meier plots showed that stress-positive 2D-LVO and 3D-LVO studies not receiving early revascularization, assessed separately and combined, had significantly worse MACE outcomes compared with stress-negative patients (OR 3.69; 95% CI: 1.54-8.87; P = 0.011, OR 4.54; 95% CI: 1.72-12.93; P = 0.008, and OR 7.07, 95% CI: 1.62-25.16; P = 0.001, respectively). Conclusion: Combined use of 2D- and 3D-LVO DSE is ready for prime time considering its feasibility, improved diagnostic accuracy and prognostic value. abstract_id: PUBMED:20589431 Prognostic value of dobutamine stress echocardiography in octogenarians. The prognostic value of dobutamine stress echocardiography (DSE) for risk stratification of patients aged ≥ 80 years is not clearly defined. A follow-up of 3 ± 2 years for major cardiac events and all-cause mortality was obtained in 227 patients, aged ≥ 80 years, who underwent DSE for known or suspected coronary artery disease. The stress function index (SFI), calculated as the ratio of peak wall motion score index to left ventricular ejection fraction, was analyzed both as a continuous variable and categorized using the mean value of 5 as the cut-off. Only 95 patients (42%) of this group underwent a cycloergometer exercise stress test (EST). During DSE 118 patients developed inducible ischemia; the SFI was 4.9 ± 2.6 and 60 subjects showed a value higher than 5. The EST gave a positive result in 12 patients and a negative result in 8 patients; it was inconclusive because of an inadequate increase in heart rate in 75 (79%) subjects.
Advanced age (HR: 1.184/year, 95% CI: 1.073-1.306, p = 0.001) and SFI ≥ 5 (HR: 2.682, 95% CI: 1.429-5.035, p = 0.002) were independent predictors of all-cause mortality; advanced age (HR: 1.252/year, 95% CI: 1.064-1.473, p = 0.007), SFI ≥ 5 (HR: 3.181, 95% CI: 1.174-8.621, p = 0.02) and the presence of left bundle branch block (HR: 3.060, 95% CI: 1.057-8.862, p = 0.039) independently predicted an increased occurrence of major cardiac events. No parameter derived from the EST showed an independent prognostic role. DSE showed significant prognostic value in octogenarians, both for all-cause mortality and for major cardiac events. abstract_id: PUBMED:32125367 Long-term prognostic value of stress myocardial perfusion echocardiography in patients with coronary artery disease: a meta-analysis. Aims: To evaluate the prognostic value of myocardial perfusion (MP) imaging during contrast stress echocardiography (cSE) in patients with known or suspected coronary artery disease (CAD). Methods And Results: A search of the PubMed and Embase databases and the Cochrane library was conducted through May 2019. The Cochran Q statistic and the I² statistic were used to assess heterogeneity, and the results were analysed with RevMan V5.3 and Stata V15.1 software. Twelve studies (seven dipyridamole and five exercise/dobutamine) without evidence of patient overlap (same institution publishing results over a similar time period), enrolling 5953 subjects (47% female, 8-80 months of follow-up), were included in the analysis. In all studies, total adverse cardiovascular events were defined as either cardiac death, non-fatal myocardial infarction (NFMI), or need for urgent revascularization. Hazard ratios (HRs) revealed that an MP abnormality [pooled HR 4.75; 95% confidence interval (CI) 2.47-9.14] was a stronger independent predictor of total events than abnormal wall motion (WM, pooled HR 2.39; 95% CI 1.58-3.61) or resting left ventricular ejection fraction (LVEF, pooled HR 1.92; 95% CI 1.44-2.55), with significant subgroup differences (P = 0.002 compared with abnormal WM and 0.01 compared with abnormal LVEF). Abnormal MP was associated with higher risks for death [risk ratio (RR) 5.24; 95% CI 2.91-9.43], NFMI (RR 3.09; 95% CI 1.84-5.21), and need for coronary revascularization (RR 16.44; 95% CI 6.14-43.99). Conclusion: MP analysis during stress echocardiography is an effective prognostic tool in patients with known or suspected CAD and provides incremental value over LVEF and WM in predicting clinical outcomes. abstract_id: PUBMED:31384401 The value of negative stress echocardiography in predicting cardiovascular events among adults with no known coronary disease. Introduction: Stress echocardiography is a safe and cost-effective method of evaluating patients with suspected coronary artery disease (CAD). However, the risk factors for an adverse cardiovascular event after a normal exercise (ESE) or dobutamine (DSE) stress echocardiogram are not well established. Methods: A cohort of 705 patients without a previous history of CAD and with a negative ESE/DSE was studied. All studies were performed in a high-volume echocardiography laboratory and interpreted by two experienced echocardiography-trained cardiologists. Patients with inconclusive studies and those with evidence of myocardial ischemia were excluded. Demographic, echocardiographic and hemodynamic findings were recorded. Patients were followed for at least 2 years. Independent predictors of major adverse cardiovascular events (MACE) were determined by regression analysis.
Results: During a period of 55.7 ± 17.5 months, MACE occurred in 35 patients (5.0%). Negative predictive value (NPV) of DSE was 89.2%, which was significantly less than 96.5% for ESE in predicting the occurrence of MACE (P = 0.001). MACE occurred more frequently among older (≥65 years) men with preexisting diabetes, hypertension, and/or hyperlipidemia. During ESE, a higher maximum blood pressure*heart rate product for the achieved level of metabolic equivalent (METS) of tasks was also an independent predictor of MACE. Conclusion: The inability of patients to undergo traditional ESE, leading to the choice of the DSE alternative, reduces the NPV of stress echocardiography among patients without previous history of CAD. A modest rise of heart rate and blood pressure in response to an increased level of activity carries favorable prognostic value and improves the NPV of stress echocardiography. abstract_id: PUBMED:16129798 Prognostic value of dobutamine stress myocardial contrast perfusion echocardiography. Background: Myocardial perfusion (MP) imaging with real-time contrast echocardiography (RTCE) improves the sensitivity of dobutamine stress echocardiography for detecting coronary artery disease. Its prognostic value is unknown. We sought to determine the value of MP and wall motion (WM) analysis during dobutamine stress echocardiography in predicting the outcome of patients with known or suspected coronary artery disease. Methods And Results: We retrospectively studied 788 patients with RTCE during dobutamine stress echocardiography using intravenous commercially available contrast agents. The incremental prognostic value of MP imaging over clinical risk factors and other echocardiographic data was examined through the use of a log-likelihood test (Cox model). During a median follow-up of 20 months, 75 events (9.6%) occurred (58 deaths, 17 nonfatal myocardial infarctions). Abnormal MP had significant incremental value over clinical factors, resting ejection fraction, and WM responses in predicting events (P<0.001). By multivariate analysis, the independent predictors of death and nonfatal myocardial infarction were resting left ventricular ejection fraction <50% (relative risk [RR], 1.9; 95% CI, 1.2 to 3.2; P=0.01), hypercholesterolemia (RR, 0.5; 95% CI, 0.3 to 0.9; P=0.01), and abnormal MP (RR, 5.2; 95% CI, 3.0 to 9.0; P<0.0001). The 3-year event-free survival was 95% for patients with normal WM and MP, 82% for normal WM and abnormal MP, and 68% for abnormal WM and MP. Conclusions: MP imaging during dobutamine stress RTCE provides incremental prognostic information in patients with known or suspected coronary artery disease. Patients with normal MP have a better outcome than patients with normal WM. abstract_id: PUBMED:23590036 Prognostic value of normal dobutamine stress echocardiography in renal transplant candidates. Background: Renal transplant candidates are at high risk for cardiovascular events. No definite screening tool has been recommended for the pre-operative evaluation. Objective: The authors studied the prognostic value of normal dobutamine stress echocardiography in this population. Material And Method: Dobutamine stress echocardiography was performed for the pre-operative assessment in 107 renal transplant candidates (age 53.2 ± 6.2 years, 66.4% male). The mean follow-up time was 2.8 ± 1.7 years. The primary endpoint was total mortality. Results: During follow-up, 16 patients (15.0%) died and 26 (24.3%) underwent kidney transplantation.
The overall survival probabilities at 1, 3 and 5 years were 87, 83 and 79%, respectively. Among those who underwent renal transplantation, the survival probabilities at 1, 3 and 5 years were 100, 100 and 89%, respectively. Conclusion: In renal transplant candidates, normal dobutamine stress echocardiography portends a good long-term prognosis. abstract_id: PUBMED:17650690 Exercise stress test and dobutamine stress echocardiography for the prognostic stratification after uncomplicated acute myocardial infarction. Background: Exercise stress test and dobutamine stress echocardiography are usually performed early after an uncomplicated acute myocardial infarction in the prognostic stratification of patients to define the optimal diagnostic and therapeutic procedure. The aim of this study was to evaluate if the association of an imaging test could increase exercise test capability to identify patients with residual ischemia and patients at high risk of events in the follow-up. Methods: Four hundred and forty-two consecutive patients underwent exercise stress testing and dobutamine stress echocardiography before discharge and subsequently coronary angiography within 30 days. In case of a submaximal negative result at the exercise test, this was repeated 20 days after discharge. The follow-up lasted 26.8 ± 9 months. The endpoints were death, reinfarction, and unstable angina requiring hospitalization or revascularization intervention. Results: Both tests and their association showed a higher sensitivity in males; in females dobutamine stress echocardiography had a higher specificity. In females, the addition of dobutamine stress echocardiography increased the negative and positive prognostic values of the exercise stress test by 31% and 5.6%, respectively. In males, the negative prognostic value increased by 15.5%, whereas the positive prognostic value decreased by 12%. A low exercise capability (<6 METs) showed an event predictive value independent of test results and any other variables. The event-free survival curves correlated with exercise capability differed shortly after the first months both in males and females. Conclusions: These results suggest different stratification procedures with regard to gender: in males, the exercise stress test might be sufficient at discharge, to be repeated 20 days later, if submaximal negative. In females, it seems to be useful to associate an imaging test at discharge. In any case, the exercise stress test remains the main step in the stratification procedure, also for its capability to identify patients who are at high risk of events in the follow-up. abstract_id: PUBMED:17531567 Prognostic implications of dipyridamole or dobutamine stress echocardiography for evaluation of patients ≥65 years of age with known or suspected coronary heart disease. This study investigated the value of pharmacologic stress echocardiography for risk stratification of patients ≥65 years of age. The study cohort consisted of 2,160 patients ≥65 years of age (1,257 men, mean ± SD 71 ± 5 years of age) undergoing dipyridamole (n = 1,521) or dobutamine (n = 639) stress echocardiography for evaluation of known (n = 913) or suspected (n = 1,247) coronary artery disease. Of 2,160 patients, 753 (35%) had a normal test result, whereas 772 (36%) showed a myocardial ischemic pattern and 635 (29%) a scar pattern. During a median follow-up of 26 months, 241 deaths and 87 nonfatal myocardial infarctions occurred.
Patients (n = 568) undergoing revascularization were censored. Of 16 analyzed variables, age (hazard ratio [HR] 1.07 per unit increment), wall motion score index at rest (HR 2.63 per unit increment), ischemia at stress echocardiography (HR 1.81), and diabetes (HR 1.57) were multivariable predictors of death, whereas age (HR 1.06 per unit increment), ischemia at stress echocardiography (HR 2.60), wall motion score index at rest (HR 1.98 per unit increment), scar pattern (HR 1.99), and diabetes (HR 1.48) were multivariable predictors of death or myocardial infarction. Using an interactive stepwise procedure, stress echocardiography showed incremental prognostic value over clinical and echocardiographic data at rest, which decreased with increasing age. In addition, the annual hard event rate associated with a normal test result progressively increased with age. In conclusion, pharmacologic stress echocardiography provides useful prognostic information in patients ≥65 years of age. However, its prognostic value decreases with increasing age. Answer: Yes, the prognostic value of dobutamine stress echocardiography (DSE) does differ among different age groups. A study involving 6,655 consecutive patients referred for DSE and divided into three age groups (<60 years, 60 to 74 years, and ≥75 years) found that peak stress wall motion score index was an independent predictor of cardiac events in all age groups. However, among patients <60 years, stress-induced echocardiographic abnormalities were not independently associated with mortality, suggesting that comorbidities may be more relevant in predicting mortality in this age group undergoing DSE. In contrast, for patients ≥60 years, DSE provided independent information predictive of death (PUBMED:21473974). Another study focusing on octogenarians (patients aged ≥80 years) undergoing DSE for known or suspected coronary artery disease found that the stress function index (SFI) was an independent predictor of all-cause mortality and major cardiac events. This suggests that DSE has significant prognostic value in this older age group (PUBMED:20589431). Furthermore, a study investigating the prognostic implications of pharmacologic stress echocardiography in patients ≥65 years of age with known or suspected coronary heart disease showed that stress echocardiography provided useful prognostic information. However, its prognostic value decreased with increasing age, and the annual hard event rate associated with a normal test result progressively increased with age (PUBMED:17531567). In summary, while DSE provides prognostic information across different age groups, its predictive value for mortality may not be as strong in younger patients (<60 years) compared to older patients (≥60 years and especially ≥80 years), where it serves as a more robust predictor of both cardiac events and mortality.
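As an illustrative aside on the screening statistics quoted in this record (for example, the negative predictive values of 89.2% for DSE and 96.5% for ESE), the minimal Python sketch below shows how an NPV is derived from a 2x2 confusion table. The function name and all counts are invented for demonstration; they are not taken from any of the cited studies.

```python
# Illustrative sketch only: deriving a negative predictive value (NPV),
# the metric quoted for DSE vs ESE above. All counts are hypothetical.

def npv(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN): the share of negative tests that stay event-free."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical cohort: 270 negative stress tests, 10 of which later had MACE.
print(f"NPV = {npv(true_negatives=260, false_negatives=10):.1%}")  # -> 96.3%
```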
Instruction: Mast cells positive for cluster of differentiation 117 protein: are they players or conductor in the orchestra of cholesteatoma? Abstracts: abstract_id: PUBMED:22697111 Mast cells positive for cluster of differentiation 117 protein: are they players or conductor in the orchestra of cholesteatoma? Background: The pathogenesis of cholesteatoma remains unclear, despite several theories. Alterations in the density of mast cells positive for cluster of differentiation 117 protein (also known as CD117) can be critical to cholesteatoma formation, due to the effect on keratinocyte growth factor production. This study aimed to investigate the potential role of these mast cells in cholesteatoma pathogenesis. Methods: The number and density of mast cells positive for cluster of differentiation 117 protein were immunohistochemically analysed in 52 patients: 22 with chronic otitis media alone (group one), 25 with chronic otitis media with cholesteatoma (group two) and five controls. Results: The number of these mast cells was much higher in group two (in cholesteatoma matrix tissue) than in group one (in chronic otitis media granulation tissue) or the controls (in normal post-auricular skin). The density of these mast cells was significantly greater in group two than in group one or the controls (p < 0.05). The number and density of these mast cells were much greater in group one than in controls (p < 0.01). Conclusion: Mast cells positive for cluster of differentiation 117 protein could play a role in cholesteatoma formation. Further investigation of the role of these mast cells in cholesteatoma may suggest new ways of addressing this disorder, and may enable the development of targeted treatments. abstract_id: PUBMED:9596172 Increased numbers of mast cells in human middle ear cholesteatomas: implications for treatment. Hypothesis: Because many of the biologic phenomena in which mast cells are involved also are observed in human cholesteatoma pathology, the authors hypothesized that mast cells may play a role in this disease. The first test of this hypothesis is to determine whether there is an increased number of mast cells associated with cholesteatomas. Background: The molecular and cellular defects that result in the pathologic features observed in acquired and congenital cholesteatomas are unknown. One common feature of cholesteatoma pathogenesis is the presence of bacteria and numerous inflammatory cytokines expressed by host inflammatory cells. The interactions between inflammatory cells and cholesteatoma epithelium could result in the induction of other aberrant biologic features of cholesteatomas. Thus, it is critical to the understanding of the pathogenesis of cholesteatomas to define the specific role of each cell type involved in this disease. Connective tissue mast cells have a complex retinue of functions mediated via the secretion of a variety of cytokines and proteinases, and many of the biologic phenomena in which mast cells are involved also are observed in cholesteatoma pathology. Methods: The authors evaluated by immunohistochemistry 36 cholesteatomas of all types (e.g., primary and secondary acquired, recurrent, and congenital) and 23 specimens of normal tissues (e.g., tympanic membrane, canal wall skin, and postauricular skin) for the expression of tryptase, a mast cell-specific protease. Results: Cholesteatomas showed an approximately threefold to sevenfold increase in the concentration of mast cells when compared with that of normal tissues.
In addition, 19-34% of the mast cells were found within the suprabasal layers of the squamous epithelium of cholesteatoma subgroups, a phenomenon observed only in grossly inflamed tympanic membrane specimens, but not in other control tissues including minimally inflamed tympanic membranes. Conclusions: The authors conclude from these data that mast cells may represent a previously unrecognized host inflammatory cell, which plays an important role in the development of one or more traits of cholesteatoma pathology. abstract_id: PUBMED:7543918 Mast cells in aural polyps: a preliminary report. Mast cells are a rich source of potent biologically active mediators and are found in connective tissue, associated with blood vessels in many varied inflammatory conditions. Mast cells have been described in nasal polyps and turbinates and in adenoidal tissue in the upper aerodigestive tract. As the middle ear lining is contiguous with the nose and the nasopharynx, the presence of mast cells in aural polyps is interesting. This preliminary study investigated the presence of mast cells in inflammatory aural polyps using light microscopy. All patients presenting to the department in one year were included. Patients with previous ear disease or surgery and in whom cholesteatoma was suspected were excluded. Except for one patient, mast cells were seen in all aural polyps. The implications of these findings are discussed. Further work is needed using electron microscopy. abstract_id: PUBMED:2411102 Bone resorption in chronic otitis media. The role of mast cells. Twenty-two surgical specimens of eroded middle ear ossicles were removed from patients with chronic otitis media, with and without cholesteatoma. By using specific mast cell stains, increased numbers of mast cells were found in connective or granulation tissue adjacent to the eroded surface of the bone. Mast cells possess the biological machinery necessary for enhancing bone resorption, and the population density of mast cells is increased in a variety of disorders that are associated with bone resorption. It is hypothesized that mast cells contribute to bone resorption in chronic otitis media, and the possible mechanisms by which mast cells exert their action are discussed. abstract_id: PUBMED:6085806 Mast cells in human middle ear mucosa in health and in disease. The distribution of mast cells was studied in normal human middle ear mucoperiosteal lining and in middle ear biopsies of patients with acute and chronic otitis media. The mast cells were identified on the basis of the metachromatic staining of their cytoplasmic granules with Giemsa and toluidine blue. Only a few mast cells located in proximity to blood vessels in the lamina propria underneath the epithelial layer were observed in normal middle ear mucoperiosteum. The number of mast cells in acute inflammatory reactions and in the normal middle ear lining was similar. By contrast, the mast cell count was significantly increased in chronic inflammatory reactions. The population density of the mast cells was the highest in the subepithelial layer of cholesteatoma, in regions where the lamina propria showed fibrosis and infiltration with chronic inflammatory cells, and around mucous glands. The presence of increased numbers of mast cells in chronic otitis media is consistent with our previous finding of high levels of histamine in middle ear effusions.
It is postulated that mast cells play an important role in the pathogenesis of chronic otitis media through the release of their active biochemical mediators. abstract_id: PUBMED:24931056 Studies on distribution of α1-antitrypsin, lysozyme, lactoferrin, and mast cell enzymes in diseased middle ear mucosa. Conclusion: Antimicrobial agents (AMAs) and mast cells in chronic otitis media (COM) may play a protective role in the pathogenesis of COM. Among them, our results indicated that lysozyme may be the most important parameter of the mucosa infectivity in COM. Objectives: AMAs include alpha 1-antitrypsin (α1-AT), lysozyme, and lactoferrin and exhibit innate immune activity that is known to play a critical role in mucosal defenses. In addition, proteases expressed in mast cells contribute to mucosal defenses. The aim of this study was to identify AMAs responsible for COM as well as their association with middle ear mucosal diseases. Methods: Middle ear mucosa (MEM) was intraoperatively collected by biopsy from patients diagnosed as COM without or with cholesteatoma, and immediately processed for immunohistochemical study to evaluate expression of AMAs and mast cell proteases (chymase and tryptase). Results: The AMAs and mast cell enzymes were observed prominently in the chronic middle ear diseases, whereas immunoreactivity of α1-AT and lysozyme was not significantly increased in the COM with cholesteatoma compared with that in COM without cholesteatoma. Lactoferrin was not detected in the diseased MEMs. abstract_id: PUBMED:2603700 The Audiant Bone Conductor. Clinical experience in 2 patients. In this paper we present our clinical experience of 2 cases in which we implanted the Audiant Bone Conductor (by J. Hough). Our conclusion is that we should be very careful in making the decision to do such an implant; indications and limitations should be known by every surgeon who intends to do an implant. abstract_id: PUBMED:36766559 A Role for Mast Cell-Mediated Antibodies in the Formation of Cholesteatoma and Cholesteatoma-Induced Bone Erosion. The study aimed to evaluate the effects and relationships between mast cells in the matrix, the mast cell enzymes tryptase and chymase, epithelial proliferation, microvascular density, and bone destruction in cholesteatoma. Thirty-five biopsies diagnosed with cholesteatoma and seven healthy skin tissues taken from the retro-auricular region for control were evaluated. Immunohistochemical studies were performed with CD117, CD34, Ki-67, chymase, and tryptase antibodies, in a single session for all cases and the control group. The relationship between erosion size and antibody load was determined. The mean Ki-67 of the cholesteatoma epithelium was higher than that of the control group (p < 0.001). CD117-positive mast cells, chymase-positive mast cells, tryptase-positive mast cells, and microvessel density were significantly higher in the cholesteatoma matrix compared to the control group (p < 0.002, p < 0.001, p < 0.005). In the group with bone erosion scores of two and above, immunohistochemical markers tended to be higher. A positive correlation was found between CD117 and chymase, tryptase, and microvessel density; between tryptase, chymase, and microvessel density; and between chymase and microvessel density. CD117-positive mast cells and chymase-positive mast cells stimulate angiogenesis, increase the epithelium's proliferative capacity in the cholesteatoma matrix, and form cholesteatoma.
The increased proliferation of cholesteatoma epithelium and increased vascular density in the matrix exacerbate bone erosion. abstract_id: PUBMED:10377839 Mastocytes in chronic inflammation of middle ear mucosa. Introduction: Pathohistological studies have shown that in secretory otitis media an increased number of mast cells have been found in the mucosa of the middle ear and an increased level of histamine in the cavity of the middle ear. The aim of this study was to analyse the distribution and functional state of mast cells in middle ear mucosa biopsies in patients with chronic otitis media. Material And Methods: The pathohistological analysis was performed on 118 biopsy specimens of mucosa of the middle ear. The samples were taken intraoperatively from 39 patients during the surgical treatment of chronic otitis media with and without cholesteatoma and otomastoiditis. The specimens were taken from the mastoid, attic space, promontory and protympanum. All biopsy specimens were prepared for hematoxylin-eosin staining, and histamine granules were identified with the PAS and Lennert Giemsa methods. The Alcian blue-Safranin method was used to identify heparin granules in mast cells. Using the semiquantitative method, the distribution of mast cells was marked as rare (+), numerous (+2) and high (+3). The number of degranulated cells was also identified (Tables 1, 2). Results And Discussion: The analyses of biopsy specimens of mucosa of the middle ear in chronic otitis media showed that mast cells were present in all areas of the mucosa in considerably higher numbers (Table 1). Degranulated cells were frequently found in biopsy specimens of otomastoiditis and contained predominantly histamine granules; specimens taken from chronic otitis media, with and without cholesteatoma, showed mast cells with heparin granules and a low rate of degranulation. Pathophysiological examinations of the role of mast cells in the mechanism of chronic inflammation have shown that mast cells display a biphasic effect through their mediators. They can participate in the worsening of inflammation, whereby the released mediators cause processes of tissue destruction, and they can stimulate the process of tissue repair. Heparin accelerates the healing of tympanic membrane perforation, stimulates the proliferation of endothelial cells and supports angiogenesis in the lamina propria of the tympanic membrane. Further studies are needed to clarify the role of histamine and heparin in the pathophysiology of chronic inflammation in chronic otitis media.
Answer: Mast cells positive for cluster of differentiation 117 protein, also known as CD117, appear to play a significant role in the pathogenesis of cholesteatoma. The presence and density of CD117-positive mast cells are much higher in cholesteatoma matrix tissue compared to chronic otitis media granulation tissue or normal post-auricular skin, suggesting a potential role in cholesteatoma formation (PUBMED:22697111). This is supported by findings that cholesteatomas show a threefold to sevenfold increase in the concentration of mast cells when compared with normal tissues, indicating that mast cells may be an important host inflammatory cell contributing to cholesteatoma pathology (PUBMED:9596172). The presence of mast cells in aural polyps and their association with various inflammatory conditions further underscore their potential involvement in middle ear diseases (PUBMED:7543918). Additionally, increased numbers of mast cells have been found adjacent to eroded bone surfaces in chronic otitis media, suggesting that they may contribute to bone resorption in this condition, which is a common complication of cholesteatoma (PUBMED:2411102). The increased number of mast cells in chronic inflammatory reactions of the middle ear, particularly in the subepithelial layer of cholesteatoma, aligns with the hypothesis that mast cells play a role in the pathogenesis of chronic otitis media through the release of active biochemical mediators (PUBMED:6085806). Furthermore, the relationship between mast cells, their enzymes (tryptase and chymase), epithelial proliferation, microvascular density, and bone destruction in cholesteatoma has been established, with a positive correlation found between CD117-positive mast cells and these factors (PUBMED:36766559). In conclusion, CD117-positive mast cells are not merely players but could be considered conductors in the orchestra of cholesteatoma, influencing various aspects of its pathogenesis, including epithelial proliferation, angiogenesis, and bone erosion. Their significant presence and activity in cholesteatoma suggest that they could be a target for developing new treatments for this disorder (PUBMED:22697111; PUBMED:9596172; PUBMED:36766559).
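The density comparisons cited in this record (e.g., "significantly greater ... (p < 0.05)") rest on standard two-group tests of per-field cell counts. The sketch below illustrates one plausible such test, a Mann-Whitney U comparison; the counts are fabricated for illustration, and the cited papers do not specify that this exact procedure was used.

```python
# Illustrative sketch only: a nonparametric two-group comparison of the kind
# behind "density significantly greater ... (p < 0.05)". All data are invented.
from scipy.stats import mannwhitneyu

cholesteatoma_counts = [12, 15, 9, 14, 18, 11, 16, 13]  # CD117+ cells per field
granulation_counts = [5, 7, 4, 8, 6, 9, 5, 7]

stat, p = mannwhitneyu(cholesteatoma_counts, granulation_counts,
                       alternative="two-sided")
print(f"Mann-Whitney U = {stat}, p = {p:.4f}")
```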
Instruction: Does flying present a threat of polybrominated diphenyl ether exposure? Abstracts: abstract_id: PUBMED:21124237 Does flying present a threat of polybrominated diphenyl ether exposure? Objective: To investigate possible exposure to polybrominated diphenyl ethers (PBDEs) in US professional airline workers. Methods: We recruited 30 healthy US professional airline workers who lived in the Dallas, Texas, area to test their blood PBDE levels. We examined the relationship between hours worked in an airplane and total PBDE blood levels. Results: Total PBDE blood levels from the 30 volunteers were unremarkable despite minor elevations of certain congeners in a few volunteers. No statistically significant correlations were noted between hours in airplanes in the past 1 or 5 years and levels of individual brominated diphenyl ether congeners or total PBDEs. Conclusions: We hypothesized that elevated PBDE levels in commercial aviation workers would be associated with time spent in airliners. Our findings do not support such an association. abstract_id: PUBMED:17637922 Measurement of polybrominated diphenyl ethers and metabolites in mouse plasma after exposure to a commercial pentabromodiphenyl ether mixture. Background: Previous studies have shown that polybrominated diphenyl ethers (PBDEs) behave as weak estrogens in animal and cell culture bioassays. In vivo metabolites of PBDEs are suspected to cause these effects. Objectives: To identify candidate metabolites, mouse plasma samples were collected after continuous oral and subcutaneous exposure to DE-71, a widely used commercial pentabromodiphenyl ether product, for 34 days. Methods: Samples were extracted, separated into neutral and phenolic fractions, and analyzed by gas chromatography-mass spectrometry. Results: In the plasma samples of orally treated animals, 2,2',4,4',5,5'-hexabromodiphenyl ether (BDE-153) represented 52% of total measurable PBDEs, whereas it represented only 4.3% in the DE-71 mixture. This suggested that BDE-153 was more persistent than other congeners in mice. Several metabolites were detected and quantitated: 2,4-dibromophenol, 2,4,5-tribromophenol, and six hydroxylated PBDEs. The presence of the two phenols suggested cleavage of the ether bond of 2,2',4,4'-tetrabromodiphenyl ether (BDE-47) and 2,2',4,4',5-pentabromodiphenyl ether (BDE-99), respectively. The hydroxylated (HO)-PBDEs might come from hydroxylation or debromination/hydroxylation. Among the quantitated hydroxylated metabolites, the most abundant was 4-HO-2,2',3,4'-tetra-BDE, which suggested that there was a bromine shift during the hydroxylation process. para-HO-PBDEs have been proposed to behave as endocrine disruptors. Conclusions: There seem to be three metabolic pathways: cleavage of the diphenyl ether bond, hydroxylation, and debromination/hydroxylation. The cleavage of the diphenyl ether bond formed bromophenols, and the other two pathways formed hydroxylated PBDEs, of which para-HO-PBDEs are most likely formed from BDE-47. These metabolites may be the most thyroxine-like and/or estrogen-like congeners among the HO-PBDEs. abstract_id: PUBMED:9067487 Lack of effects of some individual polybrominated diphenyl ether (PBDE) and polychlorinated biphenyl (PCB) congeners on human lymphocyte functions in vitro. The structural similarities between polybrominated diphenyl ethers and immunotoxic halogenated aromatic compounds suggest that the polybrominated diphenyl ethers might affect the immune system.
The present study was undertaken to investigate the immunological effects of some purified PBDE congeners on human lymphocyte function in vitro. Polychlorinated biphenyl congeners were also included in the study. Mitogen-induced DNA synthesis and immunoglobulin synthesis by lymphocytes from blood donors were examined following polybrominated diphenyl ether or polychlorinated biphenyl exposure in vitro in order to determine the immunotoxic potential of these substances. No effects on mitogen-induced proliferation or immunoglobulin synthesis were observed after exposure of cells to concentrations up to 10^-5 M. The negative findings in this study indicate that certain functions of human peripheral lymphocytes, i.e., proliferation and immunoglobulin synthesis, are insensitive to the direct action of polybrominated diphenyl ethers and polychlorinated biphenyls. Our results are in accordance with other recent studies in which no effects on immunological parameters were demonstrated by exposure of lymphocytes to polyhalogenated aromatic hydrocarbons in vitro. abstract_id: PUBMED:28734254 Childhood polybrominated diphenyl ether (PBDE) exposure and neurobehavior in children at 8 years. Background: Prenatal polybrominated diphenyl ether (PBDE) exposure has been associated with decrements in IQ and increased attention deficit/hyperactivity disorder related behaviors in children; however, data are limited for the role of postnatal exposures. Objectives: We investigated the association between a series of childhood PBDE concentrations and Full-Scale Intelligence Quotient (FSIQ) and externalizing problems at 8 years. Methods: We used data from 208 children in the Health Outcomes and Measures of the Environment (HOME) Study, a prospective pregnancy and birth cohort. Child serum PBDEs were measured at 1, 2, 3, 5, and 8 years; missing serum PBDE concentrations were estimated via multiple imputation. The Wechsler Intelligence Scales for Children-IV and the Behavior Assessment System for Children-2 were used to assess intelligence and externalizing behavior, respectively, in children at 8 years. We used multiple informant models to estimate associations between repeated lipid-adjusted PBDEs and child neurobehavior and to test for windows of susceptibility. Results: Postnatal exposure to PBDE congeners (-28, -47, -99, -100, and -153) at multiple ages was inversely associated with FSIQ at 8 years. For instance, 10-fold increases in BDE-153 concentrations at 2, 3, 5, and 8 years were all related to lower FSIQ at age 8 (β for 3 years: -7.7 points, 95% CI: -12.5, -2.9; β for 8 years: -5.6 points, 95% CI: -10.8, -0.4). Multiple PBDE congeners at 8 years were associated with increased hyperactivity and aggressive behaviors at 8 years. Conclusions: Postnatal PBDE exposure was associated with decrements in FSIQ and increases in hyperactivity and aggressive behaviors. abstract_id: PUBMED:31253486 Fetal exposure to polybrominated diphenyl ethers and the risk of hypospadias: focus on the congeners involved. Background: Polybrominated diphenyl ethers (PBDEs) are widely used flame retardants, and their endocrine-disrupting properties have focused growing attention on their teratogenic potential. We have recently documented that mothers of children born with hypospadias had been exposed to statistically higher levels of PBDE during pregnancy than mothers of healthy controls. However, it is not known which congeners of PBDE are associated with this putative teratogenic effect.
Objectives: To identify PBDE congeners associated with increased risk for hypospadias. Study Methods: Hair samples from mothers were analyzed and compared between hypospadias cases and healthy controls for eight PBDE congeners using gas chromatography mass spectrometry (GC/MS). Polybrominated diphenyl ether levels were measured in the 0- to 3-cm segment closest to the skull of maternal hair as a proxy for in utero exposure of mothers who lived in the same environment for the duration of their pregnancy. Results: Median maternal hair levels of five PBDE congeners (28, 47, 99, 153, and 154) and of total PBDE (∑PBDE) were significantly higher among mothers of infants with hypospadias (n = 152) than among controls (n = 64). Apparent greater differences in the lower brominated congeners, especially in BDE-47 and BDE-99, may be due to the fact that they had been used in larger amounts, and their persistence properties confer longer exposure. Conclusions: The majority of the lower brominated PBDE congeners measured in maternal hair exhibited higher PBDE body burden during pregnancy in mothers of infants who were born with hypospadias. abstract_id: PUBMED:24846783 Promoting environmentally sound management of polybrominated diphenyl ethers in Asia. Polybrominated diphenyl ethers with persistent organic pollutant properties are required to be controlled by the Stockholm Convention. Recently, polybrominated diphenyl ether contamination has become widespread in Asia, mainly because of the disposal and recycling processes of polybrominated diphenyl ether-containing wastes. The management status, production, usage, import/export, treatment, and disposal, as well as implementation deficiencies for the environmentally sound management of polybrominated diphenyl ethers and polybrominated diphenyl ether-containing materials in ten Asian countries were investigated and assessed in this study. This information could help the participating countries implement the Stockholm Convention and could promote the regional environmentally sound management of polybrominated diphenyl ether-containing articles and products. The results obtained were as follows. (1) Most of the countries studied lacked environmental policies and regulations, or even standards of polybrominated diphenyl ether pollution management and emission control actions. Accurate data on the consumption and importation of polybrominated diphenyl ether-containing materials, however, were not available for all the participating countries. In addition, there were no special treatment or disposal systems for polybrominated diphenyl ether-containing materials, or emission-cutting measures for the treatment of waste in these countries, owing to the lack of sufficient funding or technologies. (2) The improper dismantling of e-waste is a major source of polybrominated diphenyl ether emissions in these countries. (3) Proper e-waste management could result in a breakthrough in the environmentally sound management of this major polybrominated diphenyl ether-containing material flow, and could significantly reduce polybrominated diphenyl ether emissions. Finally, based on the study results, this article puts forward some recommendations for improving the environmentally sound management of polybrominated diphenyl ethers in Asia. abstract_id: PUBMED:16117131 Effects of selected polybrominated diphenyl ether flame retardants on lake trout (Salvelinus namaycush). 
Polybrominated diphenyl ether (PBDE) flame retardants have been identified as an emerging contaminant issue in many parts of the world. In vitro analyses were conducted to test the hypothesis that selected PBDE congeners affect viability, apoptosis, and necrosis of thymocytes from laboratory-reared lake trout (Salvelinus namaycush). At current environmental levels (<1 mg/L), effects of the tested PBDEs on thymocytes were negligible. However, at 100 mg/L, major effects were seen for congener brominated diphenyl ether 47 (BDE-47) and minor effects were seen for congener BDE-99. abstract_id: PUBMED:11482396 Polybrominated diphenyl ether (PBDE) flame retardants. Polybrominated diphenyl ether, PBDE, flame retardants are now a world-wide pollution problem reaching even remote areas. They have been found to bioaccumulate and there are concerns over the health effects of exposure to PBDEs; they also have potential endocrine-disrupting properties. They are lipophilic compounds so are easily removed from the aqueous environment and are predicted to sorb onto sediments and particulate matter or to fatty tissue, aiding their distribution throughout the environment. PBDEs are structurally similar to PCBs and DDT and, therefore, their chemical properties, persistence and distribution in the environment follow similar patterns. Concentrations of PBDEs found in environmental samples are now higher than those of PCBs. Evidence to date demonstrates that PBDEs are a growing problem in the environment and concern over their fate and effects is warranted. The manufacture of reactive and additive flame retardants is briefly discussed and their fate and behaviour in the environment is assessed. PBDE toxicology is reviewed and methods of analysis are evaluated. abstract_id: PUBMED:19921894 Immunoassay for monitoring environmental and human exposure to the polybrominated diphenyl ether BDE-47. We developed a selective competitive enzyme-linked immunosorbent assay (ELISA) to monitor environmental and human exposure to polybrominated diphenyl ether BDE-47 that is used as a flame retardant. 2,2',4,4'-Tetrabromodiphenyl ether (BDE-47), a dominant PBDE congener of toxicological concern, was the target analyte. To achieve effective hapten presentation on the carrier protein for antibody production, immunizing haptens with a rigid double-bonded hydrocarbon linker introduced at different positions on the target molecule were synthesized as well as coating haptens that mimic a characteristic fragment of the molecule. Rabbit antisera produced against each immunizing antigen were screened against competitive hapten coating antigens. Under optimized competitive indirect ELISA conditions, the linear detection range in the assay buffer that includes 50% dimethyl sulfoxide was 0.35-8.50 microg/L with an IC50 value of 1.75 microg/L for BDE-47. Little or no crossreactivity (<6%) was observed to related PBDE congeners containing the BDE-47 moiety and other halogenated compounds. Using a magnetic particle-based competitive direct ELISA increased the sensitivity by 10-fold over the indirect ELISA. The ELISA provided quantitative results when performed on small volume/weight samples such as dust, furniture foam, and blood/serum following sample preparation, suggesting a convenient screening tool. abstract_id: PUBMED:26372669 Polybrominated Diphenyl Ether Exposure and Thyroid Function Tests in North American Adults.
Background: Polybrominated diphenyl ethers (PBDEs) are flame-retardant chemicals that are added to many consumer products. Multiple animal studies have shown PBDEs to be thyroid hormone (TH) disruptors. Epidemiologic evidence of PBDE exposure associated with TH disruption has been inconclusive. Objectives: We used repeated measures to estimate associations between serum PBDE concentrations and THs in a North American adult cohort. Methods: From 2010 to 2011, we collected ≤ 3 serum samples at approximately 6-month intervals from 52 healthy adult office workers from Boston, Massachusetts, for analysis of PBDE congeners and THs. Results: The geometric mean sum concentrations of the most prevalent PBDE congeners (BDE-28, BDE-47, BDE-99, BDE-100, and BDE-153) were 22 ng/g lipid in winter 2010, 23 ng/g lipid in summer 2010, and 19 ng/g lipid in winter 2011. BDE-47 was the predominant congener. Based on a multivariable mixed regression model, we estimated that on average, a 1-ng/g serum increase in BDE-47 was associated with a 2.6-μg/dL decrease in total thyroxine (T4) (95% CI: -4.7, -0.35). Total T4 was inversely associated with each PBDE congener. Serum concentrations of PBDEs were not strongly associated with total triiodothyronine (T3), free T4, or thyroid-stimulating hormone (TSH). Conclusion: These results are consistent with those from animal studies showing that exposure to PBDEs is associated with a decrease in serum T4. Because the other TH concentrations did not appear to be associated with BDE exposures, our findings do not indicate effects on the pituitary-thyroid axis. Taken together, our findings suggest that PBDE exposure might decrease the binding of T4 to serum T4 binding proteins. Citation: Makey CM, McClean MD, Braverman LE, Pearce EN, He XM, Sjödin A, Weinberg JM, Webster TF. 2016. Polybrominated diphenyl ether exposure and thyroid function tests in North American adults. Environ Health Perspect 124:420-425; http://dx.doi.org/10.1289/ehp.1509755. Answer: Based on the information provided in the abstracts, flying does not appear to present a significant threat of polybrominated diphenyl ether (PBDE) exposure to US professional airline workers. A study investigating the possible exposure to PBDEs in this population found that total PBDE blood levels from the 30 volunteers were unremarkable, with no statistically significant correlations noted between hours in airplanes in the past 1 or 5 years and levels of individual brominated diphenyl ether congeners or total PBDEs (PUBMED:21124237). Therefore, the hypothesis that elevated PBDE levels could be associated with time spent in airliners was not supported by the findings of this study.
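The BDE-47 immunoassay in this record reports a linear detection range and an IC50 of 1.75 microg/L. Competitive ELISAs of this kind are commonly calibrated with a four-parameter logistic (4PL) curve; the sketch below fits such a curve with SciPy to synthetic standards. The function name, data points, and starting values are assumptions for illustration, not the study's actual calibration data.

```python
# Illustrative sketch only: fitting a four-parameter logistic (4PL) calibration
# curve to competitive-ELISA standards and reading off the IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, slope):
    """Signal falls from `top` toward `bottom` as analyte concentration rises."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** slope)

conc = np.array([0.1, 0.35, 0.8, 1.75, 4.0, 8.5])      # ug/L, synthetic standards
resp = np.array([1.85, 1.60, 1.30, 0.95, 0.60, 0.40])  # absorbance, synthetic

params, _ = curve_fit(four_pl, conc, resp, p0=[0.3, 2.0, 1.5, 1.0])
print(f"Estimated IC50 = {params[2]:.2f} ug/L")
```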
Instruction: Active management of labor: does it make a difference? Abstracts: abstract_id: PUBMED:37656970 Active versus expectant management for premature rupture of membranes at term: A randomized, controlled study. Objective: To compare the effects on feto-maternal outcomes of expectant versus active management for premature rupture of membranes (PROM) at term. Methods: This was a prospective randomized (1:1) controlled study involving 86 pregnant women who received either expectant management (n = 43) or active management with misoprostol (n = 43) for PROM at term. Primary outcome was route of delivery. Secondary outcomes were: PROM to presentation interval; latency period; PROM to delivery interval; recruitment to delivery interval; labour and delivery complications. Results: Baseline characteristics were similar between groups. There was no significant difference between active and expectant groups in mean PROM to presentation/admission, or PROM to delivery. However, mean latency period (11.1 ± 7.3 hours vs 8.8 ± 5.5 hours) and mean recruitment to delivery intervals after PROM (14.7 ± 5.2 hours vs 11.8 ± 5.0 hours) were significantly shorter for the active group compared with the expectant group. Although the rate of caesarean section was lower in the expectant management group (21%) compared with the active management group (30%), the difference was not statistically significant. There were no significant differences between groups in delivery or perinatal complications. Conclusion: Active and expectant management for PROM at term gave comparable outcomes in terms of methods of delivery and complications. However, active management significantly shortened the latency period and induction to delivery intervals compared with expectant management. Trial Registration: Pan-African trial registry (PACTR) approval number PACTR202206797734088. abstract_id: PUBMED:25456626 The appraisal of difference: critical gerontology and the active-ageing-paradigm. The article deals with the re-negotiation of old age in current times of flexible capitalism and its analysis by Critical Gerontologists who criticize this process as age denial and midlife-imperialism. Starting out from the instructive critique of active ageing and consumer-based anti-ageing strategies, rooted in the heterogeneous field of Critical Gerontology, the contribution presented here aims at critically reviewing and discussing this critique. The article exposes theoretical pitfalls that make this critique run into a dead end, since old age tends to be homogenized and sometimes even naturalized within Critical Gerontology: though certainly often unintended, the appreciation of old age as being positively different from midlife ends up sheltering "old people" as "the others" from the impositions of active society. After elaborating on this difference perspective and discussing its problems, I will finally sketch some conceptual ideas, inspired by poststructuralist thinking, on how to overcome the fruitless dichotomy of imperialism/sameness ("they have to be like us") and difference ("they are the others"). abstract_id: PUBMED:37628346 Maternal Outcomes Following Active vs. Expectant Management of Previable Preterm Pre-Labor Rupture of Membranes: A Meta-Analysis. The diagnosis of previable preterm pre-labor rupture of membranes (PROM) is known to be associated with poor outcomes for both the mother and the fetus.
Following previable preterm PROM, patients are generally offered either active management through the termination of the pregnancy or expectant management to increase the chances of fetal survival. It is difficult to counsel patients because there is a lack of data directly comparing maternal outcomes following active vs. expectant management. Using the data in the current literature, the goal of the present meta-analysis was to determine if there were any differences in terms of maternal risks when active versus expectant management was chosen. PubMed, Google Scholar, EMBASE, and Scopus were searched. We found four studies accounting for a total of 506 patients. The risk ratio (RR) of chorioamnionitis in active vs. expectant management was 0.30 (with a 95% confidence interval, CI, of 0.09-1.02). The heterogeneity of the study results was 81% (I2). A sub-analysis of two included studies revealed an RR of postpartum hemorrhage in active vs. expectant management of 0.75 (95% CI 0.27-2.07) and an RR of maternal sepsis of 0.23 (95% CI 0.04-1.28). The heterogeneity of the study results for this sub-analysis was 68% (I2) for postpartum hemorrhage and 0% (I2) for maternal sepsis. Overall, there was no statistically significant difference in the risk of chorioamnionitis, postpartum hemorrhage, or maternal sepsis when active management was chosen over expectant management in previable preterm PROM at <24 weeks. The scarcity and the high heterogeneity of the available data likely contributed to the lack of statistical significance and call for further work directly comparing maternal outcomes following active vs. expectant management. abstract_id: PUBMED:27651575 Obstetric Outcome During an Era of Active Management for Obstetrics Cholestasis. Objective: To study the incidence, maternal, and perinatal outcomes in intrahepatic cholestasis of pregnancy with active management. Materials And Methods: This observational study included 48 cases diagnosed as intrahepatic cholestasis of pregnancy. Medical treatment and active management (fetal surveillance and termination of pregnancy at 37-38 weeks) were offered to all. Maternal and perinatal outcomes were studied. Results: Incidence of ICP was 2.4%. More than 86% of patients presented with generalized pruritus, mostly after 30 weeks. Incidences of intrapartum abnormal cardiotocography (12.5%) and thick meconium (8.33%) were high. Cesarean section rate was high (41.66%). Most common indications were CDMR, fetal distress, and non-progress of labor. Incidence of preterm labor, IUGR, Apgar score (<7), and neonatal admission rate were not high. There was one stillbirth at 36 weeks of gestation. Serum transaminase levels tended to be higher in patients with poor perinatal outcomes, such as stillbirth, fetal distress, and meconium-stained amniotic fluid. Conclusion: Serum transaminase levels tended to be higher in patients with poor perinatal outcome. Perinatal outcome is good with active management, at the cost of a higher LSCS rate. A further randomized controlled trial of early versus spontaneous delivery may be justified. abstract_id: PUBMED:9322630 Active management of labor: does it make a difference? Objective: Our goal was to evaluate whether active management of labor lowers cesarean section rates, shortens the length of labor, and overcomes any negative effects of epidural analgesia on nulliparous labor.
Study Design: We randomly assigned 405 low-risk term nulliparous patients to either an active management of labor protocol (n = 200) or our usual care control protocol (n = 205). Patients who were undergoing active management of labor were diagnosed as being in labor on the basis of having painful palpable contractions accompanied by 80% cervical effacement, underwent early amniotomy, and were treated with high-dose oxytocin for failure to progress adequately in labor. Results: The cesarean section rate in the active management of labor group was lower than that of controls but not significantly so (active management, 7.5%; controls, 11.7%; p = 0.36). The length of labor in the active management group was shortened by 1.7 hours (from 11.4 to 9.7 hours, p = 0.001). Fifty-five percent of patients received epidural analgesics; a reduction in length of labor persisted despite the use of epidural analgesics (active management 11.2 hours vs control 13.3 hours, p = 0.001). A significantly greater proportion of active management patients were delivered by 12 hours compared with controls (75% vs 58%, p = 0.01); this difference also persisted despite the use of epidural analgesics (66% vs 51%, p = 0.03). Conclusions: Patients undergoing active management had shortened labors and were more likely to be delivered within 12 hours, differences that persisted despite the use of epidural analgesics. There was a trend toward a reduced rate of cesarean section. abstract_id: PUBMED:34999323 Outcomes of dietary management approaches in active ulcerative colitis: A systematic review. Background & Aims: The dietary management of active ulcerative colitis (UC) is currently poorly understood. Due to the lack of clinical guidelines for this population, diet choice may be based on the personal judgement of the clinician, and without sound evidence. The aim of this systematic review was to appraise the current literature on the dietary management of individuals with active UC, in both inpatient and outpatient settings, to determine if clinical outcomes differ by diet prescription. Method: PUBMED, CINAHL, EMBASE, Web of Science and SCOPUS were comprehensively searched during March and April 2020. Eligible trials recruited adults with active UC comparing different methods of dietary management, including enteral nutrition (EN), total parenteral nutrition (TPN), elimination diets and standard oral diets, in both the inpatient and outpatient settings. Results: Ten studies met the inclusion criteria of this qualitative synthesis. No difference was found between EN, TPN and bowel rest in terms of disease activity measures when compared to a standard oral diet. The results of this study also showed promising potential for the use of elimination diets in the outpatient setting, with four studies finding a significant difference in disease activity measures between the intervention diet and control. Conclusion: There is no strong evidence to support the use of any specific dietary prescription to improve clinical outcomes for individuals with active UC. A number of low-quality studies suggest a benefit of following an elimination diet; however, additional high-quality studies are required before any more specific recommendations can be made. abstract_id: PUBMED:24704063 Primum non nocere: how active management became modus operandi for intrahepatic cholestasis of pregnancy. The Royal College of Obstetrics and Gynecology does not endorse routine active management of intrahepatic cholestasis of pregnancy (ICP)-affected pregnancies.
In contrast, the American College of Obstetricians and Gynecologists supports active management protocols for ICP. To investigate this controversy, we evaluated the evidence supporting ICP as a medical indication for early term delivery and the evolution of active management protocols for ICP. Sixteen articles published between 1986 and 2011 were identified. We created 2 groups based on whether obstetric care included active management. Group 1 comprised 6 uncontrolled reports without active management, published between 1967 and 1983, that described high perinatal mortality rates primarily related to prematurity sequelae. This group became the fundamental 'core' evidence for ICP-associated stillbirths and, by extrapolation, justification for active management. Group 2 comprised 10 reports in which the authors credited empirically adopted active management with the observed low stillbirth rates in ICP-affected pregnancies. Although the group 1 articles routinely are cited as evidence of ICP-associated stillbirth risk, the 1.2% stillbirth rate (4/331) in this group is similar to the background stillbirth rates of 1.1% (11/1000) and 0.6% (6/1000) in 1967 and 2011, respectively (P = .062 and P = .0614, respectively). Likewise, the stillbirth rates for articles in group 2 were similar to their respective national stillbirth rates. Nevertheless, group 2 articles have become the evidence-based support for active management. We found no evidence to support the practice of active management for ICP. abstract_id: PUBMED:34904528 Patients' Active Participation in Postoperative Pain Management in an Urban Hospital of Vietnam: Implications for Patient Empowerment. Active participation in pain management is vital to improve postoperative pain outcomes. However, this issue has not been fully examined in Vietnam. This study aimed to examine the active participation of patients in pain management after surgery, as well as explore its effect on acute postoperative pain. A hospital-based survey on 245 patients after surgery was conducted. Information about demographic and clinical characteristics, pain intensity and active participation in pain management was collected. Multivariate regression models were utilized to determine the associations. 53.9% of patients reported that they were informed about the postoperative pain relief method before surgery. One-third (33.5%) of patients selected preferred pain relief methods; 46.1% reported that they asked physicians when feeling pain immediately after surgery; 49.8% asked physicians when pain was not relieved after taking medications, and 52.2% asked physicians for their current pain at the time of interview. Age and occupation were found to be positively associated with active participation score. Patients being informed about the postoperative pain relief method before surgery scored 0.87 points higher than those not receiving an explanation (Coef. = 0.87; 95% CI = 0.49-1.26). Patients with high active participation scores were more likely to have pain improvement (OR = 3.41, 95% CI = 2.37-4.92). This study highlights a low level of active participation in postoperative pain management among Vietnamese patients. Routinely providing information about pain control before surgery, and encouraging patients to actively participate in pain management, are essential to improve postoperative pain outcomes. abstract_id: PUBMED:38124670 Unexplained subfertility: active or conservative management?
Objectives: Unexplained subfertility (UEI) describes a couple whose standard subfertility workup is considered acceptable but who are unable to conceive. Methods: This retrospective study was conducted in the Advanced Reproductive Centre, UKM Hospital, Kuala Lumpur, from January 2016 to December 2019. The data of 268 UEI couples were obtained from the clinical database. Women aged 21-45 years were included and further divided into four groups according to the female partner's age and subfertility duration: group A (age <35 years and subfertility <2 years), group B (age <35 years and subfertility >2 years), group C (age >35 years and subfertility <2 years), and group D (age >35 years and subfertility >2 years). All statistical analyses were performed using SPSS 22.0 for Windows. Results: A total of 255 cases were included in this study. The mean age of the women was 32.9 ± 4.04 years, and the mean subfertility duration was 5.04 ± 2.9 years. A total of 51 (20%) cases underwent timed sexual intercourse, 147 (57.6%) cases had intrauterine insemination (IUI), whereas 57 (22.4%) cases opted for in vitro fertilization (IVF). A total of 204 cases underwent active management (IUI/IVF), which showed a significant difference (p < 0.05). Out of eight clinical pregnancies, half were from group B. Conclusions: Active management in younger women with a shorter subfertility duration revealed a better pregnancy outcome. Otherwise, individualized treatment should be considered in selecting a suitable treatment plan. abstract_id: PUBMED:24966500 A comparative study of oxytocin/misoprostol/methylergometrine for active management of the third stage of labor. Objectives: To study oxytocin, misoprostol, and methylergometrine in active management of the third stage of labor and determine duration of the third stage of labor, blood loss, adverse effects, and need for additional uterotonics in each group. Methods: Clinical trial of 300 women with healthy singleton pregnancy allocated into three groups to receive either 10 IU intravenous oxytocin infusion, 600 μg sublingual misoprostol, or 200 μg intravenous methylergometrine. Primary outcome measure was blood loss in the third stage of labor; secondary measures were duration of the third stage, side effects, and complications. Results: Subjects who received 600 μg of misoprostol had the least blood loss, followed by oxytocin and methylergometrine. The shortest mean duration of the third stage was with misoprostol. Shivering and pyrexia were observed in the misoprostol group, and raised blood pressure in the methylergometrine group. Conclusions: Misoprostol is as effective as oxytocin and both are more effective than methylergometrine in active management of the third stage of labor. Answer: Active management of labor does appear to make a difference in certain aspects of the birthing process. According to the studies provided, active management can lead to a reduction in the duration of labor and may influence other outcomes, although the impact on cesarean section rates is not always clear. For instance, a study on active versus expectant management for premature rupture of membranes (PROM) at term found that active management significantly shortened the latency period and induction to delivery intervals compared with expectant management, without a significant difference in the rate of cesarean sections or delivery/perinatal complications (PUBMED:37656970).
Another study evaluating active management of labor in low-risk term nulliparous patients showed that those undergoing active management had shortened labors and were more likely to be delivered within 12 hours, even when epidural analgesics were used. There was also a trend toward a reduced rate of cesarean section, although this was not statistically significant (PUBMED:9322630). In the context of intrahepatic cholestasis of pregnancy, active management, which included fetal surveillance and termination of pregnancy at 37-38 weeks, was associated with good perinatal outcomes, albeit with a higher cesarean section rate (PUBMED:27651575). However, another article critically examined the evidence for active management in intrahepatic cholestasis of pregnancy and found no evidence to support the practice (PUBMED:24704063). For previable preterm PROM, a meta-analysis indicated no statistically significant difference in the risk of chorioamnionitis, postpartum hemorrhage, or maternal sepsis when active management was chosen over expectant management (PUBMED:37628346). In the case of dietary management approaches in active ulcerative colitis, no strong evidence was found to support the use of any specific dietary prescription to improve clinical outcomes, although some low-quality studies suggested benefits of following an elimination diet (PUBMED:34999323). Overall, the evidence suggests that active management of labor can lead to shorter labor durations and may have other benefits, but the impact on cesarean section rates and other outcomes may vary depending on the specific context and condition being managed.
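As a quick check on the arithmetic behind the ICP conclusion above (PUBMED:24704063), the comparison of the pooled group 1 stillbirth rate (4/331) against the background rates (11/1000 in 1967, 6/1000 in 2011) can be run as a standard two-proportion test. A minimal sketch in Python, assuming scipy is available; the abstract does not state which test produced P = .062/.0614, so the values below need not match exactly:

```python
# Hypothetical re-analysis of the ICP stillbirth comparison quoted above.
# Group 1 pooled reports: 4 stillbirths / 331 births; background samples are
# taken at the reported rates of 11/1000 (1967) and 6/1000 (2011).
from scipy.stats import fisher_exact

group1 = (4, 331)            # (stillbirths, total births)
background_1967 = (11, 1000)
background_2011 = (6, 1000)

def compare(a, b):
    """Two-sided Fisher exact test on a 2x2 table of stillbirth vs. live birth."""
    table = [[a[0], a[1] - a[0]],
             [b[0], b[1] - b[0]]]
    odds_ratio, p_value = fisher_exact(table)
    return odds_ratio, p_value

for label, bg in [("1967", background_1967), ("2011", background_2011)]:
    or_, p = compare(group1, bg)
    print(f"group 1 vs {label} background: OR = {or_:.2f}, p = {p:.3f}")
```

Whatever the exact test used in the original article, a non-significant p-value here supports the abstract's reading that the group 1 stillbirth rate resembles the background rate.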
Instruction: Are rural residents less likely to obtain recommended preventive healthcare services? Abstracts: abstract_id: PUBMED:11567838 Are rural residents less likely to obtain recommended preventive healthcare services? Background: This study examined rural-urban differences in utilization of preventive healthcare services and assessed the impact of rural residence, demographic factors, health insurance status, and health system characteristics on the likelihood of obtaining each service. Methods: National data from the 1997 Behavioral Risk Factor Surveillance System (BRFSS) and the 1999 Area Resource File were used to evaluate the adequacy of preventive services obtained by rural and urban women and men, using three sets of nationally accepted preventive services guidelines from the American Cancer Society, U.S. Preventive Services Task Force, and Healthy People 2010. Logistic regression models were developed to control for the effect of demographic factors, health insurance status, and health system characteristics. Results: Rural residents are less likely than urban residents to obtain certain preventive health services and are further behind urban residents in meeting Healthy People 2010 objectives. Conclusions: Efforts to increase rural preventive services utilization need to build on federal, state, and community-based initiatives and to recognize the special challenges that rural areas present. abstract_id: PUBMED:21280972 Receipt of preventive counseling among reproductive-aged women in rural and urban communities. Introduction: Preventive health interventions often occur less frequently among rural women compared to urban women. Preventive counseling is an important feature of comprehensive preventive healthcare provision, but geographic disparities in the receipt of preventive counseling services have not been fully described. In this study the framework of the behavioral model of healthcare utilization was employed to investigate the association between rurality and receiving preventive counseling. It was hypothesized that demographic differences in rural and urban communities, as well as differential healthcare resources, explain rural-urban healthcare disparities in preventive counseling. Methods: Data were collected by telephone survey during 2004-2005 for 2002 participants aged 18-45 years in the Central Pennsylvania Women's Health Study. Measures of preventive counseling were based on US Preventive Services Task Force recommendations as of 2004. Multivariable models assessed the independent contribution of rurality to the receipt of counseling for smoking, alcohol/drug use, birth control, nutrition, weight management, and physical activity. Rurality was assessed using Rural-Urban Community Area Codes. All models controlled for variables that predispose individuals to use health services (age, race/ethnicity, educational level), variables that enable or impede healthcare access (having a usual healthcare provider, using an obstetrician-gynecologist, poverty, and continuous health insurance coverage) and need-based variables (health behaviors and indicators). Results: In bivariate analysis, the rural population was older, had lower educational attainment, and was more likely to be White, non-Hispanic. Urban women tended to report seeing an obstetrician-gynecologist more frequently, and engaged more frequently in binge drinking/drug use. 
Preventive counseling was low among both rural and urban women, and ranged from 12% of the population for alcohol/drug use counseling to 37% for diet or nutrition counseling. The degree of rurality appeared to impact counseling, with women in small or isolated rural areas significantly less likely than urban women and women in large rural areas to receive counseling related to smoking, alcohol/drug use and birth control. Overall, rural women reported less counseling for alcohol/drug use, smoking, birth control, nutrition and physical activity. In multivariable analysis, rurality was independently associated with lack of preventive counseling for physical activity. However, adjusting for predisposing, enabling and need-based variables fully attenuated the effect of rurality in the remaining models. Younger age, higher educational attainment, and seeing any obstetrician-gynecologist were associated with receipt of counseling in several models. Conclusions: Most women do not receive recommended preventive counseling. While rural women are less likely than urban women to receive counseling, rurality generally was not independently associated with receipt of counseling once demographics, access to health care, and health behaviors and indicators were controlled. This suggests that both demographic differences between rural and urban communities as well as aspects of healthcare access govern rural-urban healthcare disparities in preventive counseling. These results speak to important targets for reducing urban-rural healthcare disparities in receiving preventive counseling: improving the health literacy of the rural population, educating rural healthcare providers about the need for preventive counseling, and expanding access to obstetrician-gynecologists in rural communities. abstract_id: PUBMED:27631229 Urban-rural disparity in utilization of preventive care services in China. Preventive care services are considered pivotal against the background of demographic ageing and a rise in chronic diseases in China. The disparity in utilization of preventive care services between urban and rural areas in China is a serious issue. In this paper, we explored factors associated with the urban-rural disparity in utilization of preventive care services in China, and determined how much of the urban-rural disparity was attributable to each determinant of utilization of preventive care services. Using representative sample data from the China Health and Nutrition Survey in 2011 (N = 12,976), the present study used a multilevel logistic model to examine the factors that affected utilization of preventive care services in the last 4 weeks. The Blinder-Oaxaca decomposition method was applied to divide the urban-rural disparity in utilization of preventive care into a part that can be explained by differences in observed covariates and an unexplained part. The percentage of rural residents utilizing preventive care services in the last 4 weeks was lower than that of urban residents (5.1% vs 9.3%). Female residents, older residents, residents with higher education levels and household income, and residents reporting self-perceived illness in the last 4 weeks or physician-diagnosed chronic disease had a higher likelihood of utilizing preventive care services.
Household income was the most important factor, accounting for 26.6% of the urban-rural disparity in utilization of preventive care services, followed by education (21.5%), self-perceived illness in the last 4 weeks (7.8%), hypertension (4.4%), diabetes (3.3%), other chronic diseases (0.8%), and health insurance (-1.0%). Efforts to reduce financial barriers for low-income individuals who cannot afford preventive services, to increase awareness of the importance of obtaining preventive health services, and to provide more preventive health services covered by health insurance may help to reduce the urban-rural gap in utilization of preventive care services. abstract_id: PUBMED:38505764 Rural/Urban differences in uptake of preventive healthcare services: Variability in observed relationships across measures of rurality. Rural residents are generally less likely to receive preventive healthcare than are urban residents, but variable measurement of rurality introduces inconsistency to these findings. We assessed the relationships between perceived and objective measures of rurality and uptake of preventive healthcare. In our sample, rural participants generally had equal or higher uptake of healthcare (i.e. private health insurance, check-up in the past year, being up-to-date on colorectal and cervical cancer screening) than urban participants. Importantly, the perceived measure of rurality performed similarly to the objective measures, suggesting that participant report could be a valid way to assess rurality in health studies. Significance for Public Health: The ability to access routine preventive healthcare is a key component of public health. Comparing uptake of cancer screening in rural versus urban areas is one way to assess equity of healthcare access. Generally, rural areas have a higher burden of cancer than urban areas. The built environment, socioeconomic status, and patient perceptions can impact an individual's access to routine cancer screening. Preventive healthcare is of great importance to public health as a whole because screening can facilitate earlier diagnosis and more successful treatment for many preventable cancers, which may ultimately increase the quality and quantity of life. abstract_id: PUBMED:11915096 Determinants of preventive services utilization in rural older women. Research describing preventive services utilization and determinants of preventive services utilization for rural older women is scant. The purpose of this study was to: (a) describe the prevalence of preventive services utilization in compliance with the U.S. Preventive Services Task Force (USPSTF) recommendations for screening, counseling, and immunizations among rural older women in four Nebraska counties and (b) determine the extent to which personal influences (e.g., demographics, definition of health, perceived health status) and contextual influences (e.g., access to care, sources of health information, provider recommendations) explain preventive services utilization among those women. A sample of 102 community-dwelling rural women age 65 and older participated in the study. The prevalence of receipt of USPSTF-recommended preventive services among these rural women was alarmingly low. Multiple regression analyses revealed that provider recommendations, access to care, sources of health information, and perceived health status were significantly associated with preventive services utilization.
These findings provide information that is relevant in designing interventions to increase preventive services utilization among rural older women. abstract_id: PUBMED:11573461 Rural, urban and suburban comparisons of preventive services in family practice clinics. Numerous studies have compared health services provided in rural and urban areas, and overall they have found that utilization is lower in rural areas. A significant factor in lower utilization is that rural residents have less access to health services. Much less is known about rural and urban utilization differences once a patient has access to a service provider. This paper focuses on preventive services received when a patient is already in a clinic. Using data from an in-depth qualitative study of 16 family practice clinics in Nebraska, comparisons of physician-specific preventive service rates are made across three geographic categories: rural, urban and suburban. Results from a one-way multivariate analysis of variance show that preventive services rates for nine services examined were as high or higher in rural areas, suggesting that rural health services do not lag for patients with access. abstract_id: PUBMED:10728114 Preventive services for rural and urban African American adults. Background: Studies have shown that African Americans and rural patients receive fewer preventive services than other patients. Objective: To compare the use of preventive services by African Americans in urban and rural settings to determine if race and rural residence were additive risks for not obtaining preventive services. Methods: Three hundred African American patients seeking care in family practices in South Carolina were surveyed about preventive health care. Results: Rural and urban African Americans were equally likely to know about preventive services and be up-to-date on receiving these services. In both practices, those with lower incomes were less likely to be up-to-date. Patients seen in the urban setting were more likely to receive counseling regarding exercise and smoking than those in the rural practice (87% vs 71%, P = .003). Conclusions: For both urban and rural African American patients with access to primary care physicians, preventive service use is high. The best predictor of poor compliance with preventive service recommendations was low income, suggesting that a lack of access to care is the primary reason why rural and African American populations do not receive adequate preventive health care. abstract_id: PUBMED:25091022 Does China's new cooperative medical scheme promote rural elders' access to healthcare services in relation to chronic conditions? Background: This study examined use of healthcare services by elderly people with a long-term condition living in rural China and participating in that country's New Cooperative Medical Scheme (NCMS). Methods: Data were taken from the 2009 China Health and Nutrition Survey. Results: The study results showed that education level and annual income were low among rural elders (aged ≥65 years); the illiteracy rate was 58.0% (200/345) and 67.6% (92/136) had an annual income of less than 5000 Chinese yuan renminbi. The elders had low rates of treatment and use of preventive healthcare services while the prevalence of chronic disease was highest among them when compared to other age groups. Among the few elders who received preventive healthcare, most received it at village clinics. 
Conclusions: Preventive healthcare services and chronic disease management are urgently needed by rural elders in China. NCMS should initiate payments to township health centers to enable them to undertake community outreach and education campaigns among rural residents, and train village doctors in standard chronic condition management and prevention, with prioritization of the provision of prevention and management of chronic conditions by village clinics. abstract_id: PUBMED:35493353 Urban-Rural Differences: The Impact of Social Support on the Use of Multiple Healthcare Services for Older People. Background: Population aging has brought many kinds of chronic diseases and high incidence and hospitalization rates among older people. The increasing demand for healthcare services has become an increasingly prominent problem in Chinese society. The purpose of this paper is to explore the influence of social support on multiple healthcare services for older people and the urban-rural differences among them. Methods: The data are from our field survey in Shaanxi province in 2019. Using the Anderson model as the basic explanatory framework, this paper uses a Tobit model to analyze the impact of social support for older people on the utilization of therapeutic healthcare services, and a logit model to analyze the impact of living arrangements and social support on the utilization of preventive healthcare services for older people. Results: This paper examines the impact of formal and informal support on outpatient, inpatient, and preventive healthcare services for older people, provides an in-depth analysis of the differences in the impact of social support on healthcare service utilization between urban and rural older people, and analyzes the contribution of various factors to the impact. The coefficient effect is divided into two parts: the coefficient "premium" of urban older people relative to urban and rural older people as a whole, which accounts for 10.8% of the total difference, and the "premium" of urban and rural older people as a whole relative to rural older people, which accounts for 18.9% of the total difference. The coefficient effect accounts for 29.7% of the total difference. Conclusions: Rural older people place greater importance on the quantity of formal social support, while urban older people place greater importance on the quality of formal social support. The phenomenon of raising children for old age was evident in the use of healthcare services by rural older people, while the phenomenon of distant relatives being better than close neighbors was evident in the use of healthcare services by urban older people. Free preventive healthcare services in rural China have largely compensated for the lack of health benefits for rural older people. abstract_id: PUBMED:16638524 Preventive health examinations: a comparison along the rural-urban continuum. In this analysis, Medical Expenditure Panel Survey data from 2000 were used to examine differences in reports of preventive health service utilization in 4 types of counties: large metropolitan counties, small metropolitan counties, counties adjacent to metropolitan places, and counties not adjacent to metropolitan areas or with fewer than 10,000 residents. Women from counties with 10,000 or fewer residents and not adjacent to a metropolitan county, classified as rural residents, were less likely to report a number of preventive health examinations during the previous 2 years.
Rural women were less likely to obtain blood cholesterol tests, dental exams, and mammograms during the previous 2 years when compared to women from large metropolitan counties. Rural women were more likely to obtain blood pressure checks during the previous year when compared to metropolitan women. Findings for exams that occurred during the preceding 1- and 2-year periods are reported for blood pressure checks, blood cholesterol checks, physical exams, colon cancer screening, dental exams, breast exams, mammograms, and Pap smears. Answer: Yes, rural residents are less likely to obtain recommended preventive healthcare services compared to urban residents. Studies have shown that rural residents are less likely than urban residents to obtain certain preventive health services and are further behind urban residents in meeting Healthy People 2010 objectives (PUBMED:11567838). Additionally, rural women reported less counseling for alcohol/drug use, smoking, birth control, nutrition, and physical activity, and rurality was independently associated with lack of preventive counseling for physical activity (PUBMED:21280972). In China, the percentage of rural residents utilizing preventive care services was lower than that of urban residents, with household income being the most important factor accounting for urban-rural disparities in utilization of preventive care services (PUBMED:27631229). Moreover, rural women are less likely to receive counseling related to smoking, alcohol/drug use, and birth control, and the prevalence of receipt of recommended preventive services among rural older women was alarmingly low (PUBMED:11915096). In the context of family practice clinics, while preventive services rates for nine services examined were as high or higher in rural areas for patients with access, overall utilization is lower in rural areas due to less access to health services (PUBMED:11573461). Furthermore, rural African American patients were less likely to receive counseling regarding exercise and smoking than those in the urban setting (PUBMED:10728114), and rural elders in China had low rates of treatment and use of preventive healthcare services (PUBMED:25091022). Lastly, rural women were less likely to report a number of preventive health examinations such as blood cholesterol tests, dental exams, and mammograms compared to women from large metropolitan counties (PUBMED:16638524).
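The Blinder-Oaxaca decomposition cited for the Chinese data (PUBMED:27631229) splits the urban-rural utilization gap into a covariate (endowment) component and a coefficient component. A minimal sketch of the two-fold decomposition on a linear probability model, with purely synthetic, illustrative data; the original study used a multilevel logistic model on CHNS data, so the numbers here are not comparable:

```python
# Minimal two-fold Blinder-Oaxaca decomposition on a linear probability model,
# sketching how "explained" shares (income, education, ...) are typically
# obtained. Covariates and coefficients below are illustrative only.
import numpy as np

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Decompose mean(y_a) - mean(y_b) into explained and unexplained parts,
    using group-b coefficients as the reference. X_* must include an intercept."""
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b        # endowment differences
    unexplained = xbar_a @ (beta_a - beta_b)      # coefficient differences
    per_covariate = (xbar_a - xbar_b) * beta_b    # contribution of each covariate
    return explained, unexplained, per_covariate

rng = np.random.default_rng(0)
n = 500
# Illustrative design matrices: intercept, log household income, years of education
X_urban = np.column_stack([np.ones(n), rng.normal(10.0, 1.0, n), rng.normal(12, 3, n)])
X_rural = np.column_stack([np.ones(n), rng.normal(9.2, 1.0, n), rng.normal(8, 3, n)])
# Synthetic 0/1 utilization outcomes with income and education effects
p_u = np.clip(0.02 + 0.006 * (X_urban[:, 1] - 9) + 0.003 * X_urban[:, 2], 0, 1)
p_r = np.clip(0.02 + 0.006 * (X_rural[:, 1] - 9) + 0.003 * X_rural[:, 2], 0, 1)
y_urban = rng.binomial(1, p_u)
y_rural = rng.binomial(1, p_r)

explained, unexplained, parts = oaxaca_blinder(X_urban, y_urban, X_rural, y_rural)
gap = y_urban.mean() - y_rural.mean()
print(f"gap = {gap:.4f}, explained = {explained:.4f}, unexplained = {unexplained:.4f}")
print("per-covariate contributions (intercept, income, education):", np.round(parts, 4))
```

With an intercept in each group-specific regression, the explained and unexplained parts sum exactly to the raw gap, which is the property the reported percentage shares (26.6% for income, 21.5% for education, and so on) rely on.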
Instruction: Can fresh osteochondral allografts restore function in juveniles with osteochondritis dissecans of the knee? Abstracts: abstract_id: PUBMED:22972653 Can fresh osteochondral allografts restore function in juveniles with osteochondritis dissecans of the knee? Background: Failure of initial treatment for juvenile osteochondritis dissecans (OCD) may require further surgical intervention, including microfracture, autograft chondrocyte implantation, osteochondral autografting, and fresh osteochondral allografting. Although allografts and autografts will restore function in most adults, it is unclear whether fresh osteochondral allograft transplantations similarly restore function in skeletally immature patients who failed conventional treatment. Questions/purposes: Therefore, we determined function in (1) daily activity; (2) sports participation; and (3) healing (by imaging) in children with juvenile OCD who failed conventional therapy and underwent fresh osteochondral allograft transplantation. Methods: We retrospectively reviewed 11 children with OCD of the knee treated with a fresh stored osteochondral allograft between 2004 and 2009 (six males and five females). The average age of the children at the time of their allograft surgery was 15.2 years (range, 13-20 years). The clinical assessments included physical examination, radiography, MRI, and a modified Merle D'Aubigné-Postel score. The size of the allograft was an average of 5.11 cm². The minimum followup was 12 months (average, 24 months; range, 12-41 months). Results: All patients had returned to activities of daily living without difficulties at 6 months and returned to full sports activities between 9 and 12 months after surgery. The modified Merle D'Aubigné-Postel score improved from an average of 12.7 preoperatively to 16.3 at 24 months postoperatively. Followup radiographs at 2 years showed full graft incorporation and no demarcation between the host and graft bone. Conclusions: Our observations suggested fresh osteochondral allografts restored short-term function in patients with juvenile OCD who failed standard treatments. Level Of Evidence: Level IV, case series. See Guidelines for Authors for a complete description of levels of evidence. abstract_id: PUBMED:24414553 Osteochondral allograft transplantation of the knee in the pediatric and adolescent population. Background: Multiple studies in adults have shown that osteochondral allograft transplantation is an effective treatment option for large chondral and osteochondral defects of the knee. Limited outcome data are available on osteochondral allografts in the pediatric and adolescent patient populations. Purpose: To describe a 28-year experience with osteochondral allograft transplantation in patients younger than 18 years with a focus on subjective outcome measures, return to activities, and allograft survivorship. Study Design: Case series; Level of evidence, 4. Methods: A total of 39 patients (43 knees) underwent fresh osteochondral allograft transplantation for treatment of chondral and osteochondral lesions. Twenty-six male and 17 female knees with a mean age of 16.4 years (range, 11.0-17.9 years) at index surgery were followed up at a mean of 8.4 years (range, 1.7-27.1 years). Thirty-four knees (79%) had at least 1 previous surgery. The most common underlying causes of the lesions were osteochondritis dissecans (61%), avascular necrosis (16%), and traumatic chondral injury (14%). Mean allograft size was 8.4 cm².
The most common allograft location was the medial femoral condyle (41.9%), followed by the lateral femoral condyle (35%). Each patient was evaluated with the International Knee Documentation Committee pain, function, and total scores; a modified Merle d'Aubigné-Postel (18-point) scale; and Knee Society function score. Failure was defined as revision osteochondral allograft or conversion to arthroplasty. Results: Five knees experienced clinical failure at a median of 2.7 years (range, 1.0-14.7 years). Four failures were salvaged successfully with another osteochondral allograft transplant. One patient underwent prosthetic arthroplasty 8.6 years after revision allograft. Graft survivorship was 90% at 10 years. Of the knees whose grafts were in situ at latest follow-up, 88% were rated good/excellent (18-point scale). The mean International Knee Documentation Committee scores improved from 42 preoperatively to 75 postoperatively, and the Knee Society function score improved from 69 to 89 (both P < .05). Eighty-nine percent of patients reported "extremely satisfied" or "satisfied." Conclusion: With 88% good/excellent results and 80% salvage rate of clinical failures with an additional allograft, osteochondral allograft transplantation is a useful treatment option in pediatric and adolescent patients. abstract_id: PUBMED:28551052 Fresh osteochondral allotransplants: Outcomes, failures and future developments. Osteochondral allografts are used to treat many different conditions, such as acute traumatic large-sized lesions, degenerative osteoarthritis, osteochondritis dissecans, avascular necrosis, or failure of previous procedures, particularly in young patients for whom primary prosthesis is not desirable. Fresh allografts present the advantage of having mature viable hyaline cartilage, not causing donor morbidity, and allowing the restoration of even large defects in a single surgical session. Conversely, they carry risks of disease transmission and immunologic reactions, and their availability is limited. The present review aimed to analyze published studies of the last decade in which patients received fresh osteochondral allografts, dividing them by knee or ankle regenerative purposes. We wish to report the observed failure rates and particularly to collect any other reported side effect or outcome for identifying major problems and limits linked to the procedure and for delineating possible future research and approaches. The overall success rates ranged from 5.3% to 48.3% in the ankle at a mean follow-up of 3.3 years and from 0% to 85.7% in the knee at a mean follow-up of 7.1 years. Among other outcomes, occurrence or progression of arthritis, osteolysis, graft instability, fractures, nonunions, edema and infections were recorded. Overall, the lack of well-designed randomized and controlled clinical trials, of immunological determination of anti-donor antibody development, and of local and systemic biomarkers to detect reaction to the graft seems to be the major drawback. Improvements in these limiting factors might be desirable in order to enhance the clinical scenario of a well-established and successful procedure to give, especially for young patients, a real regeneration of the joint. abstract_id: PUBMED:12152982 Fresh osteochondral allografts. Fresh osteochondral allografts have a long clinical history and have demonstrated use in a wide spectrum of knee joint pathology.
The allografting procedure takes advantage of the unique characteristics of osseous and chondral tissue components. Transplanted bone is readily incorporated by the host while the articular cartilage survives transplantation. Allografts have demonstrated >75% clinical success in the treatment of focal femoral condyle lesions due to trauma, chondral injury, osteochondral trauma, osteochondritis dissecans, avascular necrosis, and post-traumatic reconstruction. Fresh allografts also are finding an increasing role in the salvage of difficult cases that have failed other cartilage procedures, and particularly in individuals who are believed to be too young and active for joint arthroplasty. Further refinements in the technical aspects of the allografting procedure, as well as further understanding of the biology of osteochondral allografts, should lead to improved clinical outcomes. abstract_id: PUBMED:30523367 Treatment of unstable knee osteochondritis dissecans in the young adult: results and limitations of surgical strategies-The advantages of allografts to address an osteochondral challenge. Joint surface incongruence resulting from osteochondritis dissecans (OCD) alters the articular physiologic congruence, increasing the contact stress on adjacent joint surfaces and accelerating wear and the cascade of joint degeneration. Accordingly, the restoration of articular surface integrity is of major importance, especially in young adults where, in lesions left untreated or following simple fragment excision, early osteoarthritis can be anticipated. Therefore, the treatment algorithm in unstable knee OCD of the young adult foresees surgical options to restore the articular surface. Several procedures have been proposed, including refixation of the detached fragment, bone marrow stimulation, osteochondral autograft implantation, fresh osteochondral allograft transplantation, and cell-based or cell-free regenerative techniques. The aim of this review was to summarize the evidence for these surgical strategies, reporting their results and limitations. The overall evidence documents positive results for each of the assorted surgical procedures applied to treat unstable OCD, thus indicating support for their selected use to treat osteochondral defects, paying particular attention to their specific indications for the lesion characteristics. The fixation of a good-quality fragment should be pursued as a first option, while unfixable small lesions may benefit from autografts. For large lesions, available cell-based or cell-free osteochondral scaffolds are a feasible solution but with limitations in terms of regenerated tissue quality. In this light, fresh allografts may offer articular surface restoration with viable physiologic osteochondral tissue providing a predictably successful outcome, and therefore they may currently represent the most suitable option to treat unstable irreparable OCD lesions in young adults. LEVEL OF EVIDENCE: V. abstract_id: PUBMED:27496906 Osteochondral Allograft Transplantation in Patients With Osteochondritis Dissecans of the Knee. Background: Osteochondritis dissecans (OCD) of the knee can be difficult to treat. Cartilage restoration techniques are often indicated when the lesion or fragment cannot be salvaged and the patient remains symptomatic. Fresh osteochondral allograft (OCA) transplantation can restore both bone and cartilage defects characteristic of OCD. Hypothesis: We hypothesized that osteochondral allografting is a successful method for treating OCD of the knee.
Study Design: Case series; Level of evidence, 4. Methods: This study comprised 135 patients (149 knees) who underwent OCA for OCD of the knee (type III or IV) between 1997 and 2013 and had a minimum follow-up of 2 years. The median age was 21 years (range, 12-55 years) and 75.8% of the patients were male. The mean allograft size was 7.3 cm² (range, 2.2-25 cm²). Evaluation included the following: frequency and type of reoperations; modified Merle d'Aubigné and Postel (18-point) scale; International Knee Documentation Committee (IKDC) pain, function, and total scores; and Knee Society function (KS-F) and knee (KS-K) scores. Clinical failure was defined as revision OCA or conversion to arthroplasty. Graft survivorship was determined. Results: The median follow-up time was 6.3 years (range, 1.9-16.8 years) and 62% of participants had more than 5-year follow-up. Thirty-four of 149 knees (23%) had reoperations, of which 12 (8%) were classified as allograft failures (7 OCA revisions, 3 unicompartmental knee arthroplasties, and 2 total knee arthroplasties). OCA survivorship was 95% at 5 years and 93% at 10 years. Of the 137 knees whose grafts were still in situ at the latest follow-up, the mean modified Merle d'Aubigné and Postel (18-point) score was 16.8; IKDC pain, function, and total scores were 2.1, 8.1, and 82.3; and KS-F and KS-K scores were 95.7 and 94.3, respectively. The majority of patients (95%) reported being satisfied with the outcome of their procedure. Conclusion: OCA transplantation was an effective treatment for OCD of the knee, with a low rate of graft failure, significant improvement in pain and function scores, and high patient satisfaction. abstract_id: PUBMED:25381401 Cartilage restoration of the hip using fresh osteochondral allograft: resurfacing the potholes. Cartilage defects of the hip cause significant pain and may lead to arthritic changes that necessitate hip replacement. We propose the use of fresh osteochondral allografts as an option for the treatment of such defects in young patients. Here we present the results of fresh osteochondral allografts for cartilage defects in 17 patients in a prospective study. The underlying diagnoses for the cartilage defects were osteochondritis dissecans in eight and avascular necrosis in six. Two had Legg-Calvé-Perthes and one a femoral head fracture. Pre-operatively, an MRI was used to determine the size of the cartilage defect and the femoral head diameter. All patients underwent surgical hip dislocation with a trochanteric slide osteotomy for placement of the allograft. The mean age at surgery was 25.9 years (17 to 44) and mean follow-up was 41.6 months (3 to 74). The mean Harris hip score was significantly better after surgery (p<0.01) and 13 patients had fair to good outcomes. One patient required a repeat allograft, one patient underwent hip replacement and two patients are awaiting hip replacement. Fresh osteochondral allograft is a reasonable treatment option for hip cartilage defects in young patients. abstract_id: PUBMED:3903605 Osteochondral allografts in the treatment of osteonecrosis of the knee. In summary, patients with spontaneous osteonecrosis of the knee requiring surgery were elderly and generally had late stage IV disease. They seemed to do better with osteotomy and debridement than with osteochondral allograft replacement because they could not tolerate restricted weight bearing.
Patients with steroid-induced osteonecrosis did well initially after allograft replacement (6 to 18 months), especially in experiencing pain relief. However, because of the continuous use of high doses of steroids, revascularization of the allografts was poor, resulting in graft subsidence. Patients have better long-term results following osteotomy and debridement. Patients with traumatic osteonecrosis and osteochondritis dissecans had the best results following osteochondral allograft replacements. In conclusion, based on our series and others, our current surgical approach in the management of osteonecrosis of the knee is as follows: 1. In patients with spontaneous osteonecrosis with asymptomatic small lesions, nonsurgical treatment is recommended. For an asymptomatic or symptomatic large lesion with associated angular deformity, the active patient should have a tibial osteotomy for stages I and II and tibial osteotomy and debridement for stages III and IV. Less active patients with symptomatic stage III or IV disease should have unicompartmental or total knee prosthetic arthroplasty. 2. For steroid-induced osteonecrosis, osteochondral allografts are not recommended. If the patient's systemic disease has a limited prognosis, or if the patient has multijoint involvement, total knee or unicompartmental arthroplasty is warranted. If the patient has a good prognosis and is active, debridement with or without realignment should be performed. 3. For traumatic osteonecrosis in the younger patient or for osteochondritis dissecans, fresh osteochondral allograft replacement is recommended. High tibial osteotomy in combination with allograft replacement should also be done if there is associated malalignment. The realignment should be done prior to or simultaneously with the allograft (providing the osteotomy is done on the side of the joint opposite the allograft). abstract_id: PUBMED:24990975 Distal Femoral Fresh Osteochondral Allografts: Follow-up at a Mean of Twenty-two Years. Background: Osteochondral defects of the knee in young, active patients represent a challenge to the orthopaedic surgeon. Our study examined long-term outcomes of the use of fresh allograft for posttraumatic osteochondral and osteochondritis dissecans defects in the distal aspect of the femur. Methods: We reviewed the cases of sixty-three patients who underwent osteochondral allograft transplantation to the distal aspect of the femur between 1972 and 1995. Five patients who resided out of the country were lost to follow-up. Indications for the allograft procedure were an age of less than fifty years and a unipolar posttraumatic osteochondral or osteochondritis dissecans defect in the distal aspect of the femur that was larger than 3 cm in diameter and 1 cm in depth. Results: Fifty-eight patients ages eleven to forty-eight years at the time of surgery (mean age, twenty-eight years) were followed for a mean of 21.8 years (range, fifteen to thirty-two years). Thirteen of the fifty-eight cases required further surgery; three underwent graft removal, nine were converted to total knee arthroplasty, and one underwent multiple debridements followed by above-the-knee amputation. Three patients died during the study due to unrelated causes. A Kaplan-Meier analysis of graft survival showed rates of 91%, 84%, 69%, and 59% at ten, fifteen, twenty, and twenty-five years, respectively. 
Patients with surviving grafts had good function, with a mean modified Hospital for Special Surgery (HSS) score of 86 at fifteen years or more following the allograft transplant surgery. Late osteoarthritic degeneration on radiographs was associated with lower HSS scores and poorer clinical outcomes. Conclusions: Fresh osteochondral allograft was found to provide a long-term solution for large articular cartilage defects in the distal aspect of the femur in young, active patients. Level Of Evidence: Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence. abstract_id: PUBMED:33587261 Surgical Management of Osteochondral Defects of the Knee: An Educational Review. Purpose Of Review: Numerous surgical techniques are available to treat osteochondral defects of the knee. The aim of this review is to analyse these procedures, including their methodology, outcomes and limitations, to create a treatment algorithm for optimal management. Recent Findings: Osteochondral defects of the knee significantly alter the biomechanics of the joint. This can cause symptomatic and functional impairment as well as considerable risk of progressive joint degeneration. Surgical interventions aim to restore a congruent, durable joint surface providing symptomatic relief and reducing the risk of early arthritic changes. These methods include fixation, chondroplasty, microfracture, autologous matrix-induced chondrogenesis, autograft transplants, allograft transplants and autologous chondrocyte implantation. There is currently much debate as to which of these methods provides optimal treatment of osteochondral defects. The overall evidence supports the use of each technique depending on the individual characteristics of the lesion. New technologies provide exciting prospects; however, long-term outcomes for these are not yet available. Answer: Yes, fresh osteochondral allografts can restore function in juveniles with osteochondritis dissecans (OCD) of the knee. A retrospective review of 11 children with OCD of the knee treated with fresh stored osteochondral allograft between 2004 and 2009 showed that all patients returned to activities of daily living without difficulties at 6 months and to full sports activities between 9 and 12 months after surgery. The modified Merle D'Aubigné-Postel score improved from an average of 12.7 preoperatively to 16.3 at 24 months postoperatively, suggesting that fresh osteochondral allografts restored short-term function in patients with juvenile OCD who failed standard treatments (PUBMED:22972653). Additionally, a study on a larger group of patients younger than 18 years who underwent fresh osteochondral allograft transplantation for treatment of chondral and osteochondral lesions reported that 88% of the knees whose grafts were in situ at the latest follow-up were rated good/excellent. The International Knee Documentation Committee scores improved from 42 preoperatively to 75 postoperatively, and the Knee Society function score improved from 69 to 89, indicating that osteochondral allograft transplantation is a useful treatment option in pediatric and adolescent patients (PUBMED:24414553). Furthermore, a case series of 135 patients (149 knees) who underwent osteochondral allograft (OCA) for OCD of the knee reported a low rate of graft failure, significant improvement in pain and function scores, and high patient satisfaction. OCA transplantation was found to be an effective treatment for OCD of the knee (PUBMED:27496906). 
In summary, the evidence from these studies suggests that fresh osteochondral allografts are effective in restoring function in juveniles with OCD of the knee who have failed conventional therapy.
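Several of these studies report graft survivorship as Kaplan-Meier estimates (e.g., 95%/93% at 5/10 years in PUBMED:27496906, and 91%/84%/69%/59% at 10/15/20/25 years in PUBMED:24990975). A minimal sketch of the product-limit estimator behind such figures, using synthetic failure and censoring times rather than the published patient data:

```python
# Minimal Kaplan-Meier (product-limit) estimator of graft survival.
# The event times below are synthetic placeholders, not the study data.
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up in years; events: 1 = graft failure, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    survival, curve = 1.0, []
    for t in np.unique(times):
        at_this_time = times == t
        d = events[at_this_time].sum()          # failures at time t
        if d > 0:
            survival *= 1.0 - d / n_at_risk     # product-limit step
        curve.append((t, survival))
        n_at_risk -= at_this_time.sum()         # failures and censored leave the risk set
    return curve

# Synthetic example: failure times (years) and censored follow-ups
times  = [2.7, 5.0, 8.6, 10.0, 12.0, 15.0, 20.0, 22.0, 25.0, 30.0]
events = [1,   0,   1,   0,    1,    0,    1,    0,    1,    0]
for t, s in kaplan_meier(times, events):
    print(f"S({t:>4.1f} y) = {s:.2f}")
```

Censored follow-ups (events = 0) leave the risk set without stepping the curve down, which is how long-term survivorship can be estimated even though few patients reach 25 years of follow-up.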
Instruction: Periodic leg movements in RLS patients as compared to controls: Are there differences beyond the PLM index? Abstracts: abstract_id: PUBMED:18753004 Periodic leg movements in RLS patients as compared to controls: Are there differences beyond the PLM index? Objective: To characterize periodic leg movements (PLM) and their association with sleep disturbances in drug-free patients with restless legs syndrome (RLS) and healthy subjects without sleep complaints. Methods: Polysomnographic recordings of 95 patients with idiopathic RLS and 31 age-matched controls were compared, and correlation analysis between sleep efficiency and PLM variables was performed. All patients and controls were free of medication for 10 days prior to polysomnography. Results: PLM measures revealed a significantly longer mean duration of single PLM during wakefulness and non-rapid eye movement (NREM) sleep in RLS patients as compared to controls. PLM indices were higher in RLS patients than in controls during all sleep stages, but not during wakefulness and slow wave sleep. A significantly higher number of PLM sequences was found in RLS patients than in controls. In RLS patients, decreased sleep efficiency was associated with a higher number and a shorter duration of PLM sequences. Conclusions: The mean duration of single PLM might be an appropriate parameter to discriminate between healthy subjects with PLM and patients with RLS. High numbers of PLM sequences of short duration might be an indicator for the decreased sleep quality in RLS patients. abstract_id: PUBMED:28231845 Validation of a leg movements count and periodic leg movements analysis in a custom polysomnography system. Background: Periodic leg movements (PLM) during sleep (PLMS) are considered strongly related to restless legs syndrome (RLS), and are associated with polymorphisms in RLS risk genes. Various software packages for automatic analysis of PLMS are available, but only a few of them have been validated. The aim of this study was to validate a leg movements count and analysis integrated in a commercially available polysomnography (PSG) system against manual scoring. Methods: Twenty RLS patients with a PLMS index > 20/h and 20 controls with a PLMS index < 5/h were included. Manual and computerized scoring of leg movements (LM) and PLM was performed according to the standard American Academy of Sleep Medicine (AASM) criteria. LM and PLM indices during sleep and wakefulness, the rate of PLMS associated with respiratory events, intermovement interval and periodicity indices were manually and automatically scored. Results: The correlation between manual and computerized scoring was high for all investigated parameters (Spearman correlation coefficients 0.751-0.996, p < 0.001; intraclass correlation coefficients 0.775-0.999, p < 0.001). Bland-Altman plots showed high agreement between manual and automatic analysis. Conclusions: This study validated an automatic LM count and PLM analysis against the gold-standard manual scoring according to AASM criteria. The data demonstrate that the software used in this study has an outstanding performance for computerized LM and PLM scoring, and LM and PLM indices generated with this software can be reliably integrated in the routine PSG report. This automatic analysis is also an excellent tool for research purposes. abstract_id: PUBMED:32583272 Effects of new PLM scoring rules on PLM rate in relation to sleep and resting wake for RLS and healthy controls.
Purpose: This study evaluates the differences in periodic leg movement (PLM) rates for Restless Legs Syndrome (RLS) and healthy controls when using the updated PLM scoring criteria developed by the IRLSSG in 2016 versus the prior PLM scoring criteria developed by the IRLSSG in 2006. Four major problems with the prior standards had been objectively identified, i.e. the minimum inter-movement interval should be 10 not 5 s, non-PLM leg movements should end any preceding PLM sequence, a leg movement (LM) can be any length > 0.5 s, and a PLM should be a persisting movement, not a couple or a series of closely spaced, very brief events. Each of these led to including, erroneously, various random leg movements as PLM. Correcting these problems was expected to increase specificity, reducing the number of PLM detected, particularly in situations producing relatively more random leg movements, e.g. wake vs. sleep and controls without PLMD vs. RLS patients. Methods: This study evaluated the putative benefits of the updated 2016 scoring criteria. The LMs from 42 RLS patients and 30 age- and gender-matched controls were scored for PLMS and PLMW from standard all-night PSG recordings using both the 2006 and 2016 WASM criteria. Results/conclusion: The results confirmed that the 2016 criteria, compared to the 2006 criteria, generally decreased the PLM rates, with particularly large decreases for the conditions with more random non-PLM events, e.g. wake times and normal healthy controls. This supported the view that the new criteria succeeded in increasing the specificity of PLM detection. Moreover, the changes in PLM rates were generally small for the conditions with relatively few random LM, e.g. RLS and sleep. Thus the bulk of existing PLMS research does not require reconsideration of results, with the possible exception of special situations with relatively more random leg movements than periodic leg movements, e.g. wake, healthy normals and children. abstract_id: PUBMED:20948585 Assessing periodicity of periodic leg movements during sleep. Background: Periodic leg movements (PLM) during sleep consist of involuntary periodic movements of the lower extremities. The debated functional relevance of PLM during sleep is based on correlation of clinical parameters with the PLM index (PLMI). However, periodicity in movements may not be reflected best by the PLMI. Here, an approach novel to the field of sleep research is used to reveal intrinsic periodicity in inter-movement intervals (IMI) in patients with PLM. Methods: Three patient groups of 10 patients showing PLM with OSA (group 1), PLM without OSA or RLS (group 2) and PLM with RLS (group 3) are considered. Applying the "unfolding" procedure, a method developed in statistical physics, enhances or even reveals intrinsic periodicity of PLM. The degree of periodicity of PLM is assessed by fitting one-parameter distributions to the unfolded IMI distributions. Finally, it is investigated whether the shape of the IMI distributions allows patients to be separated into different groups. Results: Despite applying the unfolding procedure, periodicity is neither homogeneous within nor considerably different between the three clinically defined groups. Data-driven clustering reveals more homogeneous and better separated clusters. However, they consist of patients with heterogeneous demographic data and comorbidities, including RLS and OSA. Conclusions: The unfolding procedure may be necessary to enhance or reveal periodicity.
Thus this method is proposed as a pre-processing step before analyzing PLM statistically. Data-driven clustering yields much more reasonable results when applied to the unfolded IMI distributions than to the original data. Despite this effort, no correlation between the degree of periodicity and demographic data or comorbidities is found. However, there are indications that the nature of the periodicity might be determined by long-range interactions between LM of patients with PLM and OSA. abstract_id: PUBMED:8235243 Periodic leg movements (PLM): their relationship to sleep stages. We investigated the characteristics of periodic leg movements (PLM) during nocturnal sleep and wakefulness in 13 drug-free patients presenting with the restless legs syndrome (RLS, n = 9) or with isolated PLM (n = 4). Eight-hour polygraphic sleep recordings included the electromyogram (EMG) of both tibialis anterior muscles. Scoring of leg movements was done according to established criteria for periodic movements in sleep, but movements occurring during episodes of wakefulness were scored as well. Twelve out of 13 patients had PLM during wakefulness, including three subjects not affected by RLS. The frequency of periodic movements in sleep (PMS) per hour of total sleep time was significantly lower than the frequency of PLM (including movements during wakefulness) per hour of polygraphic recording. Movement indices based on PMS alone underestimated the relative frequency of PLM, particularly in patients with high amounts of wakefulness (> 20%). All features of PLM clearly differed between sleep stages. Relative frequency of movements, their duration and their arousing effect decreased along the non-rapid eye movement (NREM) sleep stages, whereas the intermovement interval increased. During rapid eye movement (REM) sleep the duration of movements was shortest and the intermovement interval was longest. The results presented suggest that the processes underlying PLM are most active at the transition from wakefulness to sleep and considerably attenuated during deep NREM sleep and even more during REM sleep. We suggest including movements during wakefulness in routine PLM scoring to get a more complete picture of the disturbance. abstract_id: PUBMED:31609696 Asymmetry of Periodic Leg Movements in Sleep (PLMS) in Parkinson's Disease. Background: Periodic limb movements in sleep (PLMS) are repetitive movements, usually of the legs, strongly associated with restless legs syndrome (RLS), which appear more frequently in males, at older age, and with other sleep disturbances, such as sleep-disordered breathing (SDB). Patients with Parkinson's disease (PD) suffer from various sleep disturbances including REM sleep behavior disorder, RLS and PLMS. Although a dopaminergic pathophysiology of PLMS is discussed, no systematic data on PLMS side-to-side distribution in PD and its correlation with asymmetry of motor symptoms are available. Objective: This study aimed at elucidating PLMS asymmetry in correlation to that of motor symptoms in PD compared to SDB and RLS. Methods: Cross-sectional, retrospective analysis of two polysomnography (PSG) recordings per patient, scoring PLMS separately for both legs. Results: Of 105 patients (44 PD, 44 age- and sex-matched SDB and 17 RLS patients), PLMS measures (number of PLM, PLM-Index, PLM-arousal index) showed significant side-to-side differences in all disease entities in both PSGs (P < 0.001; Wilcoxon rank test).
PLM-Index asymmetry (PLM-I difference of >5/h between both sides) was observed less frequently in PD (34% of patients) compared to RLS (77%, P < 0.05) and SDB (59%, P < 0.05; χ² test). In asymmetric PD patients, the predominant side of PLMS was more stable than in SDB and RLS when comparing the two PSGs, but we did not detect an agreement between the predominant PLMS side and that of motor symptoms in PD patients. Conclusions: Only a minority of PD patients shows asymmetric PLMS distribution, with relatively high night-to-night stability but no correlation with motor symptom asymmetry. abstract_id: PUBMED:29731671 The level of observed physical movement accompanying periodic limb movements measured in a clinical sleep population. Study Objectives: Periodic limb movements (PLMs) are routinely measured during polysomnogram (PSG) testing. During the early years of sleep testing, physical movements were identified and, over time, consensus ultimately led to the current definitions of movement disorders including criteria used to measure PLMs on PSG testing. There has been considerable debate about the clinical importance of the PLMs measured during PSG testing. Over the last decade, the author has observed significant variations in the actual visible physical movements observed with a PLM event. This report is the result of work to quantify the amount of movement and the frequency of movements observed among individuals who have PLMs. Methods/principal Findings: Consecutive PSGs performed in a suburban sleep center for an initial diagnosis of a sleep disorder were retrospectively reviewed to identify those with measured PLMs. Of 646 studies on patients >18 years, 460 met criteria for inclusion. Visual assessment of movements was carried out on all of those with PLM events measured using American Academy of Sleep Medicine guidelines. The movements were quantified based on the number of extremities observed to move. PLMs were observed in 237 of the 460 studies that met inclusion criteria (52%). As expected, the PLMs occurred more frequently in older individuals. PLMs occurred with equal frequency in both sexes. Apnea occurred with equal frequency in those with and without observed physical movements. Of those with PLMs, 62% (147) demonstrated observable physical movements. Significant movements involving three or four extremities occurred in 16% of individuals with PLMs. No physical movements were observed in 38%. Conclusion: In this uncontrolled, nonrandom, observational series, visual physical movements with a PLM event identify a unique subset of individuals with PLMs. The presence of any visual movements or more pronounced visual movements involving multiple extremities may represent markers for PLM disorder, for clinically significant PLMs with other disorders, or for other clinical conditions or physiologic variables. abstract_id: PUBMED:26210395 MATPLM1, A MATLAB script for scoring of periodic limb movements: preliminary validation with visual scoring. Background And Purpose: A Matrix Laboratory (MATLAB) script (MATPLM1) was developed to rigorously apply World Association of Sleep Medicine (WASM) scoring criteria for periodic limb movements in sleep (PLMS) from bilateral electromyographic (EMG) leg recordings. This study compares MATPLM1 with both standard technician and expert detailed visual PLMS scoring.
Methods And Subjects: Validation was based on a 'macro' level by agreement for PLMS/h during a night recording and on a 'micro' level by agreement for the detection of each PLMS from a stratified random sample for each subject. Data available for these analyses were from 15 restless legs syndrome (RLS) patients (age: 61.5 ± 8.5 years, 60% female) and nine control subjects (age: 61.4 ± 7.1 years, 67% female) participating in another study. Results: In the 'micro' analysis, MATPLM1 and the visual detection of PLMS events agreed 87.7% for technician scoring and 94.4% for expert scoring. The technician and MATPLM1 scoring disagreements were checked for 36 randomly selected events; 97% involved clear technician-scoring error. In the 'macro' analysis, MATPLM1 rates of PLMS/h correlated highly with visual scoring by the technician (r² = 0.97) and the expert scorer (r² = 0.99), but the technician scoring was consistently less than MATPLM1: median (quartiles) difference: 10 (5, 23). There was little disagreement with the expert scorer [median (quartiles) difference: -0.3 (-2.4, 0.3)]. Conclusions: MATPLM1 produces reliable scoring of PLMS that matches expert scoring. The standard visual scoring without careful measuring of events tends to significantly underscore PLMS. These preliminary results support the use of MATPLM1 as a preferred method of scoring PLMS for EMG recordings that are of a good quality and without significant sleep-disordered breathing events. abstract_id: PUBMED:34992826 Periodic limb movements during sleep: a narrative review. Objective: Using narrative review techniques, this paper evaluates the evidence linking separable underlying patho-mechanisms of periodic limb movements (PLMs) to separable PLM motor patterns and phenotypes, in order to elucidate potential new treatment modalities. Background: Periodic limb movement disorder (PLMD) is estimated to occur in 5-8% of the paediatric population and 4-11% of the general adult population. Due to significant sleep fragmentation, PLMD can lead to functional impairment, including hyperactivity and delayed language development in children, and poor concentration and work performance in adults. Longitudinal data demonstrate that those with PLMD are at greater risk of depression and anxiety, and a 4-fold greater risk of developing dementia. PLMD has been extensively studied over the past two decades, and several key insights into the genetic, pathophysiological, and neural correlates have been proposed. Amongst these proposals is the concept of separable PLM phenotypes, proposed on the basis of nocturnal features such as the ratio of limb movements and distribution throughout the night. PLM phenotype and presentation, however, vary significantly depending on the scoring utilized and the nocturnal features examined, across age, and co-morbid clinical conditions. Furthermore, associations of these phenotypes with major neurologic and psychiatric disorders remain controversial. Methods: In order to elucidate potential divergent biological pathways that may help clarify important new treatment modalities, this paper utilizes narrative review and evaluates the evidence linking PLM motor patterns and phenotypes with hypothesised underlying patho-mechanisms. Distinctive underlying patho-mechanisms include: a pure motor mechanism originating in the spinal cord, iron deficiency, dopamine system dysfunction, thalamic glutamatergic hyperactivity, and a more cortical-subcortical interplay.
In support of the latter hypothesis, PLM rhythmicity appears tightly linked to the microarchitecture of sleep, not dissimilarly to the apnoeic/hypopneic events seen in obstructive sleep apnea (OSA). Conclusions: This review closes with a proposal for greater investigation into the identification of potential, divergent biological pathways. To do so would require prospective, multimodal imaging clinical studies which may delineate differential responses to treatment in restless legs syndrome (RLS) without PLMS and PLMS without RLS. This could pave the way toward important new treatment modalities. abstract_id: PUBMED:24587572 Respiratory-related leg movements and their relationship with periodic leg movements during sleep. Study Objectives: To describe the time structure of leg movements (LM) in obstructive sleep apnea (OSA) syndrome, in order to advance understanding of their clinical significance. Location: Sleep Research Centre, Oasi Institute (IRCCS), Troina, Italy. Setting: Sleep laboratory. Patients: Eighty-four patients (16 females, 68 males, mean age 55.1 y, range 29-74 y). Methods: Respiratory-related leg movements (RRLM) and those unrelated to respiratory events (NRLM) were examined within diagnostic polysomnograms alone and together for their distributions within the sleep period and for their periodicity. Measurements And Results: Patients with OSA and RRLM exhibited more periodic leg movements in sleep (PLMS), particularly in NREM sleep. A gradual decrease in number of NRLM across the sleep period was observed in patients with RRLM. This pattern was less clear for RRLM. Frequency histograms of intermovement intervals of all LMs in patients with RRLM showed a prominent first peak at 4 sec, and a second peak at approximately 24 sec coincident with that of PLMS occurring in the absence of OSA. A third peak of lowest amplitude was the broadest with a maximum at approximately 42 sec. In patients lacking RRLM, NRLM were evident with a single peak at 2-4 sec. A stepwise linear regression analysis showed that, after controlling for a diagnosis of restless legs syndrome and apnea-hypopnea index, PLMS remained significantly associated with RRLM. Conclusion: The time structure of leg movements occurring in conjunction with respiratory events exhibit features of periodic leg movements in sleep occurring alone, only with a different and longer period. This brings into question the validity, both biologic and clinical, of scoring conventions with their a priori exclusion from consideration as periodic leg movements in sleep. Answer: Yes, there are differences in periodic leg movements (PLM) in restless legs syndrome (RLS) patients as compared to controls beyond the PLM index. Studies have found that RLS patients have a significantly longer mean duration of single PLM during wakefulness and non-rapid eye movement (NREM) sleep compared to controls. Additionally, RLS patients have a higher number of PLM sequences, and their sleep efficiency is negatively associated with both a higher number and a shorter duration of PLM sequences. This suggests that the mean duration of single PLM and the high numbers of PLM sequences of short duration might be indicators for decreased sleep quality in RLS patients (PUBMED:18753004). 
Furthermore, the updated PLM scoring criteria developed by the International Restless Legs Syndrome Study Group (IRLSSG) in 2016, as opposed to the prior criteria from 2006, generally decreased the PLM rates, with particularly large decreases for conditions producing more random non-PLM events, such as wake times and in normal healthy controls. This supports the view that the new criteria succeeded in increasing the specificity of PLM detection (PUBMED:32583272). Additionally, the characteristics of PLM differ between sleep stages, with the frequency of movements, their duration, and their arousing effect decreasing along the NREM sleep stages, whereas the intermovement interval increases. During REM sleep, the duration of movements is shortest, and the intermovement interval is longest (PUBMED:8235243). Moreover, in Parkinson's disease (PD), a study found that PLMS measures showed significant side-to-side differences in all disease entities, but the asymmetry of PLM distribution in PD patients showed no correlation with motor symptom asymmetry (PUBMED:31609696). Lastly, the clinical importance of PLMs measured during polysomnogram (PSG) testing has been debated, and significant variations in the actual visible physical movements observed with a PLM event have been noted. A study quantified the amount of movement and the frequency of movements observed among individuals who have PLMs, finding that 62% demonstrated observable physical movements, with significant movements involving three or four extremities occurring in 16% of individuals with PLMs (PUBMED:29731671).
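An aside on the scoring mechanics discussed above: MATPLM1-style automated PLMS scoring ultimately reduces to interval logic over detected leg-movement onsets. The sketch below is a minimal illustration of that idea in Python, assuming commonly cited WASM/AASM-style parameters (onset-to-onset intervals of 5-90 s, at least 4 consecutive movements per periodic series); it is not the MATPLM1 algorithm itself, whose candidate-movement detection from EMG amplitude is not described in the abstract.

def plm_series(onsets, min_ivl=5.0, max_ivl=90.0, min_run=4):
    """Group leg-movement onset times (in seconds) into periodic series.

    Assumed WASM/AASM-style rule: consecutive onset-to-onset intervals
    must lie within [min_ivl, max_ivl] seconds, and a series must contain
    at least min_run movements. Illustrative only; real scoring also
    handles EMG amplitude criteria, bilateral merging, and respiratory
    event exclusion, none of which are modeled here.
    """
    series, current = [], onsets[:1]
    for prev, nxt in zip(onsets, onsets[1:]):
        if min_ivl <= nxt - prev <= max_ivl:
            current.append(nxt)
        else:
            if len(current) >= min_run:
                series.append(current)
            current = [nxt]
    if len(current) >= min_run:
        series.append(current)
    return series

# Five movements about 24 s apart form one periodic series; the isolated
# movement at 400 s is not periodic and is dropped.
onsets = [10.0, 34.0, 58.0, 83.0, 107.0, 400.0]
runs = plm_series(onsets)
plms = sum(len(r) for r in runs)
print(runs)        # [[10.0, 34.0, 58.0, 83.0, 107.0]]
print(plms / 8.0)  # PLMS/h over an assumed 8-h recording: 0.625

The PLMS/h index compared between MATPLM1 and the visual scorers above is then simply this count divided by total sleep time in hours.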
Instruction: Are there differences between right and left colectomies when performed by laparoscopy? Abstracts: abstract_id: PUBMED:26139496 Are there differences between right and left colectomies when performed by laparoscopy? Background: There is an extended belief that the laparoscopic approach to left colectomy (LC) is technically more demanding and associated with more postoperative complications than that to right colectomy (RC). However, there is no consensus in the literature about whether the short-term outcomes of RC differ from those of LC. The aim of this paper was to compare the postoperative course of patients undergoing RC and LC. Methods: We retrospectively analyzed 1000 consecutive patients who underwent a laparoscopic RC or LC between 1998 and 2012. Factors analyzed were intraoperative complications, surgical time, postoperative complications, and length of stay. The two groups were divided into four subgroups (neoplasia, diverticular disease, polyps, and others). Results: LC was associated with more postoperative complications than RC and longer operative time both in the two main groups (postoperative complications 30 vs. 19%; operative time 139 vs. 118 min) and in the neoplasia subgroups (27 vs. 18%; 137 vs. 118 min). No differences between groups were found for rates of reintervention or death. Comparison between LC subgroups showed that the operative time was longer and the conversion rate was higher in the diverticular disease subgroup than in the neoplasia subgroup (155 vs. 137 min; 21 vs. 8%). Conclusions: In this large cohort of patients undergoing laparoscopic colectomy, LC carried a higher risk than RC of postoperative complications. These findings provide new data on the differences between the two surgeries. Our findings strengthen the notion that right and left colectomies have a different intraoperative and postoperative course and should be analyzed as two separate entities. abstract_id: PUBMED:35024933 Laparoscopic versus robotic-assisted, left-sided colectomies: intra- and postoperative outcomes of 683 patients. Background: Robotic-assisted colorectal surgery has gained increasing popularity in recent years. It appears advantageous over laparoscopic surgery in selected situations, especially in confined regions like a narrow male pelvis in rectal surgery. Whether robotic-assisted, left-sided colectomies can serve as safe training operations for less frequent, low anterior resections for rectal cancer is still under debate. Therefore, the aim of this study was to evaluate intra- and postoperative results of robotic-assisted laparoscopy (RAL) compared to laparoscopic (LSC) surgery in left-sided colectomies. Methods: Between June 2015 and December 2019, 683 patients undergoing minimally invasive left-sided colectomies in two Swiss, high-volume colorectal centers were included. Intra- and postoperative outcome parameters were collected and analyzed. Results: A total of 179 patients undergoing RAL and 504 patients undergoing LSC were analyzed. Baseline characteristics showed similar results. Intraoperative complications occurred in 0.6% of RAL and 2.0% of LSC patients (p = 0.193). Differences in postoperative complications graded Dindo ≥ 3 were not statistically significant (RAL 3.9% vs. LSC 6.3%, p = 0.227). Occurrence of anastomotic leakages showed no statistically significant difference [RAL n = 2 (1.1%), LSC n = 8 (1.6%), p = 0.653]. Length of hospital stay was similar in both groups.
Conversions to open surgery were significantly higher in the LSC group (6.2% vs. 1.7%, p = 0.018), while stoma formation was similar in both groups [RAL n = 1 (0.6%), LSC n = 5 (1.0%), p = 0.594]. Operative time was longer in the RAL group (300 vs. 210.0 min, p < 0.001). Conclusion: Robotic-assisted, left-sided colectomies are safe and feasible compared to laparoscopic resections. Intra- and postoperative complications are similar in both groups. Most notably, the rate of anastomotic leakages is similar. Compared to laparoscopic resections, the analyzed robotic-assisted resections have longer operative times but lower conversion rates. Further prospective studies are needed to confirm the safety of robotic-assisted, left-sided colectomies as training procedures for low anterior resections. abstract_id: PUBMED:18471731 Experience with 969 minimal access colectomies: the role of hand-assisted laparoscopy in expanding minimally invasive surgery for complex colectomies. Background: Laparoscopic-assisted colectomy (LAC) is not performed widely despite numerous patient benefits. LAC is difficult to perform and the learning curve is steep. Hand-assisted laparoscopy (HALS) permits placement of a hand into the abdomen to assist the dissection. Our aim was to analyze the impact of HALS on a minimal access colectomy (MAC) practice. Study Design: A prospectively maintained database was retrospectively reviewed for all LAC and HALS colectomies. HALS was introduced in November 2003, and analysis was performed for the ensuing 3-year period. Procedure types, conversion rates, operation duration, complications, and length of stay were determined. Results: During the study period, 969 MACs were performed (373 HALS, 596 LAC). Although HALS was used for all types of colorectal resection, it was most commonly used for complex colectomies (left-sided and total colectomies). HALS complex colectomies increased 44% from 2004 to 2005, and 24% from 2005 to 2006. Conversely, LAC complex colectomies decreased 29% (2004 to 2005) and then increased 27% (2005 to 2006). There were no demographic differences between patient groups. For complex colectomies, HALS substantially reduced operative time (mean ± SD; LAC, 258 ± 90 minutes; HALS, 242 ± 89 minutes; p = 0.037) and conversion rate (LAC, 15.3% versus HALS, 3.4%, p < 0.001), with the same complication rate (LAC, 13.6%; HALS, 15.4%; p = 0.629). The average length of stay (mean ± SD) was increased 1 day in the HALS group (LAC, 5.0 ± 3.0 days; HALS, 6.0 ± 3.4 days; p < 0.001), likely because of the higher proportion of total colectomies. Conclusions: HALS increased the number of MACs performed. More notably, HALS was used preferentially for complex colectomies. HALS effectively bridges the complexity divide between minimal access and open procedures. HALS may serve as a technology to expand MAC. abstract_id: PUBMED:36897505 First worldwide report on Hugo RAS™ surgical platform in right and left colectomy. The diffusion of robotic surgery is growing rapidly and steadily in different surgical specialties. Recently, novel robotic platforms have entered the market. To date, however, most of the reports on their clinical use have specifically focused on gynecological and urological surgery. In this study, we present the first three robotic-assisted colectomies performed with the new Hugo RAS system (Medtronic, Minneapolis, MN, USA). The surgical team had previous robotic experience and completed simulation training and an official 2-day cadaver laboratory session.
Operating room setting and trocar layout were planned and two full cadaver procedures were carried out (right and left colectomy). Onsite dry-run sessions were performed before tackling clinical cases. Three patients underwent robotic-assisted colectomies: one left colectomy, two right colectomies with complete mesocolic excision (CME) and high vascular ligation (HVL) at our Institution. Preoperative diagnosis was colonic adenocarcinoma in all cases. A description of the operative room setup, robotic arm configuration and docking angles is provided. Mean docking time and console time were 8 and 259 min, respectively. All the surgical steps were completed without critical surgical errors or high-priority alarms. Neither intraoperative complications nor conversions to open surgery were recorded. Postoperative courses were uneventful with a mean length of stay of 5 days. Further clinical data and experience are required for procedural standardization and potential integration of the system into robotic general surgery and colorectal programs. abstract_id: PUBMED:34581955 Minimally invasive right versus left colectomy for cancer: does robotic surgery mitigate differences in short-term outcomes? Studies comparing right (RC) and left colectomies (LC) show higher rates of ileus in RC and higher wound infection and anastomotic leak rates in LC. However, prior studies did not include robotic procedures. We compared short-term outcomes of laparoscopic and robotic RC and LC for cancer, with sub-analysis of robotic procedures. In a retrospective review of a prospective database, preoperative factors, intraoperative events, and 30-day postoperative outcomes were compared. Student's t tests and Chi-square tests were used for continuous and categorical variables, respectively. A logistic binomial regression was performed to assess whether type of surgery was associated with postoperative complications. Between January 2014 and August 2020, 115 patients underwent minimally invasive RC or LC for cancer. Sixty-eight RC [30 (44.1%) laparoscopic, 38 (55.9%) robotic] and 47 LC [13 (27.6%) laparoscopic, 34 (72.4%) robotic] cases were included. On univariate analysis, RC patients had significantly higher overall postoperative complications but no differences in rates of ileus/small bowel obstruction, wound infection, time to first flatus/bowel movement, length of hospital stay, and 30-day readmissions. On multivariate analysis, there was no significant difference in overall complications and laparoscopic surgery had a 2.5 times higher likelihood of complications than robotic surgery. In sub-analysis of robotic cases, there was no significant difference among all outcome variables. Previously reported outcome differences between laparoscopic RC and LC for cancer may be mitigated by robotic surgery. abstract_id: PUBMED:29121694 Right-Sided versus Left-Sided Colectomies for Cancer: Surgical Outcomes and Novel Considerations. Introduction: The aim of this study is to compare short-term outcomes of right versus left colectomies performed as a form of cancer treatment. Materials And Methods: This study includes 305 consecutive patients with adenocarcinoma treated by laparoscopic or open colectomy. Right colectomy has been compared with left colectomy. The study endpoints were the first flatus day, the first evacuation day, the first day of postoperative solid oral diet intake, and the postoperative hospital stay length. Results: There were 140 (45.9%) right colectomies and 165 (54.1%) left colectomies performed. 
The cut-off values for the considered (median) endpoints were three, five, four, and eight days, respectively. The first day of postoperative solid oral diet intake and the length of postoperative hospital stay are significantly associated with the type of resection. Conclusions: The colon cancer patients treated by right-sided colectomy resumed a solid oral diet later and had a longer postoperative hospital stay compared with the patients treated by left-sided colectomy. abstract_id: PUBMED:17008953 Telerobotic surgery for right and sigmoid colectomies: 30 consecutive cases. Background: This study aimed to evaluate the feasibility of using a robotic assistant for colon resections. This report describes the experience, advantages, and disadvantages of using the DaVinci system for a colectomy on the basis of 30 consecutive cases managed by a minimally invasive surgery fellowship-trained surgeon. Methods: Data were prospectively collected on 30 consecutive colectomies performed using the DaVinci system from September 2002 to March 2005. Results: A total of 13 sigmoid colectomies with splenic flexure mobilization and 17 right colectomies were performed for 14 men and 16 women. The preoperative diagnoses for the procedures were cancer (n = 5), diverticulitis (n = 8), polyps (n = 16), and carcinoid (n = 1). The right colectomies required 29.7 ± 6.7 min (range, 22-44 min) for the port setup, 177.1 ± 50.6 min (range, 103-306 min) for the robot, and 218.9 ± 44.6 min (range, 167-340 min) for the total case. The length of stay was 5.2 ± 5.8 days (range, 2-27 days). The robot portion was 80.9% of the total case time. The sigmoid colectomies required 30.1 ± 9.6 min (range, 15-50 min) for the port setup, 103.2 ± 29.4 min (range, 69-165 min) for the robot, and 225.2 ± 37.1 min (range, 147-283 min) for the total case. The hospital length of stay was 6.0 ± 7.3 days (range, 3-30 days). The robot portion was 45.8% of the total case time. Six complications occurred: left hip paresthesia, cecal injury, anastomotic leak, a patient slipping from the operating table after the robotic portion of the case, transverse colon injury, and a patient returning to the office with urinary retention. Two sigmoid colectomies were converted to laparotomy. The specific advantages and disadvantages of using the DaVinci system for colectomies are discussed. Conclusions: The 30 consecutive cases demonstrated the technical feasibility of using the DaVinci system for a colectomy. The longevity of the DaVinci system's use for colectomy will be determined by comparison of its cost and outcomes with those for conventional laparoscopic colectomy. abstract_id: PUBMED:27403288 Differences in Left Versus Right Ventricular Electrophysiological Properties in Cardiac Dysfunction and Arrhythmogenesis. A wide range of ion channels, transporters, signaling pathways and tissue structure at a microscopic and macroscopic scale regulate the electrophysiological activity of the heart. Each region of the heart has optimised these properties based on its specific role during the cardiac cycle, leading to well-established differences in electrophysiology, Ca²⁺ handling and tissue structure between atria and ventricles and between different layers of the ventricular wall. Similarly, the right ventricle (RV) and left ventricle (LV) have different embryological, structural, metabolic and electrophysiological features, but whether interventricular differences promote differential remodeling leading to arrhythmias is not well understood.
In this article, we will summarise the available data on intrinsic differences between LV and RV electrophysiology and indicate how these differences affect cardiac function. Furthermore, we will discuss the differential remodeling of both chambers in pathological conditions and its potential impact on arrhythmogenesis. abstract_id: PUBMED:34368911 Right colectomy with intracorporeal anastomosis for cancer: a prospective comparison between robotics and laparoscopy. Robotics in right colectomy is still under debate. Available studies compare different techniques of ileocolic anastomosis but results are inconclusive. Our study aimed to compare intraoperative outcomes and short-term postoperative results between robotic and standard laparoscopic right colectomies for cancer with intracorporeal anastomosis (ICA) fashioned with the same technique. All consecutive patients scheduled for laparoscopic or robotic right hemicolectomies with ICA for cancer in two hospitals, one of which is a tertiary care centre, were enrolled in our prospective observational study, from April 2018 to December 2019. ICA was fashioned with the same stapled hand-sewn technique. Continuous and categorical variables were analysed using the t test and chi-squared test as required. Statistical significance was set at p < 0.05. Forty patients underwent laparoscopic surgery, and 48 underwent robotic right colectomy and were included in the intention-to-treat analysis. Operative time was not statistically different between the two groups (robotic group 265.9 min vs laparoscopic group 254.2 min, p = 0.29). The robotic group had a significantly shorter time for stump oversewing (ileum reinforcement: robotic group 9.3 min vs laparoscopic group 14.2 min, p < 0.001; colon reinforcement: robotic 7.7 min, laparoscopy 13.9 min, p < 0.001) and for ICA (robotic 31.6 min vs laparoscopy 43.0 min, p < 0.001). One patient underwent extracorporeal anastomosis in the robotic group. The short-term outcomes were comparable between standard laparoscopic and robotic right colectomies with ICA. The limitation of the study is its small sample size and the fact that it was done in two institutions under the supervision of one person. Our data demonstrate that intracorporeal ileocolic anastomosis is safe, and faster and easier with robotic systems. Robotics can facilitate more challenging ICA in minimally invasive surgery. abstract_id: PUBMED:21296008 Hand-assisted laparoscopic right colectomy: how does it compare to conventional laparoscopy? Background: There is sufficient evidence to support the use of hand-assisted laparoscopy for sigmoid, total, and proctocolectomy. As a result, the hand-assisted technique has gained acceptance for these relatively complex types of colorectal surgery. For right colectomy, the use of conventional laparoscopy is supported by studies that have demonstrated its advantages over open surgery. Although the hand-assisted technique is also being used by some surgeons for right colectomy, there are few reported data to justify its use. With this deficiency in mind, we performed a study to compare the short-term outcomes of right colectomy performed by either the hand-assisted or conventional laparoscopic technique. Study Design: A single-center retrospective analysis was performed. Patients who underwent hand-assisted or conventional laparoscopic right colectomy were identified from a prospectively maintained departmental database.
Preoperative clinical information, details of the operation, lymph node count for cancer cases, postoperative morbidity, length of stay, and 30-day hospital readmissions were evaluated. Results: From 2006 to 2009, 43 hand-assisted and 84 conventional laparoscopic right colectomies were performed. Comparison of the hand-assisted and conventional laparoscopic groups revealed no differences in the preoperative clinical variables, including average body mass index (calculated as kg/m²; 28 and 29), percent obese (33% and 34%), earlier abdominal surgery (30% and 39%), operative time (122 and 126 minutes), lymph nodes evaluated for cancer cases (22 and 21), postoperative morbidity (30%), length of stay (5 days), or 30-day hospital readmission (16% and 11%). Conclusions: Short-term outcomes of hand-assisted and conventional laparoscopic right colectomy are similar. The decision to perform hand-assisted or conventional laparoscopic right colectomy should be based on the surgeons' preference and not on the perception that one technique is preferable to the other. Answer: Yes, there are differences between right and left colectomies when performed by laparoscopy. A study comparing the postoperative course of patients undergoing right colectomy (RC) and left colectomy (LC) found that LC was associated with more postoperative complications than RC and also had a longer operative time. Specifically, the study reported postoperative complications of 30% for LC versus 19% for RC, and an operative time of 139 minutes for LC versus 118 minutes for RC. These differences were also observed in the neoplasia subgroups, with LC having a higher complication rate (27% vs. 18%) and longer operative time (137 vs. 118 minutes) compared to RC (PUBMED:26139496). A minimal, hypothetical sketch of how such a difference in complication rates can be tested statistically follows below.
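The comparison above rests on a standard two-proportion test of the kind several of these abstracts cite ("Student's t tests and Chi-square tests were used"). The sketch below is only an illustration of that arithmetic: the per-group patient counts are hypothetical, since the abstract reports rates for 1000 patients but not the RC/LC split, so an even 500/500 split is assumed here.

from scipy.stats import chi2_contingency

# Hypothetical counts chosen to match the reported rates (30% of LC vs.
# 19% of RC patients with postoperative complications); the true group
# sizes are not given in the abstract, so an even split is assumed.
lc_events, lc_total = 150, 500   # 150/500 = 30%
rc_events, rc_total = 95, 500    # 95/500 = 19%

table = [
    [lc_events, lc_total - lc_events],
    [rc_events, rc_total - rc_events],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.5f}")
# Under these assumed counts, p falls well below 0.05, consistent with
# the reported significance of the LC vs. RC difference.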
Instruction: Does routine abdominal ultrasound enhance diagnostic accuracy in irritable bowel syndrome? Abstracts: abstract_id: PUBMED:8677992 Does routine abdominal ultrasound enhance diagnostic accuracy in irritable bowel syndrome? Objective: The current recommendation that irritable bowel syndrome (IBS) should be diagnosed positively using minimum investigation raises the possibility that some disorders, particularly gynecological, may be overlooked. Transabdominal ultrasound is now sufficiently sophisticated to allow assessment of all abdominal viscera for associated pathology, and this study was designed to evaluate its role in the diagnosis of IBS. Methods: An ultrasound scan was performed in 125 patients (100 females, 25 males) for whom a confident diagnosis of IBS had been made. Results: No serious intra-abdominal pathology was encountered, but 20% of females and 8% of males had an ultrasound abnormality. None of these abnormalities resulted in any additional therapeutic measures. A hepatobiliary abnormality was found in similar proportions of males and females (10 vs 8%). Gallstones were most common (5 vs 4%), but in no individual were they considered to entirely account for the presenting symptoms. Eight percent of the women were found to have a pelvic abnormality, but it was not regarded as serious in any of them. Conclusions: This study confirms that a positive approach to diagnosing IBS is a safe policy. Furthermore, routine ultrasound scanning in IBS is unnecessary and could be counter-productive by detecting many minor abnormalities, which can pose further therapeutic dilemmas. abstract_id: PUBMED:11507357 Abdominal pain and bowel dysfunction: diagnostic role of intestinal ultrasound. Background: Abdominal pain and irregular bowel habits are common among young people. Irritable bowel syndrome is frequent in the general population and has important economic and social costs. Inflammatory bowel diseases are chronic processes with an acute or indolent onset in young people. Differential clinical diagnosis between irritable bowel syndrome and inflammatory bowel disease can be difficult since symptoms and signs are often non-specific. Objective: To evaluate the role of intestinal ultrasound, a non-invasive, simple and cheap diagnostic tool, in the differentiation between organic and functional bowel diseases. Methods: Abdominal and intestinal ultrasound examinations were performed on 313 consecutive outpatients presenting with abdominal pain and irregular bowel habits lasting more than 3 months. These patients had no symptoms or signs indicative of organic disorders and no previous diagnosis of organic disease. An intestinal wall thickness of more than 7 mm was considered diagnostic for inflammatory bowel disease. Subsequently, we compared the ultrasound results with diagnoses obtained following the traditional criteria (radiological and endoscopic examinations). Results: Intestinal ultrasound for the diagnosis of inflammatory bowel disease showed 74% sensitivity, 98% specificity, a positive predictive value of 92% and a negative predictive value of 92%. Conclusions: In our experience, intestinal ultrasound seems important as a first diagnostic tool in young patients without clear symptoms or signs of organic diseases, and can be used as an indication that subsequent invasive tests are required. abstract_id: PUBMED:37696732 The burden of abdominal pain after bariatric surgery in terms of diagnostic testing: the OPERATE study. 
Background: Abdominal pain after bariatric surgery (BS) is frequently observed. Despite numerous diagnostic tests, the cause of abdominal pain is not always found. Objectives: To quantify type and number of diagnostic tests performed in patients with abdominal pain after BS and evaluate the burden and their yield in the diagnostic process. Setting: A bariatric center in the Netherlands. Methods: In this prospective study, we included patients who presented with abdominal pain after BS between December 1, 2020, and December 1, 2021. All diagnostic tests and reoperations performed during one episode of abdominal pain were scored using a standardized protocol. Results: A total of 441 patients were included; 401 (90.9%) were female, median time after BS was 37.0 months (IQR, 11.0-66.0) and mean percentage total weight loss was 31.41 (SD, 10.53). In total, 715 diagnostic tests were performed, of which 355 were abdominal CT scans, 155 were ultrasounds, and 106 were gastroscopies. These tests yielded a possible explanation for the pain in 40.2% of CT scans, 45.3% of ultrasounds, and 34.7% of gastroscopies. The diagnoses of internal herniation, ileus, and nephrolithiasis generally required only 1 diagnostic test, whereas patients with anterior cutaneous nerve entrapment syndrome, irritable bowel syndrome, and constipation required several tests before diagnosis. Even after several negative tests, a diagnosis was still found in the subsequent test: 86.7% of patients with 5 or more tests had a definitive diagnosis. Reoperations were performed in 37.2% of patients. Conclusion: The diagnostic burden in patients with abdominal pain following BS is high. The most frequently performed diagnostic test is an abdominal CT scan, yielding the highest number of diagnoses in these patients. abstract_id: PUBMED:11213411 Abdominal pain and bowel dysfunction: the diagnostic role of ultrasonography. Purpose: To evaluate the role of intestinal ultrasound (US) in differentiating organic from functional bowel disease. Material And Methods: We examined 313 consecutive outpatients with abdominal and intestinal US; all presented with abdominal pain and bowel dysfunction lasting more than 3 months, with no alarm symptoms or signs and no previous diagnosis of organic disease. Our population consisted of 191 women and 122 men, with average age at diagnosis of 36.5 years; 236 of these patients had irritable bowel syndrome, 61 Crohn's disease, and 16 ulcerative colitis. Intestinal wall thickness exceeding 7 mm was considered diagnostic for inflammatory bowel diseases. We compared US findings with the diagnosis made with the conventional diagnostic workup of radiological and endoscopic examinations. Results: Sensitivity of intestinal US for diagnosis of inflammatory bowel diseases was 74% and specificity 98% (respectively 84% and 98% for the diagnosis of Crohn's disease and 38% and 98% for ulcerative colitis), and positive and negative predictive values were both 92%; efficacy was also 92%. The likelihood ratio was 35 if US was positive for inflammatory bowel diseases and 0.26 if bowel wall thickness was less than 7 mm. Discussion: Intestinal US proved to be a valuable tool in diagnosing Crohn's disease. As a first-step examination, US can show the disease site and suggest further instrumental tests.
Conclusions: In our experience, intestinal US is an important diagnostic examination for the approach to young patients without symptoms or signs suggestive of organic diseases and can help avoid invasive instrumental examinations. abstract_id: PUBMED:16493360 Abdominal pain and bowel dysfunction: diagnostic flow-chart could be simplified? Background: The aim of the study was to evaluate the diagnostic role of the Kruis score and intestinal ultrasound in young patients with abdominal pain and bowel dysfunction. Methods: Prospective, double-blind, case-control study in 297 consecutive patients with Crohn's disease and irritable bowel syndrome (from 1993 to 1995). Inclusion Criteria: abdominal pain, bowel dysfunction without clear symptoms or signs of organic disease. The final diagnosis was obtained with the usual diagnostic criteria and confirmed by at least 2 years of follow-up. Intestinal ultrasound was considered diagnostic of Crohn's disease if bowel wall thickness was ≥ 7 mm; the Kruis score was diagnostic for irritable bowel syndrome if ≥ 44. Results: To diagnose Crohn's disease, intestinal ultrasound and the Kruis score respectively showed sensitivity of 84 and 97%, specificity of 98 and 50%, positive predictive value of 91 and 33%, negative predictive value of 96 and 98%, and efficacy of 95 and 60%. Both examinations suggested the same diagnosis in 55% of patients, with a correct diagnosis in 97% of these cases. Conclusions: Intestinal ultrasound and the Kruis score can be a good diagnostic combination in young patients with abdominal pain and bowel dysfunction but without clear symptoms or signs of organic disease. If their diagnostic conclusions are the same (55%), they have a low probability of diagnostic error (3%). If they show a different diagnostic hypothesis, other markers of disease, for example ASCA, can be used. abstract_id: PUBMED:24204070 Chronic abdominal wall pain misdiagnosed as functional abdominal pain. Introduction: The abdominal wall is often neglected as a cause of chronic abdominal pain. The aim of this study was to identify chronic abdominal wall pain syndromes, such as anterior cutaneous nerve entrapment syndrome (ACNES), in a patient population diagnosed with functional abdominal pain, including irritable bowel syndrome, using a validated 18-item questionnaire as an identification tool. Methods: In this cross-sectional analysis, 4 Dutch primary care practices employing physicians who were unaware of the existence of ACNES were selected. A total of 535 patients ≥18 years old who were registered with a functional abdominal pain diagnosis were approached when they were symptomatic to complete the questionnaire (maximum 18 points). Responders who scored at least the 10-point cutoff value (sensitivity, 0.94; specificity, 0.92) underwent a diagnostic evaluation to establish their final diagnosis. The main outcome was the presence and prevalence of ACNES in a group of symptomatic patients diagnosed with functional abdominal pain. Results: Of 535 patients, 304 (57%) responded; 167 subjects (31%) recently reporting symptoms completed the questionnaire. Of 23 patients who scored above the 10-point cutoff value, 18 were available for a diagnostic evaluation. In half of these subjects (n = 9) functional abdominal pain (including IBS) was confirmed.
However, the other 9 patients were suffering from abdominal wall pain syndrome, 6 of whom were diagnosed with ACNES (3.6% prevalence rate of symptomatic subjects; 95% confidence interval, 1.7-7.6), whereas the remaining 3 harbored a painful lipoma, an abdominal herniation, and a painful scar. Conclusion: A clinically relevant portion of patients previously diagnosed with functional abdominal pain syndrome in a primary care environment suffers from an abdominal wall pain syndrome such as ACNES. abstract_id: PUBMED:31120214 Abdominal pain. Abdominal pain can be part of a variety of abdominal and non-abdominal conditions. Individual symptoms and signs from history taking and physical examination have limited discriminatory value for a clear diagnosis. Additional laboratory testing strategies and imaging techniques can provide more guidance in this respect. In this article, we will answer specific practical questions with respect to abdominal pain. Topics discussed include various abdominal conditions, diagnostic markers, the diagnostic value of imaging techniques and new treatments for irritable bowel syndrome. abstract_id: PUBMED:21395574 Abdominal migraine: an under-diagnosed cause of recurrent abdominal pain in children. Objective: Our objective was to demonstrate that, despite recognition by both the gastroenterology and headache communities, abdominal migraine (AM) is an under-diagnosed cause of chronic, recurrent, abdominal pain in childhood in the USA. Background: Chronic, recurrent abdominal pain occurs in 9-15% of all children and adolescents. After exclusion of anatomic, infectious, inflammatory, or other metabolic causes, "functional abdominal pain" is the most common diagnosis of chronic, idiopathic, abdominal pain in childhood. Functional abdominal pain is typically categorized into one, or a combination of, the following 4 groups: functional dyspepsia, irritable bowel syndrome, AM, or functional abdominal pain syndrome. The International Classification of Headache Disorders (ICHD-2) defines AM as an idiopathic disorder characterized by attacks of midline, moderate to severe abdominal pain lasting 1-72 hours with vasomotor symptoms, nausea and vomiting, and includes AM among the "periodic syndromes of childhood that are precursors for migraine." Rome III Gastroenterology criteria (2006) separately established diagnostic criteria and confirmed AM as a well-defined cause of recurrent abdominal pain. Methods: Following institutional review board approval, a retrospective chart review was conducted on patients referred to an academic pediatric gastroenterology practice with the clinical complaint of recurrent abdominal pain. ICHD-2 criteria were applied to identify the subset of children fulfilling criteria for AM. Demographics, diagnostic evaluation, treatment regimen and outcomes were collected. Results: From an initial cohort of 600 children (ages 1-21 years; 59% females) with recurrent abdominal pain, 142 (24%) were excluded on the basis of their ultimate diagnosis. Of the 458 patients meeting inclusion criteria, 1824 total patient office visits were reviewed. Three hundred eighty-eight (84.6%) did not meet criteria for AM, 20 (4.4%) met ICHD-2 formal criteria for AM and another 50 (11%) had documentation lacking at least 1 criterion, but were otherwise consistent with AM (probable AM). During the observation period, no children seen in this gastroenterology practice had received a diagnosis of AM.
Conclusion: Among children with chronic, idiopathic, recurrent abdominal pain, AM represents about 4-15%. Given the spectrum of treatment modalities now available for pediatric migraine, increased awareness of cardinal features of AM by pediatricians and pediatric gastroenterologists may result in improved diagnostic accuracy and early institution of both acute and preventative migraine-specific treatments. abstract_id: PUBMED:38479421 Non-distinctive abdominal pain: is it diverticular disease or not? Acute abdominal pain may relate to specific organ systems and needs an interdisciplinary approach with close collaboration between internal and surgical disciplines. The main objective is to shorten the diagnostic work-up between symptom onset and therapy. After clarifying the five W-questions (when, how, how long, why, and where), abdominal ultrasound, ECG, laboratory diagnostics, and early computed tomography should be performed. For the most part, chronic abdominal pain is caused by disorders of the gut-brain axis, such as irritable bowel syndrome. Because of synaptic plasticity, the processing of pain is dynamic and cannot be related to a single organ system. This problem is obvious in patients with irritable bowel syndrome and colonic diverticula, which may be interpreted as symptomatic uncomplicated diverticular disease (SUDD, type 3a). However, a reliable clinical differentiation between both groups is not possible. The establishment of SUDD (type 3a) considerably widened the application area of mesalazine. abstract_id: PUBMED:33116042 Can irritable bowel syndrome be detected by ultrasound? Functional gastrointestinal disease is one in which gastrointestinal symptoms persist chronically or recurrently. This disease is challenging because it does not have an organic cause that can be detected in routine laboratory tests. Among them, the symptoms of irritable bowel syndrome (IBS), which is a type of functional gastrointestinal tract disease, include abnormal bowel movements associated with abdominal pain. However, no specific test has been established to definitively diagnose these diseases, including IBS. The traditional Rome IV diagnostic criteria are used to diagnose IBS by assessing subjective symptoms. However, it has been suggested that IBS is difficult to diagnose using the Rome IV criteria among unconscious or cognitively impaired patients. It is recommended that abdominal ultrasonography be used to assess IBS with diarrhea and constipation. Previously, constipation among elderly people who ingested food orally was objectively assessed by ultrasound, and colonic fecal distribution patterns were classified in constipated patients and healthy people. Objective visualization of the large intestine was used to assess constipation. Therefore, fecal retention among adults and elderly individuals was reported using ultrasonography. It was suggested that stool retention could be confirmed by observing the hyperechoic region of the rectum. Strong hyperechoic regions with acoustic shadows in the rectum indicate the presence of hard stools, thus enabling medical workers to identify constipation. In the future, ultrasonography may be useful for diagnosing IBS in unconscious patients or those with cognitive decline. Answer: Routine abdominal ultrasound does not appear to significantly enhance diagnostic accuracy in irritable bowel syndrome (IBS).
A study indicated that while routine ultrasound scanning in IBS patients detected some abnormalities, these were often minor and did not result in additional therapeutic measures. The study concluded that a positive approach to diagnosing IBS without routine ultrasound is a safe policy, as routine scanning could be counter-productive by detecting many minor abnormalities that may pose further therapeutic dilemmas (PUBMED:8677992). However, abdominal ultrasound can play a role in differentiating between organic and functional bowel diseases. Intestinal ultrasound has shown high specificity and a good negative predictive value in diagnosing inflammatory bowel disease, which can be important in young patients presenting with abdominal pain and irregular bowel habits without clear symptoms or signs of organic diseases (PUBMED:11507357, PUBMED:11213411). In the context of abdominal pain after bariatric surgery, abdominal CT scans were found to be the most frequently performed diagnostic test and yielded the highest number of diagnoses, suggesting that ultrasound is one of several important diagnostic tools in this specific patient group (PUBMED:37696732). Moreover, abdominal ultrasound has been suggested as a useful tool for diagnosing conditions like irritable bowel syndrome in unconscious patients or those with cognitive decline, as it can objectively visualize stool retention in the large intestine (PUBMED:33116042). In summary, while routine abdominal ultrasound is not necessary for all IBS patients, it can be valuable in specific circumstances, particularly when differentiating IBS from other organic conditions or assessing certain patient groups, such as those with cognitive impairments or post-bariatric surgery abdominal pain.
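Before the next question, it is worth noting that the accuracy figures quoted above (sensitivity 74%, specificity 98%, predictive values of 92%, likelihood ratios of 35 and 0.26 in PUBMED:11213411) all derive from a single 2x2 table of test result versus final diagnosis. The sketch below recomputes them in Python; the per-cell counts are an assumption reconstructed from the reported cohort (313 patients, of whom 77 had inflammatory bowel disease), since the abstract gives only percentages, but they reproduce the published figures closely.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy metrics from a 2x2 diagnostic table."""
    sens = tp / (tp + fn)                  # true-positive rate
    spec = tn / (tn + fp)                  # true-negative rate
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    lr_pos = sens / (1 - spec)             # likelihood ratio, positive test
    lr_neg = (1 - sens) / spec             # likelihood ratio, negative test
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy ("efficacy")
    return sens, spec, ppv, npv, lr_pos, lr_neg, acc

# Reconstructed (assumed) counts: 77 patients with inflammatory bowel
# disease and 236 without, consistent with the reported 313-patient cohort.
tp, fn = 57, 20   # 57/77 of diseased test positive  -> sensitivity ~0.74
fp, tn = 5, 231   # 231/236 of healthy test negative -> specificity ~0.98
sens, spec, ppv, npv, lr_pos, lr_neg, acc = diagnostic_metrics(tp, fp, fn, tn)
print(f"sens {sens:.2f}, spec {spec:.2f}, PPV {ppv:.2f}, NPV {npv:.2f}")
print(f"LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}, accuracy {acc:.2f}")
# Output: sens 0.74, spec 0.98, PPV 0.92, NPV 0.92
#         LR+ 34.9, LR- 0.27, accuracy 0.92  (reported: 35, 0.26, 92%)

The small mismatch in the negative likelihood ratio (0.27 here versus the reported 0.26) reflects rounding in the published percentages, not an error in the formulas.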
Instruction: Is acupuncture in addition to conventional medicine effective as pain treatment for endometriosis? Abstracts: abstract_id: PUBMED:20728977 Is acupuncture in addition to conventional medicine effective as pain treatment for endometriosis? A randomised controlled cross-over trial. Objective: The aim of the study was to examine whether acupuncture is an effective additional pain treatment for endometriosis. Study Design: One hundred and one women aged 20-40 years participated and were randomised into two groups, each receiving two units of 10 acupuncture treatments, twice a week over a period of five weeks. Group 1 (n = 47) received verum-acupuncture during the first series, and group 2 (n = 54) received non-specific acupuncture. After the first unit of 10 treatments, an observation period of at least two menstruation cycles was set, followed by a second unit, according to a cross-over design. Prior to the study (during a two-week run-in period) the patients' actual pain intensity was surveyed. Throughout the study period, participants were asked to keep a 'pain protocol', in which they defined and recorded their pain according to the 10-point visual analogue scale (VAS). Results: Eighty-three out of a total of 101 patients finished the study. Group 1 showed a significant reduction of pain intensity after the first 10 treatments. In comparison, group 2 showed significant pain relief only after the cross-over. Conclusion: Acupuncture treatment on specific acupuncture points appears to be an effective pain treatment for endometriosis, but this has to be confirmed in further studies. abstract_id: PUBMED:27069371 Is acupuncture effective in the treatment of pain in endometriosis? Introduction: Endometriosis is a multifactorial, estrogen-dependent, inflammatory gynecological condition, often with long-lasting visceral pelvic pain of different origins, and infertility among women. Current management options for patients are often inadequate, with side effects for many, for whom acupuncture techniques could be an alternative. Earlier studies have discussed the efficacy of acupuncture, but not its methodological aspects. Objectives: To summarize the documented clinical effects of acupuncture on rated visceral pelvic endometriosis-related pain, and associated variables among individuals, within and between studied groups, and to discuss the methodological treatment aspects. Methods: Published full-text clinical studies, case reports, and observational studies with abstracts written in English were searched by using the keywords "Acupuncture and Endometriosis" in databases such as PubMed, Web of Science, and CINAHL. The reporting guideline, Standards for Reporting Interventions in Clinical Trials of Acupuncture, was used for the methodological report. Results: Three studies were found, including 99 women aged 13-40 years with diagnosed endometriosis. The studies were different in research design, needle stimulation techniques, and evaluation instruments. Methodological similarities were seven to 12 needle insertions per subject/session, and 15-25 minutes of needle retention time. The needles were placed in the lower back/pelvic-abdominal area, in the shanks, feet, and hands. Treatment numbers varied from nine to 16 and patients received one to two treatments per week. The similarity in reported treatment effects across the quoted studies, irrespective of research design or treatment technique, was a decrease in rated pain intensity.
Discussion: Meta-analysis is the standard procedure for the evaluation of evidence of treatment effects, i.e., on a group level, usually without analysis of individual responses, even with an obvious spread in the results, leading to a lack of guidance for treating the individual patient. Because pain is conceptualized as subjective, the individual aspect should serve as the basis for the analysis to allow clinical recommendations. From a physiological and a Western medical perspective, acupuncture can be regarded as a type of sensory stimulation that induces changes in the function of the central nervous system that can partly explain the decrease in perceived pain in response to acupuncture treatment, irrespective of the technique. Conclusion: Endometriosis is often painful, although of various origins, and standard treatments may be insufficient or involve side effects. Based on the reported studies, acupuncture could be tried as a complement as it is an overall safe treatment. In the future, studies designed to evaluate effectiveness between treatment strategies, rather than efficacy designs, would be preferred, as would analyses of treatment effects in individual patients. abstract_id: PUBMED:20597619 Acupuncture in reproductive medicine. Acupuncture is increasingly being used in reproductive medicine. This review summarizes the evidence of acupuncture in pain relief for oocyte retrieval, improving pregnancy outcomes of in vitro fertilization treatment, management of ovulation disorders, male subfertility, primary dysmenorrhea, endometriosis and menopausal symptoms. However, most of the studies are nonrandomized uncontrolled trials, case reports or case series. For randomized controlled trials, the sample sizes are underpowered and blinding of assessors is lacking. Different acupuncture protocols and controls are used. These heterogeneities make it difficult to compare studies and draw any firm conclusions. Further studies should also evaluate the cost-effectiveness of acupuncture and investigate the underlying mechanism of acupuncture treatment. abstract_id: PUBMED:31871528 Acupuncture for Endometriosis: A Case Study. Background: Endometriosis is the cause of chronic pelvic pain in 70% of all chronic pelvic pain cases in women in the United States. Traditional Chinese Medicine may present either an adjunct or optional treatment for women with chronic pelvic pain due to endometriosis. Objective: To detail possible effects of Traditional Chinese Medicine on pelvic pain secondary to endometriosis through presentation of a single case. Design, Setting, Patient: A nulliparous 43-year-old woman presented with endometriosis and fibroids. The patient was treated twice weekly for a 6-month period in a Traditional Chinese Medicine clinic in Berkeley, CA. Treatment included 6 months of acupuncture at the following points: CV 3, 4, and 5; ST 25 and 36; GV 20; LI 4; LV 3; KI 3; SP 6 and 9; and Zi Gong Xue and Ah Shi points, along with 1 month of herbal treatment in the form of both tea and tea pills. Intervention: Acupuncture and herbal formulas. Main Outcome Measures: Decrease in pain level and symptoms measured by the patient-assessed value of pain through the Pain Quality Assessment Scale. Results: The patient's overall disposition and pain level improved over the 6-month period. Conclusions: Acupuncture did have some positive effect on pain management.
Background: Endometriosis is a prevalent gynaecological condition, significantly affecting women's lives. Clinical presentations may vary from absence of symptoms to complaints of chronic pelvic pain, most notably dysmenorrhoea. The management of pain in endometriosis is currently inadequate. Acupuncture has been studied in gynaecological disorders but its effectiveness for pain in endometriosis is uncertain. Objectives: To determine the effectiveness and safety of acupuncture for pain in endometriosis. Search Strategy: We searched the Cochrane Menstrual Disorders and Subfertility Group (MSDG) Specialised Register of controlled trials, Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE, EMBASE, CINAHL, AMED, PsycINFO, CNKI and TCMDS (from inception to 2010) and reference lists of retrieved articles. Selection Criteria: Randomised single- or double-blind controlled trials enrolling women of reproductive age with a laparoscopically confirmed diagnosis of endometriosis and comparing acupuncture (body, scalp or auricular) to either placebo or sham, no treatment, conventional therapies or Chinese herbal medicine. Data Collection And Analysis: Three authors independently assessed risk of bias and extracted data; we contacted study authors for additional information. Meta-analyses were not performed as only one study was included. The primary outcome measure was decrease in pain from endometriosis. Secondary outcome measures included improvement in quality of life scores, pregnancy rate, adverse effects and rate of endometriosis recurrence. Main Results: Twenty-four studies were identified that involved acupuncture for endometriosis; however, only one trial, enrolling 67 participants, met all the inclusion criteria. The single included trial defined pain scores and cure rates according to the Guideline for Clinical Research on New Chinese Medicine. Dysmenorrhoea scores were lower in the acupuncture group (mean difference -4.81 points, 95% confidence interval -6.25 to -3.37, P < 0.00001) using the 15-point Guideline for Clinical Research on New Chinese Medicine for Treatment of Pelvic Endometriosis scale. The total effective rate ('cured', 'significantly effective' or 'effective') for auricular acupuncture and Chinese herbal medicine was 91.9% and 60%, respectively (risk ratio 3.04, 95% confidence interval 1.65 to 5.62, P = 0.0004). The improvement rate did not differ significantly between auricular acupuncture and Chinese herbal medicine for cases of mild to moderate dysmenorrhoea, whereas auricular acupuncture did significantly reduce pain in cases of severe dysmenorrhoea. Data were not available for secondary outcome measures. Authors' Conclusions: The evidence to support the effectiveness of acupuncture for pain in endometriosis is limited, based on the results of only a single study that was included in this review. This review highlights the necessity for developing future studies that are well-designed, double-blinded, randomised controlled trials that assess various types of acupuncture in comparison to conventional therapies. abstract_id: PUBMED:37453585 Acupuncture and moxibustion for endometriosis: A systematic review and analysis. Objectives: This study aimed to examine the effect of acupuncture on symptoms and health-related quality of life in patients with endometriosis.
Methods: Nine biomedical databases were searched up to April 2022 to identify randomized controlled trials of acupuncture and/or moxibustion used alone or as an adjunct to guideline-recommended pharmacotherapy for the treatment of endometriosis. One reviewer extracted data and another verified the data. A random effects model was used to calculate mean differences. Results: Fifteen trials involving 1018 patients met the inclusion criteria, but diversity in comparisons and outcome measures prevented meta-analysis. Compared to sham acupuncture, manual acupuncture was more effective at reducing the dysmenorrhea VAS pain score (mean difference [MD] -2.40, 95% CI [-2.80, -2.00]; moderate certainty evidence), pelvic pain VAS score (MD -2.65, 95% CI [-3.40, -1.90]; high certainty evidence) and dyspareunia VAS scores (MD -2.88, [-3.83, -1.93]), lessened the size of ovarian cysts (MD -3.88, 95% CI [-7.06, -0.70]), and improved quality of life. Manual acupuncture plus conventional therapy, and warm needle acupuncture alone, resulted in greater improvements in quality of life than conventional therapy. Among the six studies that reported safety, fewer adverse events were reported in participants who received acupuncture or moxibustion. Conclusions: Low to moderate certainty evidence from single studies showed that manual acupuncture may improve pain-related symptoms and quality of life; however, there is insufficient evidence on the overall effectiveness of acupuncture and moxibustion for endometriosis. abstract_id: PUBMED:21442808 Effect of abdominal acupuncture on pain of pelvic cavity in patients with endometriosis. Objective: To compare the effect of abdominal acupuncture and Chinese medicine on relieving pelvic pain in patients with endometriosis. Methods: Fifty-eight cases were randomly divided into 2 groups. Thirty cases were in the abdominal acupuncture group and 28 cases in the Chinese medicine group. Abdominal acupuncture points such as Zhongwan (CV 12), Xiawan (CV 10) and Qihai (CV 6) were adopted for the abdominal acupuncture group, and Tianqi Tongjing Capsule (radix notoginseng capsule for dysmenorrhea) was taken by the Chinese medicine group. After a 3-month treatment, the scores of the McGill pain questionnaire, the level of serum CA125, the average radial line of ovarian endometrial cysts, and the sum of 3 radial lines of the uterus in patients with adenomyosis as a complication were observed in both groups before and after treatment. Results: The McGill estimation of 6 items for both groups improved obviously after treatment (all P < 0.01, except the number of selected descriptors in the Chinese medicine group). The differences in the results of the McGill estimation of 6 items after treatment were statistically significant; the scores in the abdominal acupuncture group were obviously better than those in the Chinese medicine group (all P < 0.01). The differences in CA125 levels within one group or between the 2 groups were statistically significant (P < 0.01, P < 0.05). The difference in the radial lines of patients with ovarian endometrial cysts within one group or between the 2 groups after treatment was not statistically significant (all P > 0.05). For the value of 3 radial lines of the uterus of patients with adenomyosis within one group before and after treatment, only the difference in the abdominal acupuncture group was statistically significant (P < 0.01).
The differences before and after treatment in the Chinese medicine group and the difference between the 2 groups after treatment were not statistically significant (all P > 0.05). Conclusion: Abdominal acupuncture was superior to Tianqi Tongjing Capsule (radix notoginseng capsule for dysmenorrhea) in relieving pelvic pain caused by endometriosis and in reducing serum CA125 levels. However, the effects on reducing the size of ovarian endometrial cysts and the size of the uterus with adenomyosis were not significant. Therefore, abdominal acupuncture is a better choice for endometriosis with pain as the chief complaint. abstract_id: PUBMED:38115310 Acupuncture for abdominal wall endometriosis: A case report. Introduction: Endometriosis refers to a series of symptoms caused by the presence of endometrial-like tissue outside the uterine cavity. Among extrapelvic forms, abdominal wall endometriosis (AWE) is very common. Acupuncture has been widely used as an alternative therapy for multiple conditions, such as sequelae of stroke, pain, and facial paralysis. To our knowledge, no case report of acupuncture for the treatment of AWE has previously been published. We report a case of acupuncture in the treatment of abdominal wall endometriosis. Rationale: AWE can result in symptoms including pelvic pain, dysmenorrhea, and infertility, and acupuncture might be effective in its treatment. Patient Concerns: A 38-year-old woman complained of worsening pain in a mass located in her abdominal wall. Diagnoses: The patient was diagnosed with AWE and had a surgical history of excision of a deep abdominal wall mass with repair of the abdominal wall defect using a patch. According to traditional Chinese medicine theory, the diagnosis was Zhengjia (qi stagnation and blood stasis pattern). Interventions: Guided by the theory of disentanglement, acupuncture, cupping, and needle therapy were used to promote qi circulation, activate blood circulation, relieve pain, and dissipate masses. Outcomes: After treatment, abdominal ultrasound showed that the mass gradually decreased in size. Conclusion: Acupuncture can effectively relieve the pain caused by abdominal wall endometriosis and reduce the size of abdominal wall endometriosis masses. abstract_id: PUBMED:31871521 Blood Support for Fertility, Pregnancy, and Postpartum Using Acupuncture from Ancient Medical Texts' Theory. Conventional medicine suggests that adequate maternal blood is essential for a woman to become pregnant, maintain a healthy pregnancy to term, and recover postpartum. Adequate blood is recognized as necessary for successful embryo attachment and for providing fetal nourishment to support the fetus' continued growth and development. It is common for women desiring pregnancy to take folic acid and sometimes iron to support their blood. Energetic medicine, particularly phytotherapy, offers many therapeutic options to treat the blood, particularly in reproductive medicine. Acupuncture, however, is not as commonly considered on its own for building, preserving, and protecting the Blood, although this modality may be more utilized for treating Blood Stasis and pain conditions such as endometriosis. A synthesis of the ancient medical texts written 4000 BC to 1619 AD, as taught by Tran Viet Dzung, MD, and Nguyen Van Nghi, MD, scholars of the ancient medical texts, offers many clinically relevant acupuncture therapies to consider when treating the Blood to support fertility, pregnancy, and postpartum.
This article offers advanced acupuncture techniques clinically used to build, preserve, and protect the Blood, derived from a synthesis of basic Energetic concepts mentioned throughout the ancient medical texts, that can be applied in clinics and research in reproductive medicine today. Acupuncture points, with reasons for using them, are reviewed with examples in application and treatment to support the infertile, pregnant, and postpartum patient. The acupuncture treatment methods offered in this article have been used throughout the world by medical acupuncturists trained by Drs. Viet Dzung and Van Nghi with positive clinical results. Further clinical observation and research are warranted to evaluate the full potential for acupuncture to build and preserve the Blood, an essential component for creating life in preparing for, preserving, and recovering from pregnancy. abstract_id: PUBMED:36716811 Efficacy of acupuncture for endometriosis-associated pain: a multicenter randomized single-blind placebo-controlled trial. Objective: To evaluate the efficacy and safety of acupuncture in the treatment of endometriosis-associated pain. Design: A multicenter, randomized, single-blind, placebo-controlled trial. Institutions: Four tertiary hospitals in Jiangxi and Hainan Provinces. Subjects: Women with endometriosis-associated pain aged between 20 and 40 years. Intervention: Subjects were assigned randomly to receive either acupuncture or sham acupuncture treatment for 12 weeks, starting one week before each expected menstruation and administered as a 30-minute session once per day, 3 times a week. During the menstruation period, acupuncture was administered daily when pelvic pain associated with endometriosis occurred. After acupuncture or sham acupuncture treatment, the subjects were followed for another 12 weeks. Main Outcome Measures: Changes in maximum pain as assessed with the visual analog scale (VAS) for various types of pelvic pain, duration of dysmenorrhea, and scores on the Multidimensional Pain Inventory, Beck Depression Inventory, Profile of Mood States, and Endometriosis Health Profile from baseline to week 12 and week 24. Results: A total of 106 women were assigned randomly to the acupuncture and sham groups. In the acupuncture group, the reduction in the dysmenorrhea VAS score was significantly greater after treatment, but not at the end of the trial, compared to the sham group. The duration of pain was significantly shorter in the acupuncture group. All test scores were improved to a significantly greater extent in the acupuncture group than in the sham group at week 12 but not at week 24. Changes in nonmenstrual pelvic pain and dyspareunia VAS scores were not different between the groups. No severe adverse events or differences in adverse events were recorded. Conclusion: Acupuncture is an effective and safe method of relieving dysmenorrhea, shortening the pain duration, and improving wellbeing and quality of life in women with endometriosis-associated pain, although its efficacy fades after treatment is discontinued. Clinical Trial Registration Number: NCT03125304. Answer: Based on the provided abstracts, acupuncture in addition to conventional medicine appears to be effective as a pain treatment for endometriosis in several studies. One randomized controlled cross-over trial showed that acupuncture treatment on specific acupuncture points significantly reduced pain intensity for endometriosis, suggesting that acupuncture could be an effective additional pain treatment (PUBMED:20728977).
Another study found that manual acupuncture was more effective than sham acupuncture at reducing dysmenorrhea visual analogue scale (VAS) pain scores, pelvic pain VAS scores, and dyspareunia VAS scores, and at improving quality of life (PUBMED:37453585). Furthermore, a multicenter randomized single-blind placebo-controlled trial concluded that acupuncture is an effective and safe method for relieving dysmenorrhea, shortening the pain duration, and improving wellbeing and quality of life in women with endometriosis-associated pain, although the benefit faded after treatment was discontinued (PUBMED:36716811). However, the evidence is not unequivocal. The abstract from PUBMED:21901713 highlights that the evidence to support the effectiveness of acupuncture for pain in endometriosis is limited, based on the results of only a single study that was included in their review. They emphasize the need for well-designed, double-blinded, randomized controlled trials to assess the effectiveness of acupuncture compared to conventional therapies. In summary, while there is some evidence suggesting that acupuncture may be an effective adjunctive treatment for pain in endometriosis, the overall body of evidence is limited and further research is needed to confirm these findings and provide more definitive guidance for clinical practice.
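A brief methodological note before the next question: several abstracts in this block summarise dichotomous outcomes as risk ratios with 95% confidence intervals (for example, the risk ratio of 3.04 cited from PUBMED:21901713). The sketch below shows the standard log-transformation method for computing such an interval. The counts are hypothetical, chosen purely for illustration; they are not the raw data of any trial cited here.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a CI via the log method."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Large-sample standard error of ln(RR)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts (34/37 responders vs 18/30), for illustration only
rr, lo, hi = risk_ratio_ci(34, 37, 18, 30)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```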
Instruction: Mobility and dementia: is physiotherapy treatment during respite care effective? Abstracts: abstract_id: PUBMED:10389044 Mobility and dementia: is physiotherapy treatment during respite care effective? Background: Mobility problems experienced by elderly people with a dementia are associated with falls, fractures and admission to long-term care. A hospital respite care admission is therefore often seen as an opportunity to provide physiotherapy treatment. Aim: To determine whether elderly people with a dementia and a mobility problem show a greater improvement in mobility skills if given physiotherapy treatment than if given a non-physical activities intervention during a hospital respite admission. Method: A controlled randomized multicentre trial with independent blinded assessment. The Southampton Mobility Assessment (mobility score) and Two Minute Walking Test (distance walked) were undertaken at the beginning and end of the study admission and at the beginning of the next respite admission. Following the first assessment, participants were randomized to either physiotherapy or activities. Results: Eighty-one participants from 12 clinical centres were enrolled, with a mean age of 81.9 years and a CAPE I/O score of 2. During the study admission there was a non-significant trend for a lower reduction in mobility score in the physiotherapy group (Mann-Whitney; p = 0.614) and a non-significant trend for a greater decrease in distance walked in the activities group (t-test; p = 0.325). Discussion: The results of this trial do not support the positive changes demonstrated elsewhere. However, changes in respite care during the early stages of this trial may have produced differences between the sample for this trial and that for the pilot study. This trial was therefore underpowered. Conclusion: This trial suggests that future research needs to change the focus from clinical settings to presentations.
Reviewers' Conclusions: Current evidence does not demonstrate any benefits or adverse effects from the use of respite care for people with dementia or their caregivers. These results should be treated with caution, however, as they may reflect the lack of high-quality research in this area rather than an actual lack of benefit. Given the frequency with which respite care is advocated and provided, well-designed trials in this area are needed. abstract_id: PUBMED:31668639 Residential Respite Care Use Is Associated With Fewer Overall Days in Residential Aged Care. Objectives: To examine the use of residential respite care and determine associations between respite care and total days spent in residential care (respite days plus long-term care days). Design: A retrospective cohort study of individuals accessing aged care services in Australia using the National Historical Cohort of the Registry of Senior Australians was conducted. Setting: Residential respite care (short stays in residential aged care homes) and long-term residential care accessed in all government-subsidized residential aged care homes in Australia. Participants: This study included people who were approved for government-subsidized residential respite care between January 2005 and June 2012 (n = 480,862) and included a 2-year follow-up period. Methods: Poisson regression models were used to examine associations between use of residential respite care and number of days spent in residential care. Results: Of people approved for residential respite care, 36.9% used their approval within 12 months (32.0% used respite once and went directly to long-term care without returning home, 40.7% used respite once and did not go directly to long-term care, and 27.3% used respite ≥2 times). Compared with people who did not use respite care, using respite care once and not going directly to long-term care was associated with fewer total days in residential care [incidence rate ratio (IRR) 0.68, 95% confidence interval (CI) 0.67-0.69; P < .001] and using respite care ≥2 times was also associated with fewer days (IRR 0.86, 95% CI 0.84-0.87, P < .001). Using respite care once and going directly to long-term care was associated with more days in residential care (IRR 1.11, 95% CI 1.10-1.12, P < .001). Conclusions And Implications: Using residential respite care was associated with fewer days spent in residential care when people returned home after using respite. The findings suggest that using residential respite as intended by returning home after use achieves the goal of helping people stay living at home longer.
This study sought to understand the perceptions of carers who had used cottage respite in comparison to other models, and to explore the contribution of cottage respite to supporting carers to continue in their role and maintain their care recipient (CR) living at home. Semi-structured interviews were conducted with 126 family carers who had used one of two New South Wales-based respite cottages within a 2-year period, 67 of whom had also used RACF respite. Thematic analysis revealed four main themes around the benefits of cottage respite: (a) an effective essential service, (b) flexibility, (c) familiarity and (d) appropriateness, especially for early-stage or younger-onset dementia. Carers indicated that the more homely, familiar and intimate cottage model of respite care was preferred over the larger, institutional-style RACF respite setting. Carers credited the cottage model of respite service with delaying their need for permanent residential placement by over 12 months. The cottage respite model provides an important avenue for supporting the individual needs of dementia dyads, with the potential to delay permanent placement, and should be offered more broadly to provide people with more choice about their care. abstract_id: PUBMED:27245986 Effectiveness of respite care in supporting informal caregivers of persons with dementia: a systematic review. Objective: Supporting informal caregivers of persons with dementia is considered to be an effective strategy for improving the well-being of caregivers and care recipients and for delaying nursing home placement. Although considerable research has been conducted to investigate the effectiveness of psychoeducational interventions, cognitive behavioral therapy, and occupational therapy, research into the effectiveness of respite care is rare. This systematic review aims to investigate the effectiveness of different types of respite care in supporting informal caregivers of persons with dementia. Methods: A systematic literature search was conducted using Web of Science and PubMed, and the Quality Assessment Tool for Quantitative Studies was used to assess the methodological quality. Randomized controlled trials, quasi-experimental studies, pretest-posttest studies without a control group, and cohort studies were included. Results: Seventeen papers met the inclusion criteria. Day care services are effective in decreasing caregiver burden and behavioral problems in persons with dementia, but they also accelerate time to nursing home admission. The results of temporary residential admission are rather mixed and show unexpected adverse effects on both caregivers and care recipients. High-quality comparable evidence on community-based respite care is still lacking, although earlier qualitative evidence indicated promising results. Conclusion: Unlike in previous reviews, we were able to draw some conclusions about the effectiveness of some types of respite care. There is nonetheless still a need for new intervention studies measuring the impact of respite care, especially in-home respite care programs, on the caregiver, the care recipient, and health care resource utilization. Copyright © 2016 John Wiley & Sons, Ltd. abstract_id: PUBMED:27172782 Handover between home and respite care concerning people with dementia: An online survey. Introduction: Respite care for people with dementia is increasingly promoted.
However, little is known about the safety-relevant aspects of handover between home and respite care facilities for people with dementia. We thus focus on the safety-relevant information exchange taking place at admission to and discharge from respite care and discuss the need for improvement. Methods: A pilot online survey of staff in management positions in different types of respite care facilities in Germany was conducted between April and June 2014. The questionnaire contained 65 (open and closed) questions. The answers were subjected to exploratory data analysis using descriptive statistics and qualitative content analysis. Results: 50 respite care facility management staff participated. Nearly half of them reported involving external healthcare professionals (e.g., physicians). The results showed that early identification of informal caregivers or people with dementia in need of support (e.g., so-called red flags) is poorly addressed. Often, there are no medical, medication-, and handover-related documents. Information delivery deadlines are not clearly defined. Only 38% of the participating respite care facilities reported regular measurement of the quality of information exchange, and only 15% of the management staff reported that they conducted follow-up calls. Most participants reported having standardized procedures. Conclusion: Handover between home and respite care facilities for people with dementia needs to be improved. Information delivery deadlines must be clearly defined. Communication training for respite care staff, including narrative intake methods, could enhance data collection at admission. Overall, improved coordination between the parties involved would have a positive impact on safety-relevant aspects of information exchange. abstract_id: PUBMED:24435941 Respite care for people with dementia and their carers. Background: Caring for someone with dementia can be emotionally and physically demanding. Respite care is any intervention designed to give rest or relief to caregivers. It is not clear what positive and negative effects such care may have on them, or on people with dementia. Objectives: To assess the benefits and harms of respite care for people with dementia and their caregivers, in particular the effect of respite care on rates of institutionalisation. Search Methods: The trials were identified from a search of ALOIS, the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group, using the terms respite* OR daycare OR caregiver* relief. ALOIS contains up-to-date records from all major healthcare databases and many ongoing trial databases. Selection Criteria: Randomised controlled trials comparing respite care with a control intervention for people with dementia. Data Collection And Analysis: Two review authors carried out study selection independently and reached a consensus through discussion. Data were extracted by a single review author. The review authors contacted all investigators for methodological details not reported in the text and for additional data for three studies included in the previous version of the review. Main Results: Four trials are now included in the review, with 753 participants. They were different in many ways, including the intervention, duration, outcomes and control group, so pooling of data was not possible. Overall, the quality of the evidence was rated as very low.
Re-analysis of outcomes using data from the published studies found no significant effects of respite care compared to no respite care on any caregiver variable. When respite care was compared to polarity therapy, a significant effect was found in favour of polarity therapy for caregiver-perceived stress (n = 38, MD 5.80, 95% CI 1.43 to 10.17), but not for other measures of psychological health and other caregiver outcomes. No studies reported evaluable data on outcomes related to the people with dementia. Authors' Conclusions: Current evidence does not demonstrate any benefits or adverse effects from the use of respite care for people with dementia or their caregivers. These results should be treated with caution, however, as they may reflect the lack of high-quality research in this area rather than an actual lack of benefit. Given the frequency with which respite care is advocated and provided, well-designed trials are needed in this area. abstract_id: PUBMED:8833274 Effects of respite care on patients with dementia and their caregivers. The purpose of the literature review discussed in this article was to determine the effect of formal respite care on patients with dementia and their caregivers. Three computerized databases were searched for relevant English-language articles published from 1975 to 1994, and the bibliographies of retrieved articles were systematically reviewed for additional references. Five studies met the following inclusion criteria: original research; controlled trial of a defined respite intervention; sample including a dementia population; and at least one outcome measure relating to at least one of eight specified caregiver and patient variables. Four studies met validity criteria and were included for the review. There was little evidence that formal respite care has a significant effect on caregivers' burden, psychiatric status, or physical health; or on patients' cognition, function, physical health, or rate of institutionalization. However, given the small number of controlled studies found and their methodological and conceptual limitations, these data should be interpreted with caution. abstract_id: PUBMED:28659025 Respite in Dementia: An Evolutionary Concept Analysis. Aim: There is a lack of conceptual clarity around 'respite' as it relates to people with dementia and their carers. This study provides clarification on the use and meaning of the term and considers the concept in relation to the dominant care paradigm in dementia, i.e. person-centred care. Methods: Rodgers' (1989) evolutionary framework was employed. A systematic search was conducted on the Pubmed/MedLine, Embase, Cinahl, PsychInfo, Scopus, Web of Science and Cochrane databases (1980-2016, English) with fixed search terms relating to 'respite' and 'dementia'. Papers with primary qualitative data and literature reviews were included. This search was supplemented with snowballing techniques (back/forward searching, generic search engines). Data were analysed thematically, through an iterative process of constant comparison. Results: Respite is understood both as a service that provides a physical break for the carer and as a psychological outcome, i.e. a mental break for the carer, which can be facilitated by formal services, under certain conditions.
The conceptual model outlines how client factors (dyadic relations, recognising/accepting need, carer psychosocial issues, restorative occupation, and stigma) and service factors (model/characteristics, care quality, staff expertise, meaningful occupation for people with dementia, and communication and support) interact to influence a respite outcome. The key antecedent for a positive respite experience is that the carer perceives that mutual benefit is garnered from service use. Conclusion: The term respite can be interpreted as both a service and an outcome. However, it is clear that 'respite', as currently understood, acknowledges the relational experience of the carer only; it is, therefore, potentially damaging to the planning and delivery of person-centred dementia care. We suggest 'restorative care' as a potential alternative nomenclature to respite care, thereby highlighting the importance of providing mutual, personalised health and social care services that serve to enhance care relationships rather than diminish them. abstract_id: PUBMED:35815533 Characteristics of dementia family caregivers who use paid professional in-home respite care. Caring for a person living with dementia (PLWD) can be challenging, making support services like respite important. Of the types of respite, research on paid professional in-home care specific to family caregivers of PLWD is limited. This study aimed to identify characteristics of dementia caregivers using paid in-home respite. A self-administered online survey (N = 98) examined use of in-home respite. Measures assessed depressive symptoms, burden, and self-rated health. Analyses included descriptive statistics and qualitative coding. Most respondents utilized weekly schedules of in-home respite, with a moderate correlation between dependency and hours of respite. Caregivers self-reported average health, depressive symptoms, and mild-to-moderate burden. The majority of users had lowered perceived stress, were satisfied with services, and indicated the importance of in-home respite during a pandemic. Future research should assess changes in the use of paid in-home respite services and the overall landscape of respite options for dementia caregivers. Answer: The effectiveness of physiotherapy treatment during respite care for elderly people with dementia and mobility problems has been explored in research, but the results are not conclusive. A controlled randomized multicentre trial aimed to determine whether physiotherapy treatment could lead to greater improvements in mobility skills compared with a non-physical activities intervention during hospital respite admission. The study involved 81 participants with a mean age of 81.9 years. The results showed a non-significant trend for a lower reduction in mobility score for the physiotherapy group and a non-significant trend for a greater decrease in distance walked in the activities group. However, the trial was underpowered and suggested that future research should change focus from clinical settings to presentations (PUBMED:10389044).
One study found that using residential respite care was associated with fewer overall days spent in residential care when people returned home after using respite, suggesting that using residential respite as intended by returning home after use achieves the goal of helping people stay living at home longer (PUBMED:31668639). In summary, while there is some indication that physiotherapy treatment during respite care might have potential benefits for mobility in elderly people with dementia, the current evidence is not strong enough to draw definitive conclusions. More robust and well-designed studies are needed to determine the effectiveness of physiotherapy and other interventions during respite care for this population.
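A short note on the statistics behind the respite findings above: the registry study of residential respite (PUBMED:31668639) modelled total days in residential care with Poisson regression and reported incidence rate ratios (IRRs). The sketch below shows how an IRR is obtained as the exponentiated coefficient of a Poisson model. The data are synthetic and the rates are invented purely to illustrate the mechanics, not to reproduce the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic data: days in residential care vs. whether respite was used
# and the person returned home afterwards. All numbers are made up.
rng = np.random.default_rng(0)
n = 1000
respite_home = rng.integers(0, 2, n)  # 1 = used respite, returned home
days = rng.poisson(lam=np.where(respite_home == 1, 120, 180))
df = pd.DataFrame({"days": days, "respite_home": respite_home})

# Poisson GLM; the exponentiated coefficient is the IRR
model = smf.glm("days ~ respite_home", data=df,
                family=sm.families.Poisson()).fit()
irr = np.exp(model.params["respite_home"])
ci = np.exp(model.conf_int().loc["respite_home"])
print(f"IRR = {irr:.2f}, 95% CI [{ci.iloc[0]:.2f}, {ci.iloc[1]:.2f}]")
```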
Instruction: Can body mass index help predict outcome in patients with bipolar disorder? Abstracts: abstract_id: PUBMED:19689507 Can body mass index help predict outcome in patients with bipolar disorder? Objective: Several studies have reported a higher prevalence of obesity in patients suffering from bipolar disorder (BD). To study the relation of elevated body mass index (BMI) in patients with BD more closely, we investigated differences in sociodemographic, clinical, and medical characteristics with respect to BMI, with the hypothesis that BMI is related to prognosis and outcome. Methods: We measured the BMI of 276 subjects of a tertiary care sample from the Maritime Bipolar Registry. Subjects were 16 to 83 years old, with psychiatric diagnoses of bipolar I disorder (n = 186), bipolar II disorder (n = 85), and BD not otherwise specified (n = 5). The registry included basic demographic data and details of the clinical presentation. We first examined the variables showing a significant association with BMI; subsequently, we modeled the relationship between BMI and psychiatric outcome using structural equation analysis. Results: The prevalence of obesity in our sample was 39.1%. We found higher BMI in subjects with a chronic course (p < 0.001) and longer duration of illness (p = 0.02), lower scores on the Global Assessment of Functioning Scale (p = 0.02), and on disability (p = 0.002). Overweight patients had more frequent comorbid subthreshold social (p = 0.02) and generalized anxiety disorders (p = 0.05), diabetes mellitus type II (p < 0.001), and hypertension (p = 0.001). Subjects who achieved complete remission of symptoms on lithium showed significantly lower BMI (p = 0.01). Conclusions: Our findings suggest that BMI is associated with the prognosis and outcome of BD. Whether this association is causal remains to be determined. abstract_id: PUBMED:26590023 Medical burden, body mass index and the outcome of psychosocial interventions for bipolar depression. Objective: Individuals with bipolar disorder experience a disproportionately high incidence of medical co-morbidity and obesity. These health-related problems are a barrier to recovery from mood episodes and have been linked with unfavorable responses to pharmacological treatment. However, little is known about whether and how these characteristics affect responses to adjunctive psychotherapy. Method: Embedded in the Systematic Treatment Enhancement Program for Bipolar Disorder was a randomized controlled trial of psychotherapy for bipolar depression comparing the efficacy of intensive psychotherapy plus pharmacotherapy with collaborative care (a three-session psycho-educational intervention) plus pharmacotherapy. We conducted a post-hoc analysis to evaluate whether medical burden and body mass index predicted and/or moderated the likelihood of recovery and time until recovery from a depressive episode among patients in the two treatments. Results: Participants who had medical co-morbidity and body mass index data constituted 199 of the 293 patients in the original Systematic Treatment Enhancement Program for Bipolar Disorder trial. Higher medical burden predicted a lower likelihood of recovery from depression in both treatment conditions (odds ratio = 0.89), but did not moderate responses to intensive psychotherapy vs collaborative care.
Intensive psychotherapy yielded superior recovery rates for individuals of normal body mass index (odds ratio = 2.39) compared with collaborative care, but not among individuals who were overweight or obese. Conclusion: Medical co-morbidity and body weight impact symptom improvement, and attention to this co-morbidity may inform the development of more personalized treatments for bipolar disorder. abstract_id: PUBMED:28692905 Prediction of outcome of bright light treatment in patients with seasonal affective disorder: Discarding the early response, confirming a higher atypical balance, and uncovering a higher body mass index at baseline as predictors of endpoint outcome. Background: We tested the hypothesis that the early improvement in mood after the first hour of bright light treatment compared to control dim-red light would predict the outcome at six weeks of bright light treatment for depressed mood in patients with Seasonal Affective Disorder (SAD). We also analyzed the value of Body Mass Index (BMI) and atypical symptoms of depression at baseline in predicting treatment outcome. Methods: Seventy-eight adult participants were enrolled. The first treatment was controlled crossover, with randomized order, and included one hour of active bright light treatment and one hour of control dim-red light, with a one-hour washout. Depression was measured on the Structured Interview Guide for the Hamilton Rating Scale for Depression-SAD version (SIGH-SAD). The predictive associations of depression score changes after the first session, BMI, and atypical score balance with treatment outcomes at endpoint were assessed using multivariable linear and logistic regressions. Results: No significant prediction by changes in depression scores after the first session was found. However, higher atypical balance scores and BMI positively predicted treatment outcome. Limitations: Absence of a control intervention for the six weeks of treatment (only the first session in the laboratory was controlled). Exclusion of patients with comorbid substance abuse, suicidality and bipolar I disorder, and patients on antidepressant medications, reducing the generalizability of the study. Conclusion: Prediction of outcome by early response to light treatment was not replicated, and the previously reported prediction of baseline atypical balance was confirmed. BMI, a parameter routinely calculated in primary care, was identified as a novel predictor, and calls for replication and then exploration of possible mediating mechanisms. abstract_id: PUBMED:31661974 Diet quality, dietary inflammatory index and body mass index as predictors of response to adjunctive N-acetylcysteine and mitochondrial agents in adults with bipolar disorder: A sub-study of a randomised placebo-controlled trial. Aims: We aimed to explore the relationships between diet quality, dietary inflammatory potential or body mass index and outcomes of a clinical trial of nutraceutical treatment for bipolar depression. Methods: This is a sub-study of a randomised controlled trial of participants with bipolar depression who provided dietary intake data (n = 133). Participants received 16 weeks of adjunctive treatment with either placebo, N-acetylcysteine alone, or a combination of mitochondrial-enhancing nutraceuticals including N-acetylcysteine (combination treatment). Participants were followed up 4 weeks post-treatment discontinuation (Week 20).
Diet was assessed by the Cancer Council Victoria Dietary Questionnaire for Epidemiological Studies, Version 2, converted into an Australian Recommended Food Score to measure diet quality and an energy-adjusted dietary inflammatory index score to measure the inflammatory potential of the diet. Body mass index was also measured. Generalised estimating equation models were used to assess whether diet quality, energy-adjusted dietary inflammatory index score and/or body mass index were predictors of response to significant outcomes of the primary trial: depression symptoms, clinician-rated improvement and functioning measures. Results: In participants taking combination treatment compared to placebo, change in depression scores was not predicted by Australian Recommended Food Score, dietary inflammatory index or body mass index scores. However, participants with better diet quality (Australian Recommended Food Score) reported reduced general depression and bipolar depression symptoms (p = 0.01 and p = 0.03, respectively) and greater clinician-rated improvement (p = 0.02) irrespective of treatment and time. Participants who had a more anti-inflammatory dietary inflammatory index had less impairment in functioning (p = 0.01). Combination treatment may attenuate the adverse effects of a pro-inflammatory diet (p = 0.03) on functioning. Participants with lower body mass index who received combination treatment (p = 0.02) or N-acetylcysteine (p = 0.02) showed greater clinician-rated improvement. Conclusion: These data support a possible association between diet (quality and inflammatory potential), body mass index and response to treatment for bipolar depression in the context of a nutraceutical trial. The results should be interpreted cautiously because of limitations, including numerous null findings, modest sample size and the analyses being secondary. abstract_id: PUBMED:2618775 Lithium-induced changes in the body mass index. A total of 117 manic-depressives who had been on lithium for a mean duration of 4.7 years were examined before lithium therapy and subsequently at intervals. Information relating to pre-lithium height and weight and current weight was determined and used to calculate the body mass index (BMI) for each individual. Other relevant variables such as age, sex, cumulative lithium dose, duration of therapy, thyroid profile and serum lithium levels were recorded. The results indicated that, although there was a nonsignificant increase in BMI for the whole population, lithium and sex were not significant predictors of any increase in BMI. In nearly 27% of patients, BMI actually slightly decreased during lithium therapy. The overall conclusion from this study is that, in the population studied, lithium may not have exerted any pharmacological effect to increase BMI. abstract_id: PUBMED:23445513 Season of birth is associated with adult body mass index in patients with bipolar disorder. Cardiovascular risk factors, such as abdominal obesity and obesity in general, are very prevalent among patients with bipolar disorder (BD). Although long-term use of psychotropic medications is an important determinant of these risk factors, other evidence suggests that early development may interact with the mood disorder diathesis to exponentially increase the risk of obesity. The goal of our study was to test whether season of birth is associated with adult body mass index (BMI) and abdominal obesity in individuals with bipolar disorder.
We compared season of birth effects on BMI in 375 adult patients with bipolar disorder and 196 adult patients with unipolar major depression. We found a significant season of birth effect on BMI in patients with bipolar disorder, but not in patients with unipolar depression. In patients with bipolar disorder, season of birth was also associated with waist circumference, with a stronger effect in males. Season of birth affects adult BMI and waist circumference in patients with bipolar disorder, but not in patients with unipolar depression. Our results suggest that early environmental factors, yet to be identified, interact with specific neurobiological features of bipolar disorder to determine stable traits and disease risk factors in adult life. abstract_id: PUBMED:26529281 Differences in body mass index according to fat mass- and obesity-associated (FTO) genotype in Mexican patients with bipolar disorder. Objectives: The prevalence of obesity has dramatically increased in many countries, and it is particularly high in patients with bipolar disorder (BD). A region in the first intron of the fat mass- and obesity-associated (FTO) gene, encompassing markers rs9939973, rs8050136, and rs9939609, has been consistently associated with obesity and body mass index (BMI) in different populations. We sought to determine whether FTO is associated with BMI and/or obesity in patients with BD. Methods: The sample included 129 Mexican Mestizo patients with bipolar I or bipolar II disorder. After obtaining informed consent, participants were evaluated with the Structured Clinical Interview for DSM-IV Axis I Disorders, and weight, height, and body measurements were recorded. DNA was extracted from a 5-mL blood sample and real-time polymerase chain reaction was performed. The results were analyzed with Haploview v4.2 and SPSS v21. Results: Differences in mean BMI were explained by rs8050136 and rs9939609 genotypes, especially when comparing non-carriers and carriers of two copies of the risk allele (Tukey's p ≤ 0.019), with a mean difference in BMI as high as 7.81 kg/m². Differences in BMI were also explained by the interaction of the genotype (rs8050136 and/or rs9939609), the use of second-generation antipsychotics, and the use of mood stabilizers (p ≤ 0.41). Obesity was also associated with these two markers when patients with and without obesity were compared. Conclusions: In patients with BD, differences in BMI may be affected by the presence of FTO risk alleles, especially in individuals homozygous for these variants. Besides evaluating the possible metabolic effects of certain antipsychotics or mood stabilizers, it is important to evaluate the role of other factors such as FTO risk alleles.
Clinical predictors included demographics, anthropometrics, cardiac and blood measures, diet and exercise, physical and mental health, medication and BMI outcome measures. We included genetic polygenic risk scores (PRS) for schizophrenia, bipolar disorder, BMI, waist-hip ratio, insulin resistance and height, as well as gene co-expression modules generated by Weighted Gene Co-expression Network Analysis (WGCNA). The best-performing predictive models for BMI and BMI gain after one year used clinical data only, which suggests expression and genetic data do not improve prediction in this cohort. abstract_id: PUBMED:35232502 Body mass index in the middle-aged offspring of parents with severe mental illness. Background: People with severe mental illness (SMI) have an elevated risk of obesity, but the causes and mechanisms are unclear. We explored the familial association between parental SMI and body mass index (BMI) in middle-aged offspring. Our objective was to determine whether the offspring of either parent with SMI have an increased risk for obesity. Methods: The Northern Finland Birth Cohort 1966 is a cohort study of offspring with an expected date of birth in 1966. The data originally included 12 068 mothers and 12 231 children from the provinces of Lapland and Oulu in Finland. The final study sample included 5050 middle-aged offspring. Parental SMI was used as the exposure in the study. BMI measured at the age of 46 years was used as the primary outcome. Results: Risk for obesity was elevated in the offspring of mothers with SMI [overweight: adjusted odds ratio (OR) 1.93 (1.29-2.90), obese class I: 1.97 (1.20-3.25), obese classes II-III: 2.98 (1.67-5.33)]. For the offspring of either parent with SMI, statistically significant results were found in obese class I and obese classes II-III [overweight: adjusted OR 1.21 (0.94-1.54), obese class I: 1.52 (1.03-1.08), obese classes II-III: 1.53 (1.01-2.32)]. Conclusions: We found an elevated risk of obesity in the middle-aged offspring of either parent with SMI, especially in the offspring of mothers with SMI. Thus, there might be a common familial pathway leading to the co-occurrence of obesity and SMI. abstract_id: PUBMED:33267928 Does body mass index predict response to intravenous ketamine treatment in adults with major depressive and bipolar disorder? Results from the Canadian Rapid Treatment Center of Excellence. Background: Higher body mass index (BMI) has been found to predict greater antidepressant response to intravenous (IV) ketamine treatment. We evaluated the association between BMI and response to repeat-dose IV ketamine in patients with treatment-resistant depression (TRD). Methods: Adults (N = 230) with TRD received four infusions of IV ketamine at a community-based clinic. Changes in symptoms of depression (ie, Quick Inventory for Depressive Symptomatology-Self-Report 16; QIDS-SR16), suicidal ideation (SI; ie, QIDS-SR16 SI item), anxiety (ie, Generalized Anxiety Disorder-7 Scale), anhedonic severity (ie, Snaith-Hamilton Pleasure Scale), and functioning (ie, Sheehan Disability Scale) following infusions were evaluated. Participants were stratified by BMI as normal (18.0-24.9 kg/m²; n = 72), overweight (25-29.9 kg/m²; n = 76), obese I (30-34.9 kg/m²; n = 47), or obese II (≥35.0 kg/m²; n = 35). Results: Similar antidepressant effects with repeat-dose ketamine were reported between BMI groups (P = .261).
In addition, categorical partial response (P = .149), response (P = .526), and remission (P = .232) rates were similar between the four BMI groups. Conclusions: The findings are limited by the observational, open-label design of this retrospective analysis. Pretreatment BMI did not predict response to IV ketamine, which was effective regardless of BMI. Answer: Body mass index (BMI) has been studied as a potential predictor of outcome in patients with bipolar disorder (BD), and several findings suggest that it may have an association with prognosis and treatment response. One study found that a higher BMI in patients with BD was associated with a chronic course, longer duration of illness, lower scores on the Global Assessment of Functioning Scale, and greater disability. Overweight patients had more frequent comorbid anxiety disorders, diabetes mellitus type II, and hypertension. Those who achieved complete remission of symptoms on lithium had significantly lower BMI (PUBMED:19689507). Another study indicated that higher medical burden, which is often associated with higher BMI, predicted a lower likelihood of recovery from depression in BD patients, regardless of the type of psychotherapy received. Intensive psychotherapy yielded superior recovery rates for individuals of normal BMI compared with collaborative care, but not among individuals who were overweight or obese (PUBMED:26590023). Research on bright light treatment for Seasonal Affective Disorder, a condition that can co-occur with BD, found that higher BMI at baseline positively predicted treatment outcome, suggesting that BMI could be a novel predictor of response to certain treatments (PUBMED:28692905). A sub-study of a randomized placebo-controlled trial explored the relationships between diet quality, dietary inflammatory potential, or BMI and outcomes of nutraceutical treatment for bipolar depression. The study found that participants with better diet quality reported reduced depression symptoms and greater clinician-rated improvement, irrespective of treatment. Participants with lower BMI who received combination treatment or N-acetylcysteine showed greater clinician-rated improvement (PUBMED:31661974). However, not all studies have found a significant association between BMI and treatment outcomes in BD. For instance, one study reported that lithium therapy did not significantly increase BMI in the population studied (PUBMED:2618775). Another study found no significant prediction of treatment outcome by BMI in patients receiving intravenous ketamine for treatment-resistant depression (PUBMED:33267928). In summary, while there is evidence to suggest that BMI may be associated with the prognosis and outcome of BD, the relationship is complex and may vary depending on the type of treatment and other individual factors. Further research is needed to clarify the role of BMI in predicting outcomes in BD patients.
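Because every abstract in this block turns on body mass index, a quick sketch of the underlying arithmetic may help. BMI is weight in kilograms divided by the square of height in metres; the strata below follow the cut-offs used in the ketamine study (PUBMED:33267928). The example values are hypothetical.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_stratum(value: float) -> str:
    """Strata as reported in PUBMED:33267928 (values below 18.0
    fall outside that study's reported ranges)."""
    if value < 18.0:
        return "below study range"
    if value < 25.0:
        return "normal (18.0-24.9)"
    if value < 30.0:
        return "overweight (25-29.9)"
    if value < 35.0:
        return "obese I (30-34.9)"
    return "obese II (>=35.0)"

example = bmi(95, 1.75)  # ~31.0 kg/m^2
print(f"{example:.1f} -> {bmi_stratum(example)}")
```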
Instruction: Cohen cross-trigonal ureteral reimplantation: is a one-year postoperative renal ultrasound scan necessary after normal initial postoperative ultrasound findings? Abstracts: abstract_id: PUBMED:18372024 Cohen cross-trigonal ureteral reimplantation: is a one-year postoperative renal ultrasound scan necessary after normal initial postoperative ultrasound findings? Objectives: To determine the utility of the 1-year postoperative renal ultrasound scan in children undergoing uncomplicated Cohen cross-trigonal ureteral reimplantation. Methods: A retrospective review of 108 patients (233 ureters) who had undergone uncomplicated Cohen cross-trigonal ureteral reimplantation during a 6.5-year period with a postoperative follow-up of 1 year was performed. Renal ultrasound scans were obtained at 1 month and 1 year postoperatively. Results: Of the 108 patients who had undergone ureteral reimplantation, 26 patients (24%) had mild to moderate hydronephrosis at 1 month postoperatively. Of these 26 patients, 20 had preexisting hydronephrosis and 6 had new hydronephrosis. Of the 20 patients with preoperative hydronephrosis, 19 (95%) had similar hydronephrosis on the 1-month postoperative ultrasound scan and in 1 (5%), the hydronephrosis had resolved. At 1 year, 13 (69%) of the 20 patients with preoperative hydronephrosis had stable hydronephrosis, and it resolved in 6 (31%). In the 6 patients with new-onset hydronephrosis at 1 month postoperatively, the hydronephrosis had resolved by 1 year in 4 patients (67%) and was mild in 2 (33%). Three patients developed transient obstruction requiring stent placement. Conclusions: It has been recommended that a renal ultrasound scan be obtained in children 1 year after open ureteral reimplantation to rule out delayed obstruction. Our results suggest this is of low yield in patients with uncomplicated open ureteral reimplantation. abstract_id: PUBMED:35205018 Vesicoscopic Cross-Trigonal Ureteral Reimplantation for Vesicoureteral Reflux: Intermediate Results. For the treatment of vesicoureteral reflux, the introduction of vesicoscopic procedures offers new perspectives for improving patient comfort and quality. Our aim was to examine whether minimally invasive vesicoscopic cross-trigonal ureteral reimplantation (VCUR) would meet expectations. Between 2012 and 2021, 99 girls and 35 boys with high-grade vesicoureteral reflux (VUR) underwent VCUR. In two boys, we failed to establish the pneumovesicum, leading to conversion to open surgery. The mean age was 4.5 years, ranging from 10 months to 18 years. VCUR was successfully performed in 132 patients, including 75 patients with bilateral VUR and 12 children with double ureters with unilateral or bilateral VUR, corresponding to a total of 229 operated ureters. The mean operating time was 151 min for all patients. There were no perioperative complications, with the exception of three cases of pneumoperitoneum without consequences. Postoperatively, we recognized three cases of acute hydronephrosis, two of which required transient drainage. Three patients developed extravasation of urine after the postoperative removal of the transurethral catheter, rapidly resolved by new drainage. In two patients, we combined VCUR with laparoscopic heminephrectomy and contralateral laparoscopic nephrectomy, respectively. Overall, the mean postoperative hospital stay was 4.2 days. We observed recurrent VUR in seven ureters, resulting in a success rate for VCUR of 96.9%.
These results demonstrate the feasibility of VCUR and its potential to displace open surgery with high safety and wide applicability. abstract_id: PUBMED:11744485 Is postoperative cystography necessary after ureteral reimplantation? Objectives: To evaluate the patients in our practice to determine whether postoperative cystography was useful in monitoring the outcome after ureteroneocystostomy. Surgical repair of vesicoureteral reflux is the treatment of choice after medical therapy failure. Radiographic evaluation by ultrasonography or voiding cystourethrography (VCUG) is commonly used postoperatively to evaluate for urinary tract obstruction or persistent reflux. However, imaging modalities are not without cost, both in monetary terms and in terms of radiation exposure and trauma to the child. Methods: We reviewed the records of all patients who underwent ureteroneocystostomy without ureteral tapering at our institution between January 1, 1996 and December 31, 1999 for primary vesicoureteral reflux. These records were evaluated with respect to the type of surgical procedure, preoperative and postoperative clinical course, and radiographic studies performed. Results: We performed reimplantation on 267 renal units in 153 patients. The surgical technique was the Cohen cross-trigonal in 120 renal units (45%), Glenn-Anderson ureteral advancement in 92 (35%), and modified Leadbetter-Politano ureteral advancement in 55 (20%). All patients underwent imaging with ultrasonography within 6 weeks of surgery. The follow-up ranged from 4 to 42 months (average 14.2). Between 3 and 8 months postoperatively, 61 patients underwent imaging with VCUG. We identified persistent reflux in six renal units. Four of six had marked improvement in their reflux. All the patients with persistent reflux were asymptomatic, including 2 patients who were no longer receiving antibiotics. Four patients developed febrile urinary tract infections postoperatively. Three of the four underwent imaging with VCUG after treatment; all three had no evidence of reflux. Conclusions: In our population, the addition of VCUG to the postoperative evaluation did not allow us to identify those patients at risk of febrile urinary tract infections. Patients in whom persistent reflux was identified were all asymptomatic. We continue to monitor patients with ultrasonography, but believe that VCUG often provides little benefit to these children. abstract_id: PUBMED:31542465 Robot-assisted Laparoscopic Extravesical Cross-trigonal Ureteral Reimplantation With Tailoring for Primary Obstructive Megaureter. Objective: To describe a novel, minimally invasive surgical technique, robotic-assisted laparoscopic dismembered extravesical cross-trigonal ureteral reimplantation for primary obstructive megaureter and to report a case series of 13 patients. Methods: Thirteen patients between the ages of 10 and 96 months who were diagnosed with primary obstructive megaureter underwent robotic-assisted laparoscopic dismembered extravesical cross-trigonal ureteral reimplantation between April 2017 and May 2019. The principle of this technique is performing an extravesical cross-trigonal ureteral reimplantation with intracorporeal tailoring of the ureter. This provides the advantage of achieving a long tunnel mimicking the Cohen cross-trigonal ureteral reimplantation, without performing the open surgical technique and offers the potential benefits of minimally invasive surgery. 
The surgical technique is described, and demographic data and intra- and postoperative parameters are reported. Results: Median age was 26 months (IQR 16-60). Median weight was 15 kg (IQR 10.1-31). Median console time was 113 minutes (IQR 90.5-140). Median postoperative stay was 2.5 days (IQR 1.3-3). Median ureteral diameter decreased from 17 mm (IQR 12.5-18.5) to 3 mm (IQR 0-6.5, P = .001). Median renal pelvis diameter decreased from 28 mm (IQR 20.5-37.8) to 4 mm (IQR 1.5-5, P = .005). Median renal function before surgery was 46% (IQR 24.5-48.5) and following surgery was 42% (IQR 36-42, P = .700). Median T1/2 decreased from 28 minutes (IQR 19.3-30) to 4.5 minutes (IQR 3-5, P = .009). Postoperative complications graded by the Clavien-Dindo classification were apparent in 3 patients (21%). One had a grade I complication (fever, resolved spontaneously) and 2 had a grade II complication (urinary tract infection). Conclusion: Robot-assisted cross-trigonal ureteral reimplantation with intracorporeal tailoring is safe, feasible, and reproducible in short-term follow-up. abstract_id: PUBMED:10458466 Cystography after the Cohen ureterovesical reimplantation: is it necessary at a training center? Purpose: Reimplantation by the Cohen procedure has a low rate of recurrent reflux, although postoperative cystography is done routinely at most centers. According to the French training program for pediatric surgery and urology residents, reimplantation is the main pediatric urology procedure performed during residency. We determined whether it is necessary to perform postoperative cystography routinely and whether the fact that the procedure is done by a junior surgeon modifies management. Materials And Methods: A total of 268 children with primary vesicoureteral reflux underwent ureteral reimplantation by the Cohen transtrigonal technique. Bilateral reimplantation was done in 97% of the cases. Reimplantation was performed by a surgery resident assisted by a clinical fellow or senior consultant surgeon in 37% of the cases. Routine cystography and renal ultrasound were done in all patients postoperatively. Followup ranged from 6 months to 5 years (mean 10 months). Results: In the 2 children (0.7%) with recurrent reflux, surgery had not been performed by a resident. One of the 2 children had asymptomatic persistent reflux and no further surgery was done. In the other child, postoperative cystography was normal at 6 months. One year later she had acute pyelonephritis with recurrent unilateral reflux and underwent repeat reimplantation. Conclusions: Routine cystography is not necessary after bilateral Cohen reimplantation. Reflux recurrence is low, even at a training center where surgery may be performed by junior surgeons. abstract_id: PUBMED:6854798 Initial results with the Cohen cross-trigonal ureteroneocystotomy. Between January 1976 and December 1980, 109 children with 157 refluxing ureters were seen in consultation. Of these ureters, 52 were operated upon using the Cohen cross-trigonal technique, while 105 were followed conservatively. The operative procedure is a complete intravesical ureteral mobilization followed by the creation of a submucosal tunnel across the base of the bladder. Followup studies, including an excretory urogram and voiding cystourethrogram, showed minimal hydronephrosis in 1 ureter and persistent grade I reflux in 1 ureter. Evaluation of these initial results indicates that the Cohen cross-trigonal ureteroneocystotomy is a safe and effective antireflux procedure.
abstract_id: PUBMED:32666258 Detection of distal ureteral stones in pregnancy using transvaginal ultrasound. Aims: To determine the performance of transvaginal ultrasound for the visualization of distal ureteral stones in pregnant patients with renal colic and to evaluate the diagnostic value of secondary findings suggestive of obstructing ureteral stone disease. Methods: We retrospectively identified 129 pregnant patients with a total of 142 encounters with both abdominal and transvaginal ultrasound. Ultrasound images for each patient were reviewed, recording the presence of a stone with its location, hydronephrosis, resistive indices (RI), and the status of the ureteral jets. Patients were subcategorized into two groups based on the visualization of a distal ureteral stone. Results: The transvaginal technique identified 94% (N = 16/17) of sonographically detected stones in the distal ureter/urethra, while the transabdominal technique identified 29% (N = 5/17). The combined imaging for initial assessment of renal colic in pregnancy demonstrated a sensitivity of 89%, specificity of 100%, and negative predictive value (NPV) of 98%. The frequency of hydronephrosis was statistically greater in the visualized stone group (94% vs 51%). Mean RI was identical in both groups; however, the delta RI was significantly elevated in those patients with distal ureteral stones, with a mean delta RI value of 0.05. The rate of absence of ureteral jets was not statistically significant. Conclusion: The present data suggest a utility of transvaginal ultrasound for the evaluation of the pregnant patient, with 94% of distal stones being detected transvaginally versus 29% transabdominally. Additionally, there was significantly increased hydronephrosis and elevated RIs in patients with distal ureteral stones. abstract_id: PUBMED:38020508 Role of ultrasound and MRI in the evaluation of postoperative rotator cuff. Rotator cuff tears are common shoulder injuries in patients above 40 years of age, causing pain, disability, and reduced quality of life. Most recurrent rotator cuff tears happen within three months. Surgical repair is often necessary in patients with large or symptomatic tears to restore shoulder function and relieve symptoms. However, 25% of patients experience pain and dysfunction even after successful surgery. Imaging plays an essential role in evaluating patients with postoperative rotator cuff pain. Ultrasound and magnetic resonance imaging are the most commonly used imaging modalities for evaluating the rotator cuff. Ultrasound is sometimes the preferred first-line imaging modality, given its easy availability, lower cost, ability to perform dynamic tendon evaluation, and reduced post-surgical artifacts compared to magnetic resonance imaging. It may also be superior in terms of earlier diagnosis of smaller re-tears. Magnetic resonance imaging is better for assessing the extent of larger tears and for detecting other complications of rotator cuff surgery, such as hardware failure and infection. However, postoperative imaging of the rotator cuff can be challenging due to the presence of hardware and the variable appearance of the repaired tendon, which can be confused with a re-tear. This review aims to provide an overview of the current practice and findings of postoperative imaging of the rotator cuff using magnetic resonance imaging and ultrasound. We discuss the advantages and limitations of each modality and the normal and abnormal imaging appearance of the repaired rotator cuff tendon. 
abstract_id: PUBMED:28392131 Modern flexible ureteroscopy in Cohen cross-trigonal ureteral reimplantations. Purpose: We describe a feasible flexible ureteroscopy (fURS) technique with the latest instruments and discuss its advantages. Methods: Three patients underwent fURS for stone treatment. A 7F angled-orifice catheter and a hydrophilic angled-tip stiff wire are used to guide the wire in the proper ureteral direction while sighting the ureter, allowing the use of a 10/12F ureteral access sheath. A single-use ureteroscope was used. Results: All three patients had successful ureteral access and laser lithotripsy and were rendered endoscopically stone-free. No complications were reported. Conclusion: The modern fURS technique was found feasible and safe in patients with cross-trigonal ureteroneocystostomy. abstract_id: PUBMED:16145456 Is long-term sonographic followup necessary after uncomplicated ureteral reimplantation in children? Purpose: We examined the necessity of postoperative ultrasound following surgical correction of vesicoureteral reflux beyond initial postoperative assessment. The followup among children who have undergone correction of vesicoureteral reflux has varied, and currently there are no standards to document how long postoperative monitoring for hydronephrosis, renal scarring or renal growth should continue. Materials And Methods: The study population included 128 children who underwent surgical correction of primary vesicoureteral reflux between 1992 and 2002. Data were collected as part of a retrospective chart review and included age at surgery, preoperative grade of reflux, indications for surgery, type of surgical correction, postoperative voiding cystourethrogram and ultrasound results, and postoperative course relative to urinary tract infections, incontinence and other symptoms. Results: Of 128 patients, 4 had postoperative reflux on voiding cystourethrogram. In each of these cases, reflux either resolved completely or remained stable during postoperative followup. A total of 17 patients had grade 2 or 3 hydronephrosis on the initial 3-month postoperative ultrasound. In all of these cases, hydronephrosis remained stable or improved during followup. In no case was there evidence of development of new hydronephrosis or worsening of previously established hydronephrosis beyond the 1-year postoperative ultrasound. Conclusions: Our data indicate that followup of uncomplicated ureteral reimplantation in children more than 1 year postoperatively is not warranted. The elimination of studies beyond 1 year following surgery would result in significant cost savings. Answer: The necessity of a one-year postoperative renal ultrasound scan after Cohen cross-trigonal ureteral reimplantation with normal initial postoperative ultrasound findings is debated. A retrospective review of 108 patients who underwent uncomplicated Cohen cross-trigonal ureteral reimplantation showed that a one-year postoperative renal ultrasound scan may be of low yield in patients with uncomplicated open ureteral reimplantation, as most patients with preoperative hydronephrosis had stable hydronephrosis or resolution by one year, and those with new-onset hydronephrosis at one month postoperatively also showed resolution or mild hydronephrosis by one year (PUBMED:18372024). This suggests that in cases where the initial postoperative ultrasound is normal, a follow-up scan at one year may not be necessary. 
Additionally, another study examining the necessity of postoperative ultrasound following surgical correction of vesicoureteral reflux found that in cases of uncomplicated ureteral reimplantation, follow-up beyond one year postoperatively was not warranted, as there was no evidence of development of new hydronephrosis or worsening of previously established hydronephrosis beyond the one-year postoperative ultrasound (PUBMED:16145456). This further supports the view that a one-year postoperative renal ultrasound may not be necessary after normal initial postoperative ultrasound findings. Therefore, based on these findings, it appears that a one-year postoperative renal ultrasound scan may not be routinely necessary after uncomplicated Cohen cross-trigonal ureteral reimplantation if the initial postoperative ultrasound findings are normal. However, individual patient circumstances and clinical judgment should always be considered when making follow-up decisions.
Instruction: Does practice make perfect? Abstracts: abstract_id: PUBMED:10185146 Practice made perfect. The perfect practice management system? Users say it is one that integrates financial, operational, and clinical information within a practice, and at the same time connects the practice to other parts of the organization. abstract_id: PUBMED:29927497 Towards perfect NMR: Spin-echo versus perfect-echo building blocks. The development of new tools to improve the quality of nuclear magnetic resonance (NMR) spectra is a challenging task. The concept of "perfect NMR" includes the design of robust pulse sequences that allow an investigator to obtain undistorted pure in-phase signals, with pure absorption lineshapes that are free of phase anomalies derived from undesired J modulations. Here, alternative NMR building blocks to the spin-echo that are based on a general double SE module, known as a perfect-echo, are reviewed. Several implementations to minimize/remove unwanted dispersive contributions in homonuclear and heteronuclear NMR experiments are described and illustrated with some examples of broad interest for small molecules. abstract_id: PUBMED:30545038 Graphene-Based Perfect Absorption Structures in the Visible to Terahertz Band and Their Optoelectronics Applications. Graphene has unique properties which make it an ideal material for photonic and optoelectronic devices. However, the low light absorption in monolayer graphene seriously limits its practical applications. In order to greatly enhance the light absorption of graphene, many graphene-based structures have been developed to achieve perfect absorption of incident waves. In this review, we discuss and analyze various types of graphene-based perfect absorption structures in the visible to terahertz band. In particular, we review recent advances and optoelectronic applications of such structures. Indeed, the graphene-based perfect absorption structures offer the promise of solving the key problem which limits the applications of graphene in practical optoelectronic devices. abstract_id: PUBMED:38481794 Perfect Matchings with Crossings. For sets of n points, n even, in general position in the plane, we consider straight-line drawings of perfect matchings on them. It is well known that such sets admit at least C_(n/2) different plane perfect matchings, where C_(n/2) is the (n/2)-th Catalan number. Generalizing this result, we are interested in the number of drawings of perfect matchings which have k crossings. We show the following results. (1) For every k ≤ (1/64)n^2 - (35/32)n^(3/2) + (1225/64)n, any set with n points, n sufficiently large, admits a perfect matching with exactly k crossings. (2) There exist sets of n points where every perfect matching has at most (5/72)n^2 - n/4 crossings. (3) The number of perfect matchings with at most k crossings is superexponential in n if k is superlinear in n. (4) Point sets in convex position minimize the number of perfect matchings with at most k crossings for k = 0, 1, 2, and maximize the number of perfect matchings with (n/2 choose 2) crossings and with (n/2 choose 2) - 1 crossings. abstract_id: PUBMED:32428311 A case of ingrown nail using buckle VHO-Osthold® Perfect as a method of conservative treatment. Background: Ingrowing nail is an ailment of the toe area. This problem is observed in 20% of the population, in all age groups, but most often occurs in teenagers and young adults. The process of nail ingrowth stimulates the natural defense mechanisms of the body in the form of inflammation and severe pain. 
Aims: The aim of this paper was to assess the effectiveness of the VHO-Osthold® Perfect buckle as an alternative to nail plate surgery and to determine the patient's comfort during this method of treatment. Patients/methods: A descriptive case study was conducted in a 15-year-old patient who had ingrowing nails in the big toes of both feet. Results: It has been shown that the VHO-Osthold® Perfect buckle therapy constitutes an effective method for ingrowing nails and one of the few conservative methods in orthonyxia, serving as an alternative to surgery. Conclusions: The study and clinical experience confirm that the therapy of ingrowing nails with the VHO-Osthold® Perfect buckle is painless and noninvasive. This treatment can be safely and effectively carried out by a qualified podiatrist or cosmetologist in podological practice. abstract_id: PUBMED:35663521 Too perfect to be good? An investigation of magicians' Too Perfect Theory. The "Too Perfect Theory" states that if a trick is too perfect, it might paradoxically become less impressive, or give away its secret method. This theory suggests that an increased impossibility results in a less magical effect. The Too Perfect Theory is often applied to magic effects, but it conflicts with recent scientific investigations showing that participants' level of enjoyment of a magic performance is positively related to their perceived impossibility of the trick. The current article investigated whether an imperfect magic performance is more impressive than a perfect one. Across two experiments, we studied whether participants enjoy a performance more if the effect is not perfect. We also examined the different types of explanations people give to these two types of performances. The results showed that participants enjoyed a perfect performance more than an imperfect one. However, consistent with the Too Perfect Theory, participants watching the perfect performance also discovered the correct method behind the magic trick more frequently and believed the performance was staged more often. Moreover, participants' method explanation significantly impacted their reports about the performance. abstract_id: PUBMED:31692456 Perfect colorings of patterns with multiple orbits. This paper studies colorings of patterns with multiple orbits, particularly those colorings where the orbits share colors. The main problem is determining when such colorings become perfect. This problem is attacked by characterizing all perfect colorings of patterns through the construction of necessary and sufficient conditions for a coloring to be perfect. These results are then applied to symmetrical objects to construct both perfect and non-perfect colorings. abstract_id: PUBMED:36397838 Scientific counterfactuals as make-believe. Counterfactuals abound in science, especially when reasoning about and with models. This often requires entertaining counterfactual conditionals with nomologically or metaphysically impossible antecedents, namely, counternomics or counterpossibles. In this paper I defend the make-believe view of scientific counterfactuals, a naturalised fiction-based account of counterfactuals in science which provides a means to evaluate their meanings independently of the possibility of the states of affairs their antecedents describe, and under which they have non-trivial truth-values. Fiction is here understood as imagination (in contrast with its most typical association with falsity), characterised as a propositional attitude of pretense or 'make-believe' (Walton 1990). 
The application of this theory to scientific counterfactuals makes their evaluation a game of make-believe: a counterfactual is (fictionally) true iff its antecedent and the rules of the game prescribe the imagining of its consequent (Kimpton-Nye 2020). The result is a practice-based account of counterfactuals and counterfactual reasoning in science which incorporates insights from theoretical and experimental analytic philosophy as well as cognitive science. This way, the make-believe view of scientific counterfactuals shows that the evaluation of scientific counterfactuals is none other than a question of scientific representation in disguise. abstract_id: PUBMED:33840334 Perception of the most perfect female breast shape among Malays, Chinese, and Indians community. Researchers have long searched for the 'perfect' aesthetic outcome of breast surgery. Although many anthropometric studies have been done, there is no consensus on the harmony, abstract qualities, and proportions that make up the ideal shape. This study was conducted via a survey to find the most perfect breast shape by assessing the upper pole contour and upper pole to lower pole ratio (UP:LP) preference among the Malay, Chinese, and Indian races. One hundred thirty-five individuals took part in this survey. Most participants generally preferred a more convex upper pole contour, with the Malays (50%) and Indians (35.7%) favoring a more convex shape, while the Chinese preferred it to be just slightly convex (42.8%). Single unmarried individuals (64.7%) significantly preferred a more convex upper pole contour (p = 0.018). The UP:LP ratio of 45:55 (p = 0.002) was the most significantly preferred proportion (37.8%), a group largely composed of the Chinese (51%), followed by the Indians (21.6%) and Malays (19.6%). Given these findings, a controlled perception-preference method is preferable when describing an ideal breast shape, as compared to an anthropometric measurement that might be less accurate. Therefore, breast and plastic surgeons alike need to look beyond the anthropometric numbers and should consider the 'abstract' aesthetic qualities that are difficult to measure, including an upper pole with more convexity and a UP:LP ratio of 45:55, which was shown to be the most aesthetically perfect form, as agreed in the current literature. abstract_id: PUBMED:9726180 Will evidence-based nursing practice make practice perfect? Evidence-based practice, or evidence-based decision-making, is rapidly developing as a growth industry in nursing and the health professions more widely. It has its origins in the work of the British epidemiologist Archie Cochrane and has recently been re-energized in Canada by the National Forum on Health and its call for a culture of evidence-based decision-making. Before we adopt evidence-based nursing (EBN) as a mantra for the 21st century, we should examine its origins and its consequences, and we should probe related concepts, 2 of which are the nature and structure of practice-based knowledge and the nature and structure of evidence generally. Findings of a recent survey of nurses in western Canada are used to illustrate that nurses use a broad range of practice knowledge, much of which is experientially based rather than research-based. Answer: The concept of "practice makes perfect" is a common adage suggesting that repeated practice or experience in an activity or skill can lead to mastery or proficiency. 
However, the abstracts provided do not directly address the validity of this saying in a general sense. Instead, they discuss specific contexts where the idea of "perfection" is relevant. For instance, the abstract from PUBMED:10185146 discusses the idea of a "perfect practice management system" in healthcare, which integrates various types of information within a practice and connects it to other parts of the organization. This suggests that in the context of healthcare management, the pursuit of an integrated and efficient system is seen as a path to achieving "perfect" practice operations. In the field of nuclear magnetic resonance (NMR), the pursuit of "perfect NMR" involves designing robust pulse sequences to obtain undistorted signals, as mentioned in PUBMED:29927497. This indicates that in scientific research, practice in the form of methodical experimentation and refinement can lead to improved outcomes, which might be considered "perfect" in the context of the goals of the research. The abstract from PUBMED:35663521 explores the "Too Perfect Theory" in magic performances, suggesting that a trick that appears too perfect may paradoxically become less impressive or reveal its secret. This implies that in the art of magic, the notion of "perfect" may not always align with the audience's enjoyment or the effectiveness of a performance. While these abstracts touch on the concept of perfection in various domains, they do not provide empirical evidence or a broad philosophical discussion on whether practice universally leads to perfection. Therefore, based on the provided abstracts, it cannot be conclusively stated that practice makes perfect in a general sense. Instead, the pursuit of perfection is context-dependent and may have different implications across different fields and activities.
Instruction: Is a one-year follow-up an efficient method for better management of MRI BI-RADS(®) 3 lesions? Abstracts: abstract_id: PUBMED:27212693 Is a one-year follow-up an efficient method for better management of MRI BI-RADS(®) 3 lesions? Objectives: To evaluate the necessity of a prolonged follow-up after one year for lesions incidentally diagnosed as Breast Imaging Reporting and Data System (BI-RADS)(®) 3 on magnetic resonance imaging (MRI). Methods: The institutional review board approved this study (CRC_june 2014_ project 11). We retrospectively analysed 218 consecutive MRI BI-RADS(®) 3 lesions in 110 women followed up at 6, 12, 18, and/or 24 months from 2011 through 2015. We excluded MRI scans of BRCA mutation carriers, scans performed for staging before treatment, and scans of patients undergoing therapy for breast cancer. Second-look ultrasound and mammography were normal. We analysed 43 masses, 46 foci and 129 non-mass enhancements. In the follow-up, the image was scored as BI-RADS(®) 2 if it was completely resolved, BI-RADS(®) 3 if it was stable, and BI-RADS(®) 4 if a suspicious change occurred. Results: Forty-four lesions (20%) completely resolved during the follow-up. Six were scored as BI-RADS(®) 4: five at 6 months (benign) and one at 12 months (8 mm invasive ductal carcinoma). One hundred and sixty-eight stable lesions were assigned BI-RADS(®) 2 at 18 or 24 months. The positive predictive value of cancer for BI-RADS(®) 3 lesions is 0.5% (95% CI [0.08-2.55]) in our study. Conclusions: Our results suggest that a one-year follow-up instead of two years is sufficient for BI-RADS 3 lesions, provided that rigorous inclusion criteria are used. Given the very low PPV of the BI-RADS(®) 3 lesions incidentally found on MRI, further studies with very large inclusion numbers are necessary to establish new guidelines for MRI follow-up. abstract_id: PUBMED:38293846 Follow-up results of BI-RADS 3 lesions on magnetic resonance imaging: a retrospective study. Purpose: The categorization of Breast Imaging Reporting and Data System (BI-RADS) 3 lesions is not as clear in magnetic resonance imaging (MRI) as it is in mammography (MG). With the increasing number of MRI scans currently being conducted globally, incidentally detected lesions falling into the probably benign category are frequently being observed. In this study, our aim was to investigate the imaging characteristics and follow-up results of BI-RADS 3 lesions detected by MRI and to determine their malignancy rates. Methods: Breast MRI scans performed between January 2010 and January 2020 and classified as BI-RADS 3 lesions were retrospectively analyzed. The study included 216 lesions with known biopsy or surgical excision results or with at least one year of radiological follow-up. We assessed the patients' age, the presence of breast cancer, the follow-up interval, and the imaging findings at the beginning and during the follow-up. Lesions that remained stable, disappeared, or decreased in size and had a benign histopathological diagnosis were classified as benign. Lesions with the histopathological diagnosis of malignancy, identified by either biopsy or surgical excision, were classified as malignant. We determined the malignancy rate based on the histopathology and follow-up results. Results: Considering the follow-up results of all cases, 8% of lesions were excised, 0.5% decreased in size, 1.4% became enlarged, 17.1% disappeared, and 73% remained stable. The malignancy rate was 2.8%. 
A significant relationship was found between lesion shape and malignancy, as progression to malignancy was more likely in round lesions than in other types. An irregular margin, heterogeneous enhancement, and kinetic curve (type 2) features were significant for lesion upgrade to malignancy. Conclusion: The malignancy rate in BI-RADS 3 lesions detected by MRI is low and falls within the accepted cancer rate for MG and sonography. Changes in size, morphology, and enhancement pattern should be considered in terms of malignancy development during follow-up. The follow-up intervals should be determined on a case-by-case basis. abstract_id: PUBMED:33993330 Synchronous BI-RADS category 3 lesions detected by preoperative breast MRI in patients with breast cancer: may follow-up be adequate? Objective: The purpose of this study was to analyze the rate of malignancy of synchronous Breast Imaging Reporting and Data System (BI-RADS) category 3 lesions identified by preoperative magnetic resonance imaging (MRI) in patients with breast cancer that were followed up rather than biopsied. Methods: From electronic medical records, we identified 99 patients treated in our institution for whom preoperative breast MRI identified synchronous BI-RADS 3 lesions. Lesion characteristics, rate of second-look ultrasonography (US), rate of collegial decision-making, and rate of biopsies performed during the period of monitoring were analyzed. Results: Second-look US was performed in 96 of 99 patients and did not reveal any lesion. Collegial decision-making for follow-up validation was requested in 32% of cases. The median time to last MRI was 24.4 months (interquartile range [IQR] [19.3; 36.3]). The median follow-up was 39 months (IQR [28; 52]). Two cancers were diagnosed, one at 5 months and one at 26 months of follow-up. The incidence of malignancy of followed-up synchronous BI-RADS category 3 lesions was 1.0% (95% CI [0.1%; 7.1%]) at 6 months and 2.2% (95% CI [0.6%; 8.6%]) at 30 months. Conclusion: Monitoring could be proposed for synchronous BI-RADS category 3 lesions detected in preoperative breast cancer patients. A continued follow-up beyond 2 years could be of benefit. Key Points: • Follow-up can be proposed for MRI BI-RADS category 3 lesions not detected at second-look ultrasound, possibly after a collegial decision. • Follow-up should be continued annually since cancer may occur beyond 2 years. abstract_id: PUBMED:31799867 Frequency and Cancer Yield of BI-RADS Category 3 Lesions Detected at High-Risk Screening Breast MRI. OBJECTIVE. The purpose of this study was to evaluate the frequency and cancer yield of BI-RADS category 3 lesions in baseline versus nonbaseline (those with at least one prior) MRI screening examinations. MATERIALS AND METHODS. Consecutive MRI screening examinations performed from 2011 through 2015 were reviewed. Pearson and Wilcoxon tests were used to examine differences in age, breast density, screening indication, background parenchymal enhancement, and cancer yield between baseline and nonbaseline MRI BI-RADS category 3 assessments. Multivariate logistic regression models based on generalized estimating equations were used to assess the odds of receiving a BI-RADS 3 assessment as a function of the variables. RESULTS. Of 6672 MRI screening examinations of 3214 patients, 202 examinations (3%) were assessed BI-RADS category 3. Among baseline examinations, 8% (82/983) were assessed BI-RADS 3, compared with 2% (120/5689) of nonbaseline examinations (p < 0.001). 
Among the total BI-RADS 3 examinations, 6% (13/202) yielded malignancy of the lesion that had been assessed BI-RADS 3; 12 of 13 cancers were stage 0 or I at diagnosis. The cancer yield of BI-RADS 3 at baseline examinations was 2% (2/82), compared with 9% (11/120) for nonbaseline examinations (p = 0.056). Ten of 13 examinations were upgraded at or before 6-month follow-up MRI. CONCLUSION. Baseline screening breast MRI examinations are associated with a significantly higher rate of BI-RADS category 3 assessments than are nonbaseline examinations. Most cancers diagnosed at follow-up of BI-RADS 3 lesions are in an early stage and are diagnosed at or before the 6-month follow-up examination. When used judiciously, short-interval follow-up MRI is an appropriate method for identifying early-stage breast cancer while avoiding unnecessary biopsies with benign findings. abstract_id: PUBMED:31604077 Breast MRI BI-RADS 3: Impact of Patient-Level Factors on Compliance With Short-Term Follow-Up. Purpose: A BI-RADS 3 assessment on breast MRI is given when a finding is estimated to have less than a 2% chance of breast cancer. Patients in this category are typically recommended to return for a 6-month follow-up MRI. Compliance with this recommendation is low, and we aim to understand which factors are associated with compliance. Materials And Methods: All patients with an MRI examination assigned BI-RADS category 3 between February 1, 2011, and June 30, 2016, were retrospectively reviewed. Patient demographics and breast-related medical history were extracted from the electronic medical record. Patients presenting for follow-up MRI between 3 and 10 months were considered compliant. Univariate and multivariate analyses were performed to identify which patient-level factors were associated with compliance with follow-up MRI. Results: Overall, 190 women with a BI-RADS 3 assessment on MRI were included in the study. Of these women, 106 were compliant with the recommended follow-up MRI (57.3%), 34 had delayed follow-up (18.4%), and 45 were noncompliant (24.3%). Reason for examination, personal history of breast cancer, and family history of breast cancer were significantly associated with compliance. Conclusions: We found that 75.7% of patients had a follow-up MRI after a BI-RADS 3 assessment, but only 57.3% were timely in their follow-up. Our data suggest that there may be subsets of patients who would benefit from additional support and resources to help increase overall compliance and timely compliance. abstract_id: PUBMED:36247094 BI-RADS 3 Assessment on MRI: A Lesion-Based Review for Breast Radiologists. Unlike mammography and US, limited data exist to establish well-defined criteria for MRI findings that have a ≤2% likelihood of malignancy. Therefore, determining which findings are appropriate for a BI-RADS 3 assessment on MRI remains challenging and variable among breast radiologists. Emerging data suggest that BI-RADS 3 should be limited to baseline MRI examinations (or examinations with less than two years of prior comparisons) performed for high-risk screening and only used for masses with all of the typical morphological and kinetic features suggestive of a fibroadenoma or for dominant enhancing T2 hypointense foci that are distinct from background parenchymal enhancement and without suspicious kinetics. This article presents an updated discussion of the BI-RADS 3 assessment (probably benign) for breast MRI using current evidence. 
abstract_id: PUBMED:38424837 Image-guided Procedure Versus 2-year Follow-up for a BI-RADS 3 Probably Benign Lesion: A Cost Comparison Analysis. Objective: To compare the costs of image-guided breast procedures versus 2-year imaging follow-up for findings classified as BI-RADS assessment category 3 (probably benign). Methods: The national payment amount costs at non-facility locations were obtained from the Centers for Medicare and Medicaid Services physician fee schedule for breast imaging-related Current Procedural Terminology codes. Total costs were calculated and compared for management algorithms of 2-year imaging follow-up of a BI-RADS 3 lesion from 2018 through 2019 versus performing an image-guided procedure of the lesion in 2018 after the initial diagnostic imaging. Results: Two-year mammographic follow-up of a BI-RADS 3 finding costs $484. This was less than a stereotactic-guided breast biopsy, which cost at least $1055. Two-year follow-up for a probably benign US finding cost $615 compared to $1173 for the least expensive US-guided breast biopsy scenario. For breast MRI, 2-year imaging follow-up cost $1510, which was also less than most MRI-guided breast biopsy scenarios. The one exception in which biopsy costs less than 2-year imaging follow-up was in the setting of an MRI-guided biopsy in the average-risk population without a post-benign biopsy follow-up breast MRI; in this setting, MRI biopsy cost $1235. Conclusion: In 2018-2019, 2-year imaging follow-up of a BI-RADS 3 finding continues to be less costly than an immediate procedure, except for MRI-guided breast biopsy in the average-risk population without a post-benign biopsy follow-up MRI. abstract_id: PUBMED:28248009 Low Compliance in a Health-Conscience Spending Era Likely Helps Obviates the Need for 6 month BI-RADS 3 Breast MRI Follow-up After 1 year. The goal of our IRB-approved study was to assess whether a follow-up MRI every 6 months for 2 years is the most appropriate short-interval follow-up schedule. A total of 203 breast MRI exams were performed from October 2009 to January 2014 as part of a BI-RADS 3 follow-up, representing 2.6% of all breast MRIs (7,822) performed. We performed a retrospective longitudinal medical records review of compliance; malignancy rate of BI-RADS 3 exams; and average time and number of breast MRIs necessary prior to definitive disposition. While 77.8% eventually returned, only 45.5% of patients were compliant with follow-up at or near 6 months (4.5-7.5 months). Of those who eventually returned, it took an average of 1.31 follow-up MRIs (95% CI: 1.20-1.43 exams) and 10.3 months (95% CI: 9.0-11.7 months) before definitive disposition. Overall, 93.5% of initial findings were dispositioned as benign after two follow-up MRI exams (malignancy rate: 0.98%). Our results lend support to the possibility that the follow-up interval for BI-RADS 3 breast MRIs could be lengthened to 12 months if additional follow-up MRIs are necessary after the first year of 6-month follow-up breast MRIs. Foremost, this appears to be a safe follow-up alternative since benign definitive disposition can usually be made in less than 1 year. Supplemental reasons include persistent low patient compliance (as redemonstrated in our study) and the higher cost of breast MRI compared to mammogram/ultrasound follow-up. Finally, this paper's findings further support the suggested MRI follow-up interval in the newest BI-RADS atlas. abstract_id: PUBMED:29399419 BI-RADS 3: Current and Future Use of Probably Benign. 
Purpose Of Review: Probably benign (BI-RADS 3) causes confusion for interpreting physicians and referring physicians and can induce significant patient anxiety. The best uses and evidence for using this assessment category in mammography, breast ultrasound, and breast MRI will be reviewed; the reader will have a better understanding of how and when to use BI-RADS 3. Recent Findings: Interobserver variability in the use of BI-RADS 3 has been documented. The 5th edition of the BI-RADS atlas details the appropriate use of BI-RADS 3 for diagnostic mammography, ultrasound, and MRI, and discourages its use in screening mammography. Data mining, elastography, and diffusion-weighted MRI have been evaluated to maximize the accuracy of BI-RADS 3. Summary: BI-RADS 3 is an evolving assessment category. When used properly, it reduces the number of benign biopsies while allowing the breast imager to maintain a high sensitivity for the detection of early-stage breast cancer. abstract_id: PUBMED:35392255 Value of BI-RADS 3 Audits. Objectives: BI-RADS 3 is an established assessment category in which the probability of malignancy is equal to or less than 2%. However, monitoring adherence to imaging criteria can be challenging and there are few established benchmarks for auditing BI-RADS 3 assignments. In this study, we explore some parameters that could serve as useful tools for quality control and clinical practice management. Materials And Methods: This retrospective study covered a 4-year period (Jan 2014-Dec 2017) and included all women over 40 years who were recalled from a screening exam and had an initial assignment of BI-RADS 3 (probably benign) category after diagnostic workup. A follow-up period of 2 years following the assignment of BI-RADS 3 was used for quantitative quality control metrics. Results: Among 135,765 screening exams, 13,453 were recalled and 1,037 BI-RADS 3 cases met inclusion criteria. The follow-up rate at 24 months was 86.7%. The upgrade rate was 7.4% (77/1,037) [CI: 5.9-9.2%] and the PPV3 was 33.8% (26/77) [CI: 23.4-45.5%]. The cancer yield was 2.51% (26/1,037) [CI: 1.64-3.65%] and did not differ (p = 0.243) from the 2% probability of malignancy. The initial BI-RADS 3 rate per screening exam and per recall from screening was 0.76% (1,037/135,765) [CI: 0.72-0.81%] and 7.7% (1,037/13,453) [CI: 7.26-8.17%], respectively. Conclusion: Regular audit of BI-RADS 3 metrics has the potential to provide additional insights for clinical practice management. Data from varied clinical settings with input from an expert committee could help establish benchmarks for these metrics. Answer: Yes, a one-year follow-up appears to be an efficient method for the management of MRI BI-RADS® 3 lesions. Several studies have evaluated the necessity and outcomes of follow-up intervals for these lesions, which are considered probably benign with a low likelihood of malignancy. One study suggests that a one-year follow-up, instead of two years, is sufficient for BI-RADS 3 lesions when using rigorous inclusion criteria. This is based on the finding that the positive predictive value (PPV) of cancer for BI-RADS® 3 lesions incidentally found on MRI is very low, at 0.5% in their study (PUBMED:27212693). Another study found that the malignancy rate in BI-RADS 3 lesions detected by MRI is low, at 2.8%, and falls within the accepted cancer rate for mammography and sonography. 
They concluded that changes in size, morphology, and enhancement pattern should be considered in terms of malignancy development during follow-up, and that follow-up intervals should be determined on a case-by-case basis (PUBMED:38293846). Furthermore, a study on synchronous BI-RADS category 3 lesions detected by preoperative breast MRI in patients with breast cancer suggested that monitoring could be proposed, and a continued follow-up beyond 2 years could be of benefit, as cancer may occur beyond this period (PUBMED:33993330). Additionally, a study on high-risk screening breast MRI found that most cancers diagnosed at follow-up of BI-RADS 3 lesions are in an early stage and are diagnosed at or before the 6-month follow-up examination, indicating that short-interval follow-up MRI is an appropriate method for identifying early-stage breast cancer while avoiding unnecessary biopsies (PUBMED:31799867). In summary, the evidence suggests that a one-year follow-up is generally efficient for managing MRI BI-RADS® 3 lesions, with some studies advocating for continued annual follow-up beyond one year to ensure early detection of any malignancies that may develop over time.
Instruction: Can untrained laypersons use a defibrillator with dispatcher assistance? Abstracts: abstract_id: PUBMED:17541030 Can untrained laypersons use a defibrillator with dispatcher assistance? Objectives: Automated external defibrillators (AEDs) provide an opportunity to improve survival in out-of-hospital cardiac arrest by enabling laypersons not trained in rhythm recognition to deliver lifesaving therapy. This study was performed to examine whether untrained laypersons could safely and effectively use these AEDs with telephone-guided instructions and if this action would compromise the performance of cardiopulmonary resuscitation (CPR) during a simulated ventricular fibrillation out-of-hospital cardiac arrest. Methods: Fifty-four conscripts without previous medical education were recruited from the Western Command in Finland. For this study, the participants were divided at random to form teams of two persons. The teams were randomized to dispatcher-assisted CPR with or without AED operation during a simulated ventricular fibrillation out-of-hospital cardiac arrest. The time interval from collapse to first shock, hands-off time, and the quality of CPR were compared between the two groups. Results: The quality of CPR was poor in both groups. The use of an AED did not increase the hands-off time or the time interval to the first compression. Sixty-four percent of the teams in the AED group managed to give the first defibrillatory shock within 5 minutes. Conclusions: The quality of dispatcher-assisted CPR is poor. Dispatcher assistance in defibrillation by a layperson not trained to use an AED seems feasible and does not compromise the performance of CPR. abstract_id: PUBMED:11581083 Dispatcher assistance and automated external defibrillator performance among elders. Objectives: Automated external defibrillators (AEDs) provide an opportunity to improve survival in out-of-hospital, ventricular fibrillation (VF) cardiac arrest by enabling laypersons not trained in rhythm recognition to deliver lifesaving therapy. The potential role of emergency dispatchers in the layperson use of AEDs is uncertain. This study was performed to examine whether dispatcher telephone assistance affected AED skill performance during a simulated VF cardiac arrest among a cohort of older adults. The hypothesis was that dispatcher assistance would increase the proportion who were able to correctly deliver a shock, but might require additional time. Methods: One hundred fifty community-dwelling persons aged 58-84 years were recruited from eight senior centers in King County, Washington. All participants had received AED training approximately six months previously. For this study, the participants were randomized to AED operation with or without dispatcher assistance during a simulated VF cardiac arrest. The proportions who successfully delivered a shock and the time intervals from collapse to shock were compared between the two groups. Results: The participants who received dispatcher assistance were more likely to correctly deliver a shock with the AED during the simulated VF cardiac arrest (91% vs 68%, p = 0.001). Among those who were able to deliver a shock, the participants who received dispatcher assistance required a longer time interval from collapse to shock [median (25th, 75th percentile) = 193 seconds (165, 225) for dispatcher assistance, and 148 seconds (138, 166) for no dispatcher assistance, p = 0.001]. 
Conclusions: Among older laypersons previously trained in AED operation, dispatcher assistance may increase the proportion who can successfully deliver a shock during a VF cardiac arrest. abstract_id: PUBMED:19058896 Defibrillation and the quality of layperson cardiopulmonary resuscitation-dispatcher assistance or training? Aims Of The Study: To examine whether basic life support-defibrillation (BLS-D) training of laypersons enhances the speed of defibrillation and the quality of cardiopulmonary resuscitation (CPR) during a simulated ventricular fibrillation scenario compared with a situation where the care provider has no previous BLS-D training but receives dispatcher assistance with the use of an automated external defibrillator (AED) and the performance of CPR. Methods: Fifty-two military conscripts of the Finnish Defence Forces without previous medical education had been tested in a simulated cardiac arrest scenario with dispatcher assistance and thereafter received 4 h of BLS-D training. Six months later they were randomly divided to form teams of two and again tested in a similar scenario but without dispatcher assistance. The time interval from collapse to first shock, hands-off time and the quality of CPR were compared between the two tests. Results: The quality of mouth-to-mouth ventilation was better after training, but there was only a minor improvement in the quality of compressions and the speed of defibrillation. Conclusions: Training improved the quality of mouth-to-mouth ventilation performed by laypersons but had only a minor effect on defibrillation and the quality of compressions. abstract_id: PUBMED:30876922 The use of dispatcher assistance in improving the quality of cardiopulmonary resuscitation: A randomised controlled trial. Aims: The introduction of dispatcher assistance (DA) services has led to increased bystander cardiopulmonary resuscitation (CPR) participation rates. However, the extent to which DA improves CPR quality remains unclear. This study aimed to evaluate the efficacy of DA in improving CPR quality among healthcare professionals and laypersons within a multi-ethnic Southeast Asian population. Methods: A parallel, randomised controlled, open-label trial was performed. Four hundred and twelve participants were recruited via convenience sampling in a public location. In a simulated cardiac-arrest scenario, the participants were randomised to perform CPR with DA over the phone (DA+) or CPR without DA (DA-). The ratio of participant assignment to DA+ and DA- was 1:1. The primary outcomes were CPR compression depth, compression rate, no-flow time, complete release of pressure between compressions, and hand location. The assessment involved CPR manikins and human assessors. Results: A larger proportion of participants in DA+ achieved the correct compression rate (34.3% vs 18.1%, p < 0.001). There was no difference in the other primary outcomes. A subgroup analysis revealed that healthcare professionals in DA+ had a higher proportion of correct hand location compared to those in DA- (82.1% vs. 53.5%, p < 0.05). There was no significant difference in CPR quality among laypersons with valid CPR certification regardless of whether they received DA. Conclusion: DA should be provided to laypersons without valid CPR certification, as well as healthcare professionals. The identification of gaps in the current DA protocol highlights areas where specific changes can be made to improve CPR quality. 
abstract_id: PUBMED:18723715 Performance of cellular phones with video telephony in the use of automated external defibrillators by untrained laypersons. Aim: To evaluate the hypothesis that using an automated external defibrillator (AED) with video telephony-directed cellular phone instructions for untrained laypersons would increase the probability of successful performance of AEDs. Real-time communication with visual images can provide critical information and appropriate instructions to both laypersons and dispatchers. Methods: A prospective observational study was undertaken. Fifty-two public officers with no previous experience in the use of a defibrillator were presented with a scenario in which they were asked to use an AED on a manikin according to the instructions given to them by cellular phones with video telephony. The proportion who successfully delivered a shock and the time interval from cardiac arrest to delivery of the shock were recorded. Results: Placement of the electrode pads was performed correctly by all 52 participants, and 51 (98%) delivered an accurate shock. The mean (SD) time to correct shock delivery was 131.8 (20.6) s (range 101-202). Conclusion: Correct pad placement and shock delivery can be performed using an AED when instructions are provided via video telephone because a dispatcher can monitor every step and provide correct information. abstract_id: PUBMED:34223297 Sustaining improvement of dispatcher-assisted cardiopulmonary resuscitation for out-of-hospital cardiac arrest patients in Japan: An observational study. Objectives: We aimed to estimate the relationship between the promotion of bystander cardiopulmonary resuscitation (CPR) with dispatcher assistance over time and good cerebral function after out-of-hospital cardiac arrests (OHCAs). Methods: This was a retrospective observational study using a nationwide OHCA database in Japan. The eligible 267,193 witnessed cardiogenic OHCA patients between 2005 and 2016 were analysed. Multivariable logistic regression models were used to estimate the effect of dispatcher-assisted bystander CPR per year. In addition, we calculated the number of patients with good cerebral function that was attributable to dispatcher-assisted bystander CPR. Results: Dispatcher-assisted bystander CPR was performed in 84,076 cases (31.5%), bystander CPR without dispatcher assistance in 48,389 (18.1%), and no bystander CPR in 134,728 (50.4%). The adjusted odds ratio (AOR) of dispatcher-assisted bystander CPR vs. non-bystander CPR was significantly related to good cerebral function, regardless of the year (AOR, 1.47, 1.62; 95%CI, 1.19-1.80, 1.42-1.85, 2005 and 2016, respectively). The association of dispatcher-assisted bystander CPR with good cerebral function tended to increase (AOR, 1.11, 2.97; 95%CI, 0.99-1.24, 2.69-3.28, 2006 and 2016, based on 2005, respectively). The estimated number of patients with good cerebral function attributable to dispatcher-assisted bystander CPR increased significantly from 41 in 2005 to 580 in 2016 (p < .0001, r = 0.98). Furthermore, chest compression alone consistently contributed a higher number of patients with good cerebral function than the combination of chest compression and shock with public-access defibrillation. Conclusion: We found that the increased dispatcher-assisted bystander CPR rate was related to good cerebral function at 1 month post OHCA. 
Chest compression without public-access defibrillation was most helpful to that number, explaining the effects of dispatcher assistance and sustaining improvement. abstract_id: PUBMED:29703565 Dispatcher-assisted bystander cardiopulmonary resuscitation and survival in out-of-hospital cardiac arrest. Introduction: Bystander cardiopulmonary resuscitation (CPR) is critical to the survival of patients with out-of-hospital cardiac arrest (OHCA). However, it is unknown whether bystander CPR with or without dispatcher assistance is more effective, or why. Thus, we evaluated the association between dispatcher-assisted bystander CPR (vs. bystander CPR without dispatcher assistance) and survival of patients with OHCA. Methods: This is a retrospective, nonrandomized, observational study using national registry data for all OHCAs. We performed a propensity analysis. Patients with OHCA of cardiac origin were 18-100 years of age and received bystander chest compression in Japan between 2005 and 2014. Outcome measures were bystander rescue breathing, return of spontaneous circulation (ROSC) before hospital arrival, and survival and Cerebral Performance Category (CPC) 1 or 2 at 1 month after the event. Results: During the study period, 1,176,351 OHCAs occurred, and 87,400 cases met the inclusion criteria. Among propensity-matched patients, a negative association was observed between dispatcher-assisted bystander CPR and outcome measures in a fully adjusted model [odds ratio (OR) (95% CI) for ROSC = 0.87 (0.78-0.97), P < 0.05; OR (95% CI) for 1-month survival = 0.81 (0.65-1.00), P < 0.05; OR (95% CI) for CPC 1 or 2 = 0.64 (0.43-0.93), P < 0.05]. The OR of survival for dispatcher-assisted bystander CPR tended to decrease as the emergency medical services response time increased. Conclusions: Survival benefit was less for bystander CPR with dispatcher assistance than for bystander CPR without it. Low quality is hypothesized to be the cause of the reduced benefit. abstract_id: PUBMED:30111222 Automated External Defibrillator Geolocalization with a Mobile Application, Verbal Assistance or No Assistance: A Pilot Randomized Simulation (AED G-MAP). Objective: Shockable rhythms are common among victims of witnessed public out-of-hospital cardiac arrest (OHCA), but bystander defibrillation with a public automated external defibrillator (PAED) is rare. Instructions from the emergency medical dispatcher and mobile applications were developed to expedite the localization of PAEDs, but their effectiveness has not been compared. Methods: Participants were enrolled in a three-armed randomized simulation in which they witnessed a simulated OHCA on a university campus and were instructed to locate a PAED and provide defibrillation. Participants were stratified and randomized to: (1) no assistance in finding the PAED, (2) assistance from a geolocalization mobile application (AED-Quebec), or (3) verbal assistance. Data collectors tracked each participant's time elapsed and distance traveled to shock. Results: Of the 52 volunteers participating in the study (46% male, mean age 37), 17 were randomized to the no assistance group, 18 to the mobile application group and 17 to the verbal group. Median (IQR) time to shock was, respectively, 10:00 min (7:49-10:00), 9:44 (6:30-10:00), and 5:23 (4:11-9:08), with statistically significant differences between the verbal group and the other groups (p ≤ 0.01). The success rates for defibrillation in <10 minutes were 35%, 56%, and 76%, respectively. 
Multivariate regression of all participants pooled showed that knowledge of campus geography was the strongest predictor of shock in <10 minutes (aOR = 14.3, 95% CI 1.85-99.9). Among participants without prior geographical knowledge, verbal assistance provided a trend towards decreased time to shock, but the differences over no assistance (7:28 vs. 10:00, p = 0.10) and over the mobile app (7:28 vs. 10:00, p = 0.11) were not statistically significant. Conclusion: In a simulated environment, verbally providing OHCA bystanders with the nearest PAED's location appeared to be effective in reducing the time to defibrillation in comparison to no assistance and to an AED geolocalizing mobile app, but further research is required to confirm this hypothesis, ascertain the external validity of these results, and evaluate the real-life implications of these strategies. abstract_id: PUBMED:33251980 Assessing knowledge and skills of laypersons bringing victims of road traffic collisions to Queen Elizabeth Central Hospital in Blantyre, Malawi. In Malawi, pre-hospital care assistance is mainly provided by laypersons who witnessed the event. The aim of our study was to determine the knowledge and skills of such persons who bring victims of road traffic crashes to hospital. The study was conducted at the Adult Emergency and Trauma Centre at Queen Elizabeth Central Hospital in Blantyre, Malawi. A total of 392 participants were interviewed. Most were merchants (22%) and unskilled labourers (14.5%). Three quarters (75.8%) provided assistance on the scene. The most common assistance provided was transporting the victim to the hospital (68.7%), assisting with safe lifting (57.9%) and calling for help (39.7%). Airway protection was provided by only 1% of participants. Therefore, it is recommended to establish some form of formal pre-hospital assistance to reduce morbidity and mortality from road traffic crashes. Laypersons, especially merchants, students and drivers, are potentially strong first responders, and training them may help improve pre-hospital care outcomes. abstract_id: PUBMED:27658651 A novel protocol for dispatcher assisted CPR improves CPR quality and motivation among rescuers-A randomized controlled simulation study. Background: Emergency dispatchers use protocols to instruct bystanders in cardiopulmonary resuscitation (CPR). Studies changing one element in the dispatcher's protocol report improved CPR quality. Whether several changes interact is unknown, and the effect of combining multiple changes previously reported to improve CPR quality into one protocol remains to be investigated. We hypothesize that a novel dispatch protocol, combining multiple beneficial elements, improves CPR quality compared with a standard protocol. Methods: A novel dispatch protocol was designed including wording on chest compressions, the use of a metronome, regular encouragements and a 10-s rest each minute. In a simulated cardiac arrest scenario, laypersons were randomized to perform single-rescuer CPR guided with the novel or the standard protocol. Primary outcome: a composite endpoint of time to first compression, hand position, compression depth and rate, and hands-off time (maximum score: 22 points). Afterwards, participants answered a questionnaire evaluating the dispatcher assistance. Results: The novel protocol (n=61) improved the CPR quality score compared with the standard protocol (n=64) (mean (SD): 18.6 (1.4) points vs. 17.5 (1.7) points, p < 0.001). The novel protocol resulted in deeper chest compressions (mean (SD): 58 (12) mm vs. 
52 (13) mm, p = 0.02) and an improved rate of correct hand position (61% vs. 36%, p = 0.01) compared with the standard protocol. In both protocols, hands-off time was short. The novel protocol improved motivation among rescuers compared with the standard protocol (p = 0.002). Conclusions: Participants guided with a standard dispatch protocol performed high-quality CPR. A novel bundle-of-care protocol improved the CPR quality score and motivation among rescuers. Answer: Yes, untrained laypersons can use a defibrillator with dispatcher assistance. Studies have shown that with telephone-guided instructions, untrained laypersons are capable of safely and effectively using automated external defibrillators (AEDs) during simulated out-of-hospital cardiac arrest scenarios. For instance, a study involving conscripts without previous medical education found that 64% of the teams managed to give the first defibrillatory shock within 5 minutes with dispatcher assistance, and the use of an AED did not compromise the performance of cardiopulmonary resuscitation (CPR) (PUBMED:17541030). Another study with older adults who had received AED training approximately six months prior showed that those who received dispatcher assistance were more likely to correctly deliver a shock with the AED during a simulated ventricular fibrillation cardiac arrest, although it required a longer time interval from collapse to shock (PUBMED:11581083). Furthermore, a study evaluating the use of AEDs with video telephony-directed cellular phone instructions found that nearly all participants (98%) delivered an accurate shock, and the mean time to correct shock delivery was 131.8 seconds, suggesting that real-time communication with visual images can significantly aid untrained laypersons in using AEDs (PUBMED:18723715). These findings indicate that dispatcher assistance can be a valuable tool in guiding untrained laypersons to use AEDs effectively in emergency situations, potentially improving the chances of survival for victims of out-of-hospital cardiac arrest. 
Instruction: Is there a relationship between victim and partner alcohol use during an intimate partner violence event? Abstracts: abstract_id: PUBMED:30678604 Lifetime Number of Perpetrators and Victim-Offender Relationship Status Per U.S. Victim of Intimate Partner, Sexual Violence, or Stalking. The aim of this study was to describe the U.S. population-level prevalence of multiple perpetrator types (intimate partner, acquaintance, stranger, person of authority, or family member) per victim and to describe the prevalence of victim-offender relationship status combinations. Authors analyzed U.S. nationally representative data from noninstitutionalized adult respondents with self-reported lifetime exposure to intimate partner violence, sexual violence, or stalking in the 2012 National Intimate Partner and Sexual Violence Survey (NISVS). An estimated 142 million U.S. adults had some lifetime exposure to intimate partner violence, sexual violence, or stalking. An estimated 55 million victims (39% of total victims) had more than one perpetrator type during their lifetimes. A significantly higher proportion of female victims reported more than one perpetrator type compared with male victims (49% vs. 27%). Among both female and male victims with >1 perpetrator type, the most prevalent victim-offender relationship status combinations all included an intimate partner perpetrator. Many victims of interpersonal violence are subject to multiple perpetrator types during their lifetimes. Prevention strategies that address polyvictimization and protect victims from additional perpetrators can have a substantial and beneficial societal impact. Research on victim experiences to inform prevention strategies is strengthened by comprehensively accounting for lifetime victimizations. abstract_id: PUBMED:33827299 The Perpetrator-Victim Relationship: An Important Clue in Understanding Intimate Partner Homicide in China. Intimate partner homicide (IPH) is a global public health issue that has serious consequences for an individual's health. Whereas prior research has made some progress analyzing the perpetrator-victim relationship in understanding the dynamics of IPH, research set in the context of China in this field using the perpetrator-victim relationship as a focal variable in analyzing IPH is still lacking. The current study addresses this gap and builds on existing research by exploring the significance of the perpetrator-victim relationship in order to facilitate a deeper understanding of IPH in China. After examining 979 IPH criminal judgments, the findings revealed that the perpetrator-victim relationship is an important clue in understanding IPH in China today. By analyzing the correlation between intimate relationships as a focal variable and other variables such as IPH characteristics and risk factors, it is possible to understand the prevalence of IPH in China in greater detail. Irrespective of the kind of relationship between the perpetrator and victim, the findings revealed that most of them come from lower socioeconomic backgrounds, have no previous history of intimate partner violence, and are not satisfied with their marriages. Although research has provided detailed explanations, the perpetrator-victim relationship remains neglected, especially in the context of research on contemporary China. The findings showed that over half of the perpetrators and victims had intimate relationships with people outside their marriages.
This may be the main reason for the rapid increase in the number of instances of IPH, which seems to be in tandem with China's rapid modernization and urbanization. abstract_id: PUBMED:32630323 The Effect of Prescription Drugs and Alcohol Consumption on Intimate Partner Violence Victim Blaming. Intimate Partner Violence (IPV) is a public health problem with harsh consequences for women's well-being. Social attitudes towards victims of IPV have a major impact on the perpetuation of this phenomenon. Moreover, specific problems such as the abuse of alcohol and drugs by IPV victims could have an effect on blame attributions towards them. The aim of this study was to evaluate whether the external perception (Study 1) and self-perception (Study 2) of blame were influenced by the victims' use and abuse of alcohol or by the victims' use of psychotropic prescription drugs. Results of the first study (N = 136 participants) showed a significantly higher blame attribution towards female victims with alcohol abuse compared to those without it. No significant differences in attributed blame were found between victims with psychotropic prescription drug abuse and the control group. Results of the second study (N = 195 female victims of interpersonal violence) showed that alcohol consumption is associated with higher self-blame and self-blame cognitions among IPV victims. However, results did not show significant differences in self-blame associated with the victims' use of psychotropic prescription drugs. Our findings indicate that alcohol consumption, but not prescription drug use, plays a relevant role in the attribution of blame by the general population and in self-blame by victims of IPV. abstract_id: PUBMED:28239508 Empirical Investigation of a Model of Sexual Minority Specific and General Risk Factors for Intimate Partner Violence among Lesbian Women. Objective: This study proposed and tested the first conceptual model of sexual minority specific (discrimination, internalized homophobia) and more general risk factors (perpetrator and partner alcohol use, anger, relationship satisfaction) for intimate partner violence among partnered lesbian women. Method: Self-identified lesbian women (N=1048) were recruited from online market research panels. Participants completed an online survey that included measures of minority stress, anger, alcohol use and alcohol-related problems, relationship satisfaction, psychological aggression, and physical violence. Results: The model demonstrated good fit and significant links from sexual minority discrimination to internalized homophobia and anger, from internalized homophobia to anger and alcohol problems, and from alcohol problems to intimate partner violence. Partner alcohol use predicted partner physical violence. Relationship dissatisfaction was associated with physical violence via psychological aggression. Physical violence was bidirectional. Conclusions: Minority stress, anger, alcohol use and alcohol-related problems play an important role in perpetration of psychological aggression and physical violence in lesbian women's intimate partner relationships. The results of this study provide evidence of potentially modifiable sexual minority specific and more general risk factors for lesbian women's partner violence.
Alcohol use disorder (AUD) has well-known negative effects on romantic relationship functioning, including the occurrence of intimate partner violence (IPV). A separate literature focused on community couples indicates that relationship functioning is more likely to suffer when partners report greater discrepancies in alcohol consumption. It is important to expand this literature to couples with AUD and to examine the role of impactful AUD domains in dyadic functioning. Furthermore, few studies have examined adaptive, treatment-malleable factors that could potentially offset the negative impact of alcohol discrepancies on relationship functioning. This study examined the association between couples' alcohol problem discrepancies and relationship adjustment, as well as the moderating effect of self-reported adaptive conflict negotiation behaviors. Participants were 100 couples (N = 200 individual participants) with intimate partner violence wherein at least one partner met diagnostic criteria for AUD. Actor-Partner Interdependence Models indicated that greater alcohol problem discrepancy was associated with lower dyadic adjustment. Moderation analyses revealed that the highest level of relationship adjustment was observed among couples with lower alcohol problem discrepancy and greater negotiation behaviors, while relationship adjustment was similar for couples with larger alcohol problem discrepancy, regardless of negotiation behaviors. Although further study is needed to clarify under what specific conditions adaptive negotiation behaviors are most helpful, they appear to be beneficial for some couples in this sample. We found no evidence that negotiation behaviors may be harmful among these high-risk couples. abstract_id: PUBMED:36342211 Partner Alcohol Use and Attitudes Toward Intimate Partner Violence: Independent and Synergistic Effects on Intimate Partner Violence Across 19 Low- and Middle-Income Countries. Although partner alcohol use and acceptance of intimate partner violence against women (IPVAW) are critical determinants of IPVAW, little is known about their interaction. We explored how partner alcohol use and attitudes toward IPVAW act independently and jointly at the individual and community levels to influence women's reports of experiencing IPVAW across low- and middle-income countries. We conducted secondary analyses using a pooled sample of reproductive-aged women (n = 166,621) from 19 Demographic and Health Survey datasets. We fit a series of a priori-defined mixed-effects logistic regression models of the total effects, within- and between-community effects, and contextual effects of past-year IPVAW on partner alcohol use, acceptance of IPVAW, and their multiplicative interaction. We then fit a series of models stratified by community alcohol use and acceptance of IPVAW. Partner alcohol use (odds ratio [OR] = 3.20; 95% confidence interval [CI]: [3.07, 3.33]) and women's acceptance of IPVAW (OR = 1.83; 95% CI: [1.76, 1.89]) were consistently associated with increased odds of experiencing IPVAW. Sub-multiplicative interactions were present for within-community effects (ratio of OR = 0.86; 95% CI: [0.79, 0.94]), whereas supra-multiplicative interactions were present for between-community effects (ratio of OR = 1.002; 95% CI: [1.0002, 1.005]) and contextual effects (ratio of OR = 1.003; 95% CI: [1.0007, 1.005]). The odds of IPVAW associated with partner alcohol use were greatest in communities with lower partner alcohol prevalence and lower acceptance of IPVAW.
It is important to consider norms and attitudes toward IPVAW alongside alcohol use when understanding epidemiological patterns of IPVAW and potential opportunities for preventive programs and policymaking. Future studies should focus on the complex interactions, at multiple social levels, between risk factors for IPVAW. abstract_id: PUBMED:34160315 Adding Insult to Injury: The Effects of Intimate Partner Violence Spillover on the Victim's Reputation. This article examined indirect consequences for the victims of intimate partner violence (IPV) in terms of ostracism and reputational threats. Through an experimental vignette survey, we compared bystanders' reactions to either an intimate partner violence episode or a generic violence episode. A victim of IPV (vs. generic violence) received a more negative moral evaluation and was considered as more responsible for the violence perpetrated on her. This made participants not only anticipate a less positive reputation attributed to the victim but also report less willingness to approach and defend the victim and include her in relevant ingroups 1 year after the episode. abstract_id: PUBMED:29230309 Alcohol use and intimate partner violence among women and their partners in sub-Saharan Africa. Background: Alcohol use is a well-documented risk factor for intimate partner violence (IPV); however, the majority of research comes from high-income countries. Methods: Using nationally representative data from 86,024 women who participated in the Demographic and Health Surveys, we evaluated the relationship between male partner alcohol use and experiencing IPV in 14 countries in sub-Saharan Africa (SSA). Using multilevel mixed-effects models, we calculated the within-country, between-country, and contextual effects of alcohol use on IPV. Results: Prevalence of partner alcohol use and IPV ranged substantially across countries (3-62% and 11-60%, respectively). Partner alcohol use was associated with a significant increase in the odds of reporting IPV for all 14 countries included in this analysis. Furthermore, the relationship between alcohol use and IPV, although largely explained by partner alcohol use, was also attributable to the overall prevalence of alcohol use in a given country. The partner alcohol use-IPV relationship was moderated by socioeconomic status (SES): among women with a partner who used alcohol, those with lower SES had higher odds of experiencing IPV than women with higher SES. Conclusions: Results of this study suggest that partner alcohol use is a robust correlate of IPV in SSA; however, drinking norms may independently relate to IPV and confound the relationship between partner alcohol use and IPV. These findings motivate future research employing experimental and longitudinal designs to examine alcohol use as a modifiable risk factor of IPV and as a novel target for treatment and prevention research to reduce IPV in SSA.
Therefore, the main aim of this quasi-experimental study was to investigate the role of female victim sexuality (bisexuality, homosexuality, and heterosexuality) and observer sex in the attribution of blame to the victim and perpetrator of IPV. Participants (N = 232; aged 18-24 years, M = 21.05, SD = 1.73) were randomly assigned into one of four conditions (heterosexual victim, bisexual victim with same-sex partner, bisexual victim with different-sex partner, homosexual victim), each containing a vignette portraying IPV within a relationship. Randomization checks were performed to ensure that participants in the four conditions did not differ significantly on underlying attitudes (institutional heterosexism (IH), aversive heterosexism (AH), heterosexual privilege (HP), sexist attitudes, just world beliefs) that may have affected their responses on outcome measures. Main analyses demonstrated that bisexual victims with a same-sex partner received the highest attribution of blame, whereas perpetrators in this condition received the lowest blame attribution. Male participants attributed significantly higher blame to victims than did female participants, regardless of victim sexuality. These findings substantiate the role of victim sexuality and observer sex in IPV blame attribution patterns. This research aimed to promote equality and rightful treatment to all victims of IPV regardless of their sexuality. abstract_id: PUBMED:32871528 Relationship trajectories of women experiencing alcohol-related intimate partner violence: A grounded-theory analysis of women's voices. Rationale: The association between male partner alcohol use and increased risk and severity of their perpetration of intimate partner violence (IPV) is well-established in quantitative research. However, few studies have explored the nature and trajectory of relationships involving partner drinking and abuse, and how women find pathways to safety. Objective And Method: We conducted in-depth interviews with a community sample of 18 Australian women (aged 20-50 years) who reported feeling afraid when their male partner drank alcohol. Using a constructivist grounded theory approach, we identified key processes underpinning women's experience of alcohol-related IPV and mapped these over four relationship phases. Results: Partner alcohol use played a key role in how women interpreted and dealt with IPV victimisation. In early relationships, women spoke of not seeing or dismissing early warning signs of problem drinking and aggression in settings that normalized men's heavy drinking. Later, women identified patterns of inter-connected drinking and aggression, leading to questioning their reality, trying to 'fix' their partner's drinking to stop the abuse, and in the absence of change, learning to manage daily life around the drinking and abuse. In the third phase, giving up hope that the partner would stop drinking, women ended the relationship. Finally, after leaving the abuser, women attempted to reset normal around drinking behaviour but continued to experience trauma associated with others' drinking in social settings. Discussion And Conclusion: For women who have experienced partners' alcohol use intertwined with violence in their relationship, changing their partners' drinking plays a central role in their journey to safety, possibly obscuring recognition of abuse and complicating their ability to leave. 
Greater understanding of the stages of the alcohol-IPV relationship can help health providers support women as they navigate these complex relationships, and provide appropriate support depending on the needs of women in their relationship trajectory. Answer: Yes, there is a relationship between victim and partner alcohol use during an intimate partner violence (IPV) event. Research has shown that alcohol use is a well-documented risk factor for IPV, and this association has been observed in various contexts and populations. For instance, studies have found that alcohol use by the perpetrator is associated with an increased risk of IPV (PUBMED:29230309; PUBMED:36342211). Specifically, partner alcohol use has been consistently linked to higher odds of experiencing IPV (PUBMED:36342211), and this relationship is robust across different countries in sub-Saharan Africa (PUBMED:29230309). Moreover, the effect of alcohol use on IPV is not limited to the perpetrator's consumption. Victim alcohol use has also been found to influence the attribution of blame in IPV situations. For example, female victims of IPV who abuse alcohol are attributed significantly higher blame compared to those without alcohol abuse (PUBMED:32630323). Additionally, alcohol consumption by victims is associated with higher self-blame among IPV victims (PUBMED:32630323). Furthermore, the discrepancy in alcohol problems between partners has been shown to negatively affect relationship adjustment, with greater discrepancies associated with lower dyadic adjustment (PUBMED:37148131). This suggests that differences in alcohol use patterns within a couple can exacerbate relationship tensions and potentially contribute to IPV. In the context of sexual minority populations, minority stress, anger, alcohol use, and alcohol-related problems have been identified as important factors in the perpetration of psychological aggression and physical violence in lesbian women's intimate partner relationships (PUBMED:28239508). Overall, the evidence indicates that both victim and partner alcohol use are important factors in the occurrence and dynamics of IPV events, influencing not only the likelihood of violence but also the social and psychological responses to it.
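Several abstracts above (PUBMED:36342211, PUBMED:29230309) rest on logistic models with a multiplicative interaction term, where the reported "ratio of odds ratios" is simply exp() of the interaction coefficient. Below is a minimal, hypothetical sketch on synthetic data of how such an interaction can be planted and then read back off a fitted model; the variable names (alc, att, ipv) are invented for the example, and the community-level random effects used in the actual pooled analysis are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000

# Two binary exposures: partner alcohol use and acceptance of IPVAW.
alc = rng.binomial(1, 0.3, n)
att = rng.binomial(1, 0.4, n)

# Plant main-effect ORs of 3.2 and 1.8 plus a sub-multiplicative interaction
# (ratio of ORs = 0.86), echoing the within-community estimates quoted above.
log_odds = -2.0 + np.log(3.2) * alc + np.log(1.8) * att + np.log(0.86) * alc * att
ipv = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

df = pd.DataFrame({"ipv": ipv, "alc": alc, "att": att})
fit = smf.logit("ipv ~ alc * att", data=df).fit(disp=False)

# exp() of the coefficients recovers the ORs; exp() of the alc:att term is
# the "ratio of odds ratios" that quantifies the interaction.
print(np.exp(fit.params).round(2))
```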
Instruction: A centralized comparison of radical perineal and retropubic prostatectomy specimens: is there a difference according to the surgical approach? Abstracts: abstract_id: PUBMED:26328099 The effects of retropubic and perineal radical prostatectomy techniques on postoperative urinary continence after surgery: Results of 196 patients. Objective: The aim of this study was to examine the effects of the retropubic and perineal radical prostatectomy techniques on postoperative urinary continence. Material And Methods: This study included patients who had undergone radical retropubic (RRP) (n=54) or radical perineal prostatectomies (RPP) (n=142) in our clinic between December 2006 and December 2010 with the indication of localized prostate cancer. Preoperative and postoperative continence levels of the patients were evaluated on the day of catheter removal and at 3, 6, and 12 months using University of California-Los Angeles (UCLA) prostate cancer index query forms completed by the patients themselves in outpatient clinics. Results: Mean ages of the perineal and retropubic groups were 62.3 (48-77) and 62.5 (50-74) years, respectively. The absolute urinary control rates of the RPP patients on the first day and at the first, third, sixth, and twelfth months were 44.3, 63.6, 76.2, 79.5, and 86.1%, respectively. The same rates were 51.4, 75.7, 78.4, 89.2, and 91.9% for the RRP group, respectively. Conclusion: There was no significant difference between the RPP and RRP groups in the frequency of urinary incontinence. abstract_id: PUBMED:12187206 A centralized comparison of radical perineal and retropubic prostatectomy specimens: is there a difference according to the surgical approach? Purpose: We performed a central review of pathology specimens from radical perineal and radical retropubic prostatectomies performed by a single surgeon. We determined whether differences exist in the 2 approaches in regard to the ability to obtain adequate surgical margins around the tumor and adequate extracapsular tissue around the prostate, and avoid inadvertent capsular incision. Materials And Methods: The review included whole mount prostates from 60 patients who underwent radical retropubic prostatectomy and 40 who underwent radical perineal prostatectomy. The pathologist (N. S. G.) was blinded to the surgical approach. All prostatectomies were consecutive and performed by the same surgeon (H. J. K.). To ensure consistency of the pathological measurements, patients were excluded from analysis if they had undergone preoperative androgen ablation or a nerve sparing procedure, leaving 45 retropubic and 27 perineal prostatectomy specimens for further evaluation. Pertinent clinical parameters were assessed and a detailed pathological analysis of each specimen was performed. Results: In the retropubic and perineal groups 78% of the tumors were organ confined (stage pT2) with extracapsular extension (stage pT3) in the majority of the remaining patients. There was no significant difference in the positive margin rate for the retropubic and perineal procedures (16% and 22%, p = 0.53) or for Gleason 6 and 7 tumors only in the 2 groups (10% and 17%, respectively, p = 0.47). The capsular incision rate was 4% in each group. The distance of the tumor from the posterolateral margins and the amount of extracapsular tissue excised were equivalent in each group. Subgroups of patients with a prostate of less than 50 gm. and containing only low grade, low stage neoplasms were also analyzed.
Subgroup analysis showed no difference in any variable. Conclusions: Radical perineal prostatectomy is comparable to radical retropubic prostatectomy for obtaining adequate surgical margins, avoiding inadvertent capsular incisions and excising adequate extracapsular tissue around tumor foci. Additional patient accrual and prostate specific antigen followup would further help validate the similar efficacy of the 2 surgical approaches as treatment for prostate cancer. abstract_id: PUBMED:27638375 A multi-institutional comparison of radical retropubic prostatectomy, radical perineal prostatectomy, and robot-assisted laparoscopic prostatectomy for treatment of localized prostate cancer. To evaluate the pathological stage and margin status of patients undergoing radical retropubic prostatectomy (RRP), radical perineal prostatectomy (RPP) and robot-assisted laparoscopic prostatectomy (RALP). We performed a retrospective analysis of 196 patients who underwent RRP, RPP, and RALP as part of our multi-institution program. Fifty-seven patients underwent RRP, 41 RPP, and 98 RALP. Patient age, preoperative prostate specific antigen (PSA), preoperative Gleason score, preoperative clinical stage, pathological stage, postoperative Gleason score, and margin status were reviewed. The three groups had similar preoperative characteristics, except for PSA (8.4, 6.5, and 6.2 ng/ml) for the retropubic, robotic, and perineal approaches. Margins were positive in 12, 24, and 36% of the specimens from RALP, RRP, and RPP, respectively (P = 0.004). The positive margin rates in patients with pT2 tumors were 4, 14, and 19% in the RALP, RRP, and the RPP groups, respectively (P = 0.03). Controlling for age and pre-operative PSA and Gleason score, the rate of positive margins was statistically lower in the RALP versus both the RRP (P = 0.046) and the RPP groups (P = 0.02). In the patients with pT3 tumors, positive margins were observed in 36% of patients undergoing the RALP and 53 and 90% of those patients undergoing the RRP and RPP, respectively (P = 0.015). Controlling for the same factors, the rate of positive margins was statistically lower in the RALP versus the RPP (P = 0.01) but not compared with the RRP patients (P = 0.32). The percentage of positive margins was lower in RALP than in RPP for both pT2 and pT3 tumors. RRP had a higher percentage of positive margins than RALP in the pT2 tumors but not in the pT3 tumors. abstract_id: PUBMED:12463122 Localization of positive surgical margins after retropubic, perineal and laparoscopic radical prostatectomy Objective: To study the sites of positive surgical margins after radical prostatectomy according to the technique used: retropubic, perineal or laparoscopic. Material And Methods: 538 radical prostatectomies were performed between 1988 and 2001: 184 via a retropubic approach, 119 via a perineal approach and 235 by laparoscopy. Clinical examination, PSA assay (Hybritech, normal < 4 ng/ml) and transrectal biopsies were performed in all patients. The radical prostatectomy specimen was examined by the same pathologist according to the Stanford protocol. The frequency and site of positive surgical margins were studied as a function of pathological stage. Results: The positive surgical margin rate was 32%, 18.5% and 26.4% for the retropubic, perineal and laparoscopic techniques, respectively.
The most frequent site of positive surgical margins was the apex for retropubic (41.1%) and perineal (41.6%) prostatectomy and the posterolateral part of the prostate for laparoscopic prostatectomy (41.9%). The most frequent site of positive surgical margins in pT2 tumours was the apex for the retropubic approach (50%), the base of the prostate (bladder neck) for the perineal approach (41.6%) and the apex and posterolateral part of the prostate for the laparoscopic approach (44.4% and 41.6%). Conclusion: Each radical prostatectomy technique corresponds to a preferential site of positive surgical margins: the apex for the retropubic approach, the bladder neck for the perineal approach and the posterolateral part of the prostate for the laparoscopic approach. abstract_id: PUBMED:22654964 Experience with radical perineal prostatectomy in the treatment of localized prostate cancer. Radical perineal prostatectomy is an alternative to the retropubic, robotic-assisted or the laparoscopic approach and it is one of the oldest surgical procedures for prostate cancer, but its use has declined over the past 30 years. The desire for lower morbidity, a short hospital stay, and decreased costs has resulted in a renaissance of radical perineal prostatectomy. abstract_id: PUBMED:28216277 Comparison of Oncologic Outcomes and Complications According to Surgical Approach to Radical Prostatectomy: Special Focus on the Perineal Approach. Introduction: The objective of the study was to compare oncologic outcomes and complications in patients with prostate cancer who underwent radical perineal prostatectomy (RPP), radical retropubic prostatectomy (RRP), laparoscopic radical prostatectomy (LRP), or robotic-assisted radical prostatectomy (RARP). Materials And Methods: We retrospectively reviewed 2617 patients who underwent RPP (n = 673), RRP (n = 396), LRP (n = 223), or RARP (n = 1325) between 1995 and 2013. Clinicopathological outcomes were compared according to surgical approach. Kaplan-Meier and Cox regression analyses were carried out to assess oncologic outcomes. Complications were stratified according to the Clavien classification system. Results: The 5-year biochemical recurrence (BCR)-free survival after RPP was 75.3%, which was higher than for RRP (71.4%; P = .007) and comparable with LRP (76.1%; P = .666) and RARP (75.3%; P = .898). In multivariate analysis, RPP was comparable with LRP (P = .591) and RARP (P = .089) whereas RRP was associated with increased BCR (P < .001). No significant difference was seen in 5-year cancer-specific survival (RPP, 99.0%; RRP, 98.7%; LRP, 100.0%; and RARP, 99.8%; P = .071). The 5-year overall survival after RPP was 97.0%, which was lower than for RARP (99.6%; P = .007), but comparable with RRP (96.2%; P = .792) and LRP (99.1%; P = .606). Overall complication rates were 25.1% for RPP, 36.4% for RRP, 16.1% for LRP, and 9.4% for RARP (P < .001), respectively. After RPP, wound dehiscence (10.3%) was the most common complication. However, approximately 75% of complications were minor. Conclusion: RPP showed acceptable oncologic outcomes compared with other surgical approaches. Careful attention is required to prevent wound dehiscence.
abstract_id: PUBMED:7655522 Radical perineal and retropubic prostatectomy: comparison of techniques Today we may consider radical perineal prostatectomy an example of minimally invasive surgery compared with the retropubic approach. The technique is considered less traumatic on account of the precision of the approach, the accuracy of hemostasis and of the vesicourethral suture, and the speed of postoperative recovery. Its only disadvantage, the impossibility of transperineal pathological lymph node staging, can today be satisfactorily overcome by laparoscopic lymph node techniques, which permit safe, non-invasive preoperative histological examination. The authors present the laparoscopic and surgical technique, concluding that perineal prostatectomy, if combined with preoperative laparoscopic pelvic lymphadenectomy, is a better approach than retropubic radical prostatectomy. abstract_id: PUBMED:9490143 Retropubic and perineal approach: plea for perineal radical prostatectomy Objectives: Two surgical approaches are proposed for radical prostatectomy: the retropubic route and the perineal route. We compared the surgical, oncological and functional aspects of these two approaches and present arguments suggesting that the perineal approach is the preferred approach for radical prostatectomy. Material And Methods: 55 retropubic radical prostatectomies were retrospectively compared to 55 perineal radical prostatectomies performed between March 1992 and December 1995. The clinical TNM stage, preoperative PSA, results of 6 systematized intrarectal biopsies, operating time, intraoperative bleeding, number of patients transfused and number of packed cell units per patient transfused, medical and surgical complications, catheterization time and length of hospital stay, incidence of urethrovesical anastomosis leak and stenosis, analysis of the prostatectomy specimen, course of PSA, continence and erection were studied. Results: Statistically significant differences were observed for the retropubic and perineal approaches, respectively: preoperative PSA (24 vs 15 ng/mL), intraoperative bleeding (2664 vs 1071 mL), number of patients transfused (91% vs 28%), number of packed cell units per patient transfused (3.9 vs 2.7), medical and surgical complications (56.9 vs 29.1%), anastomotic leak (24.1 vs 7.2%), anastomotic stenosis (31.5 vs 1.8%), duration of catheterization (18 vs 13 days) and length of hospital stay (14 vs 8 days). At 2 years, PSA remained less than 0.5 ng/mL in both groups. Conclusion: Even taking the learning period into account, the perineal approach provides the same results as the retropubic approach in terms of functional and oncological parameters, with a simpler postoperative course for the patient.
We compared postoperative 30-day and anastomotic stricture complications, incontinence and erectile dysfunction, and cancer therapy (hormonal therapy and/or radiotherapy). Results: Perineal radical prostatectomy comprised 4.9% of radical prostatectomies during our study period and use decreased with time. On propensity score-adjusted analysis, men who underwent perineal vs retropubic radical prostatectomy had shorter hospitalization (median 2 vs 3 days, p < 0.001), received fewer heterologous transfusions (7.2% vs 20.8%, p < 0.001) and required less additional cancer therapy (4.9% vs 6.9%, p = 0.020). When comparing perineal vs minimally invasive radical prostatectomy, men who underwent the former required more heterologous transfusions (7.2% vs 2.7%, p = 0.018) but experienced fewer miscellaneous medical complications (5.3% vs 10.0%, p = 0.045) and erectile dysfunction procedures (1.4 vs 2.3/100 person-years, p = 0.008). The mean and median expenditures for perineal radical prostatectomy in the first 6 months postoperatively were $1,500 less than for retropubic or minimally invasive radical prostatectomy (p < 0.001). Conclusions: Men who underwent perineal vs retropubic and minimally invasive radical prostatectomy experienced favorable outcomes associated with lower expenditure. Urologists may be abandoning an underused but cost-effective surgical approach that compares favorably with its successors.
The results showed that 78% of the tumors were organ confined (stage pT2) with extracapsular extension (stage pT3) in the majority of the remaining patients. There was no significant difference in the positive margin rate for the retropubic and perineal procedures (16% and 22%, respectively, p = 0.53) or for Gleason 6 and 7 tumors only in the two groups (10% and 17%, respectively, p = 0.47). The capsular incision rate was 4% in each group. The distance of the tumor from the posterolateral margins and the amount of extracapsular tissue excised were equivalent in each group. Subgroup analysis showed no difference in any variable. The study concluded that radical perineal prostatectomy is comparable to radical retropubic prostatectomy for obtaining adequate surgical margins, avoiding inadvertent capsular incisions, and excising adequate extracapsular tissue around tumor foci (PUBMED:12187206).
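As a quick plausibility check on the key comparison in that answer, the rounded rates can be turned back into approximate counts and re-tested. The sketch below is hedged accordingly: 16% of 45 retropubic and 22% of 27 perineal specimens back-calculate to roughly 7 and 6 positive margins, so the computed p-value only approximates the published p = 0.53.

```python
from scipy.stats import fisher_exact

# Approximate 2x2 table reconstructed from the abstract's rounded percentages:
# rows = surgical approach, columns = (positive margins, negative margins).
table = [
    [7, 45 - 7],   # retropubic: ~16% of 45 specimens
    [6, 27 - 6],   # perineal:   ~22% of 27 specimens
]

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.2f}")
# The p-value lands far above 0.05, consistent with the reported absence of a
# significant difference in positive margin rates between the two approaches.
```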
Instruction: Is laparoscopic anterior lumbar interbody fusion a useful minimally invasive procedure? Abstracts: abstract_id: PUBMED:29707553 Minimally invasive anterior, lateral, and oblique lumbar interbody fusion: a literature review. Lumbar interbody fusion involves insertion of a structural graft into an intervertebral disc space to promote bony arthrodesis. It is a well-established surgical strategy for multiple spinal disorders ranging from degenerative conditions to trauma, neoplastic diseases, and deformities requiring correction. Since the inception of lumbar interbody fusion, the most established techniques have been two posterior approaches, the posterior lumbar interbody fusion (PLIF) and the transforaminal lumbar interbody fusion (TLIF). Within the past 15 years, multiple anterolateral approaches to the spine have become widely adopted. These approaches can be performed minimally invasively and spare disruption of the paraspinal muscles and posterior spinal column while enabling wide exposure of the disc space for insertion of interbody grafts much larger than PLIF and TLIF instrumentation. This review highlights three minimally invasive anterolateral approaches: the anterior lumbar interbody fusion (ALIF), the transpsoas lateral lumbar interbody fusion (LLIF), and prepsoas or anterior to the psoas oblique lumbar interbody fusion (OLIF). Relevant topics for discussion and comparison include patient selection, surgical techniques, outcomes, and complications for the three surgical approaches. abstract_id: PUBMED:24757461 Radiographic Results of Minimally Invasive (MIS) Lumbar Interbody Fusion (LIF) Compared with Conventional Lumbar Interbody Fusion. Objective: To evaluate the radiographic results of minimally invasive (MIS) anterior lumbar interbody fusion (ALIF) and transforaminal lumbar interbody fusion (TLIF). Methods: Twelve and nineteen patients who underwent MIS-ALIF and MIS-TLIF, respectively, from 2006 to 2008 were analyzed with a minimum 24-month follow-up. Additionally, 18 patients treated with single level open TLIF surgery in 2007 were evaluated as a comparative group. X-rays and CT images were evaluated preoperatively, postoperatively, and at the final follow-up. Fusion and subsidence rates were determined, and radiographic parameters, including lumbar lordosis angle (LLA), fused segment angle (FSA), sacral slope angle (SSA), disc height (DH), and foraminal height (FH), were analyzed. These parameters were also compared between the open and MIS-TLIF groups. Results: In the MIS interbody fusion group, statistically significant increases were observed in LLA, FSA, DH, and FH between preoperative and final values. The changes in LLA, FSA, and DH were significantly increased in the MIS-ALIF group compared with the MIS-TLIF group, but SSA and FH were not significantly different. No significant differences were seen between open and MIS-TLIF except for DH. The interbody subsidence and fusion rates of the MIS groups were 12.0±4% and 96%, respectively. Conclusion: Radiographic results of MIS interbody fusion surgery are as favorable as those with conventional surgery regarding fusion, restoration of disc height, foraminal height, and lumbar lordosis. MIS-ALIF is more effective than MIS-TLIF for intervertebral disc height restoration and lumbar lordosis. abstract_id: PUBMED:34603954 Minimally invasive transforaminal lumbar interbody fusion - A narrative review on the present status.
Minimally invasive lumbar transforaminal interbody fusion (MIS TLIF) has become the most commonly performed lumbar fusion procedure. There are multiple variables, such as bone graft properties, use of rhBMP (recombinant human bone morphogenetic protein), interbody cage properties, and image guidance techniques, that may impact the outcomes and fusion rates. Radiation exposure to the patient as well as to the operating team is an important concern. The minimally invasive anterior approaches for lumbar fusion, with the ability to insert larger cages and achieve better sagittal correction, have added another option in the management of lumbar degenerative deformities. A literature review of recent studies and systematic reviews on different aspects impacting the outcomes of MIS TLIF has been done to define the present status of the procedure in this narrative review. Iliac crest bone graft can help achieve a very good fusion rate without significantly increasing morbidity. RhBMP is the most potent enhancer of fusion, and its adverse effects can be avoided by surgical technique and by using a lower dose. The use of navigation techniques has reduced the radiation exposure to the patient and the surgeons, but the benefit seems to be significant only in long segment fusions. abstract_id: PUBMED:12234443 Is laparoscopic anterior lumbar interbody fusion a useful minimally invasive procedure? Objective: Laparoscopic anterior lumbar interbody fusion (LALIF) has been demonstrated to be safe and effective. Its use as a standard surgical technique has been recommended for arthrodesis in the lumbosacral region. We reviewed our experience with LALIF for safety, effectiveness, and usefulness. Methods: Retrospective review of 14 patients who underwent LALIF was performed. All patients had a diagnosis of degenerative disc disease with medically intractable mechanical back pain. Results: No intraoperative vascular or neurological injury was encountered. An average operating time of 300 minutes with blood loss of 60 ml was found. The average hospital stay was 3.4 days. At 3 to 6 months after surgery, 80% fusion rates were achieved. Conclusion: Although LALIF is a safe and effective procedure, it has many disadvantages, which make it a less than optimal procedure for routine use. Other minimally invasive approaches to the anterior lumbar spine result in similar beneficial results without the drawbacks associated with LALIF. abstract_id: PUBMED:27364428 Minimally invasive L5-S1 oblique lumbar interbody fusion with anterior plate. Lumbar interbody fusion is an important technique for the treatment of degenerative disc disease and degenerative scoliosis. The oblique lumbar interbody fusion (OLIF) establishes a minimally invasive retroperitoneal exposure anterior to the psoas and lumbar plexus. In this video case presentation, the authors demonstrate the techniques of the OLIF at L5-S1 performed on a 69-year-old female with degenerative scoliosis as one component of an overall strategy for her deformity correction. The video can be found here: https://youtu.be/VMUYWKLAl0g . abstract_id: PUBMED:21629486 The clinical and radiological outcomes of minimally invasive transforaminal lumbar interbody single level fusion. Study Design: This is a retrospective study based on clinical and radiological evaluation. Purpose: We analyzed the clinical and radiological outcomes of minimally invasive transforaminal lumbar interbody single level fusion.
Overview Of Literature: Minimally invasive transforaminal lumbar interbody fusion is an effective surgical method for treating degenerative lumbar disease. Methods: The study was conducted on 56 patients who were available for longer than 2 years (range, 24 to 45 months) of follow-up after undergoing minimally invasive transforaminal lumbar interbody single level fusion. Clinical evaluation was performed by the analysis of the visual analogue scale (VAS) score, the Oswestry Disability Index (ODI), and the Kirkaldy-Willis score. For the radiological evaluation, the disc space height, the segmental lumbar lordotic angle and the whole lumbar lordotic angle were analyzed. At the final follow-up after operation, the fusion rate was analyzed according to Bridwell's anterior fusion grade. Results: For the evaluation of clinical outcomes, the VAS score was reduced from an average of 6.7 prior to surgery to an average of 1.8 at the final follow-up. The ODI was decreased from an average of 36.5 prior to surgery to an average of 12.8 at the final follow-up. In regard to the clinical outcomes evaluated by the Kirkaldy-Willis score, results rated good or better were obtained in 52 cases (92.9%). For the radiological evaluation, the disc space height (p = 0.002) and the whole lumbar lordotic angle (p = 0.001) were increased at the final follow-up. At the final follow-up, regarding the interbody fusion, radiological union was obtained in 54 cases (95.4%). Conclusions: We think that if surgeons become familiar with the surgical techniques, this is a useful method for minimally invasive spinal surgery. abstract_id: PUBMED:35283360 Minimally Invasive Deformity Correction Technique: Initial Case Series of Anterior Lumbar Interbody Fusion at L5-S1 for Multilevel Lumbar Interbody Fusion in a Lateral Decubitus Position. Objective: Many surgical options exist for multilevel lumbar interbody fusion, including anterior lumbar interbody fusion (ALIF) and lateral lumbar interbody fusion (LLIF). While the current technique of repositioning patients between supine ALIF and lateral decubitus LLIF offers many benefits, intraoperative repositioning can be cumbersome. We present a novel approach that accomplishes both multilevel LLIF and L5-S1 ALIF in a lateral decubitus position. Methods: This case series retrospectively enrolled 12 consecutive patients who underwent L5-S1 ALIF in the lateral decubitus position and anterior-to-psoas LLIF at more cranial levels as part of a multilevel lumbar interbody fusion surgery between September 2020 and December 2021. All surgeries were performed by a single spine-focused neurosurgeon at an urban academic hospital. Results: Radiographic imaging analysis demonstrated significant changes in coronal Cobb angle (-5.43° ± 3.81°; P = 0.0029), global lumbar lordosis (6.77° ± 12.04°; P = 0.0049), segmental lumbar lordosis (8.91° ± 10.21°; P = 0.0005), spinopelvic mismatch (-7.93° ± 7.91°; P = 0.0010), average disc height (5.30 ± 1.64 mm; P = 0.0005), and L5-S1 disc height (6.68 ± 2.10 mm; P = 0.0005). Two patients developed postoperative complications including wound dehiscence and grade I graft subsidence at L4-L5. Conclusions: This case series demonstrated that a combined lateral decubitus L5-S1 ALIF and LLIF at more cranial levels is a safe, efficient approach to multilevel lumbar interbody fusions. This technique significantly restored spinopelvic alignments with a low complication rate and no patient repositioning.
Efficacy of this minimally invasive deformity correction technique should be further investigated through a multicenter trial. abstract_id: PUBMED:34102727 Clinical application of minimally invasive lumbar interbody fusion With the popularization of the minimally invasive concept in the treatment of lumbar degenerative disease, minimally invasive lumbar interbody fusion has gradually developed into the mainstream technique of lumbar fixation and fusion. At present, there are many types of minimally invasive lumbar interbody fusion. In this paper, four kinds of minimally invasive lumbar interbody fusion (anterior lumbar interbody fusion, minimally invasive transforaminal lumbar interbody fusion, extreme lateral interbody fusion, and oblique lumbar interbody fusion), which are widely used in clinical practice, are systematically described from the aspects of indication selection, technical characteristics, clinical efficacy, and prevention of complications. In order to obtain the best treatment effect with the least trauma, surgeons need to formulate detailed surgical strategies on the basis of strictly defined indications and to choose the operation according to their own clinical experience and skills, so as to maximize the advantages of the different minimally invasive lumbar interbody fusion techniques. abstract_id: PUBMED:27683674 Lumbar interbody fusion: techniques, indications and comparison of interbody fusion options including PLIF, TLIF, MI-TLIF, OLIF/ATP, LLIF and ALIF. Degenerative disc and facet joint disease of the lumbar spine is common in the ageing population, and is one of the most frequent causes of disability. Lumbar spondylosis may result in mechanical back pain, radicular and claudicant symptoms, reduced mobility and poor quality of life. Surgical interbody fusion of degenerative levels is an effective treatment option to stabilize the painful motion segment, and may provide indirect decompression of the neural elements, restore lordosis and correct deformity. The surgical options for interbody fusion of the lumbar spine include: posterior lumbar interbody fusion (PLIF), transforaminal lumbar interbody fusion (TLIF), minimally invasive transforaminal lumbar interbody fusion (MI-TLIF), oblique lumbar interbody fusion/anterior to psoas (OLIF/ATP), lateral lumbar interbody fusion (LLIF) and anterior lumbar interbody fusion (ALIF). The indications may include: discogenic/facetogenic low back pain, neurogenic claudication, radiculopathy due to foraminal stenosis, lumbar degenerative spinal deformity including symptomatic spondylolisthesis and degenerative scoliosis. In general, traditional posterior approaches are frequently used with acceptable fusion rates and low complication rates; however, they are limited by thecal sac and nerve root retraction, along with iatrogenic injury to the paraspinal musculature and disruption of the posterior tension band. Minimally invasive (MIS) posterior approaches have evolved in an attempt to reduce approach-related complications. Anterior approaches avoid the spinal canal, cauda equina and nerve roots; however, they have issues with approach-related abdominal and vascular complications. In addition, lateral and OLIF techniques have potential risks to the lumbar plexus and psoas muscle. The present study aims firstly to comprehensively review the available literature and evidence for different lumbar interbody fusion (LIF) techniques. Secondly, we propose a set of recommendations and guidelines for the indications for interbody fusion options.
Thirdly, this article provides a description of each approach, and illustrates the potential benefits and disadvantages of each technique with reference to indication and spine level performed. abstract_id: PUBMED:36847747 Biomechanical evaluation of Percutaneous endoscopic posterior lumbar interbody fusion and minimally invasive transforaminal lumbar interbody fusion: a biomechanical analysis. In order to analyze and evaluate the stability of the lumbar spine and the risk of cage subsidence after different minimally invasive fusion operations, two finite element models, of percutaneous endoscopic posterior lumbar interbody fusion (PE-PLIF) and of minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF), were established. The results showed that compared with MIS-TLIF, PE-PLIF had better segmental stability, lower pedicle screw-rod system stress, and lower risk of cage subsidence. The results suggest that a cage of appropriate height should be selected to ensure segmental stability and to avoid the subsidence risk associated with an excessively tall cage. Answer: Laparoscopic anterior lumbar interbody fusion (LALIF) has been demonstrated to be safe and effective, and its use as a standard surgical technique for arthrodesis in the lumbosacral region has been recommended. A retrospective review of 14 patients who underwent LALIF showed no intraoperative vascular or neurological injury, with an average operating time of 300 minutes, blood loss of 60 ml, and an average hospital stay of 3.4 days. At 3 to 6 months after surgery, 80% fusion rates were achieved. However, despite its safety and effectiveness, LALIF has many disadvantages, which make it a less than optimal procedure for routine use. Other minimally invasive approaches to the anterior lumbar spine achieve similarly beneficial results without the drawbacks associated with LALIF (PUBMED:12234443). In comparison, other minimally invasive lumbar interbody fusion techniques such as anterior lumbar interbody fusion (ALIF), transforaminal lumbar interbody fusion (TLIF), and oblique lumbar interbody fusion (OLIF) have been widely adopted and are considered mainstream techniques for lumbar fixation and fusion. These techniques are performed with the goal of achieving the best treatment effect with the least trauma, and surgeons are encouraged to choose the operation based on their clinical experience and skills to maximize the advantages of different minimally invasive lumbar interbody fusion approaches (PUBMED:34102727). In summary, while LALIF is a useful minimally invasive procedure, it may not be the optimal choice for routine use due to its disadvantages. Other minimally invasive techniques offer similar benefits and may be preferred based on the specific clinical scenario and the surgeon's expertise.
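One caution worth making explicit about the 80% fusion rate in that 14-patient series: proportions from such small cohorts carry wide uncertainty. The sketch below is a minimal illustration assuming the usual Wilson score interval (not anything computed in the source study), with the count back-calculated as roughly 11 of 14 patients.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half_width, center + half_width

# ~80% fusion in a 14-patient LALIF series back-calculates to ~11 of 14.
lo, hi = wilson_ci(successes=11, n=14)
print(f"observed ~79%, 95% CI roughly {lo:.0%} to {hi:.0%}")  # ~52% to ~92%
```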
Instruction: A state-wide review of contemporary outcomes of gastric bypass in Florida: does provider volume impact outcomes? Abstracts: abstract_id: PUBMED:17457162 A state-wide review of contemporary outcomes of gastric bypass in Florida: does provider volume impact outcomes? Objectives: To report contemporary outcomes of gastric bypass for obesity and to assess the relationship between provider volume and outcomes. Background: Certain Florida-based insurers are denying patients access to bariatric surgery because of alleged high morbidity and mortality. Settings And Patients: The prospectively collected and mandatory-reported Florida-wide hospital discharge database was analyzed. Restrictive procedures such as adjustable gastric banding and gastroplasty were excluded. Results: The overall complication and in-hospital mortality rates in 19,174 patients who underwent gastric bypass from 1999 to 2003 were 9.3% (8.9-9.7) and 0.28% (0.21-0.36), respectively. Age and male gender were associated with increased duration of hospital stay (P < 0.001), increased in-hospital complications [age: odds ratio (OR) = 1.11, CI: 1.08-1.13; male: OR = 1.53, CI: 1.36-1.72] and increased in-hospital mortality (age: OR = 1.51, CI: 1.32-1.73; male: OR = 2.66, CI: 1.53-4.63), all P < 0.001. The odds of in-hospital complications significantly increased with diminishing surgeon or hospital procedure volume (surgeon: OR = 2.0, CI: 1.3-3.1; P < 0.001, 1-5 procedures relative to >500 procedures; hospital volume: OR = 2.1, CI: 1.2-3.5; P < 0.001, 1-9 procedures relative to >500 procedures). The percent change of in-hospital mortality in later years of the study was lowest, indicating higher mortality rates, for surgeons or hospitals with fewer (≤100) compared with more (≥500) procedures. Conclusion: Increased utilization of bariatric surgery in Florida is associated with overall favorable short-term outcomes. Older age and male gender were associated with increased morbidity and mortality. Surgeon and hospital procedure volume have an inverse relationship with in-hospital complications and mortality. abstract_id: PUBMED:34471980 Volume-outcome relationships for Roux-en-Y gastric bypass patients in the sleeve gastrectomy era. Background: Sleeve gastrectomy is now the most common bariatric operation performed. With lower volumes of Roux-en-Y gastric bypass (RYGB), it is unclear whether decreasing surgeon experience has led to worsening outcomes for this procedure. Methods: We used State Inpatient Databases from Florida, Iowa, New York, and Washington. Bariatric surgeons were designated as those who performed ten or more bariatric procedures yearly. Patients who had RYGB were included in our analysis. Using multi-level logistic regression, we examined whether surgeon average yearly RYGB volume was associated with RYGB patient 30-day complications, reoperations, and readmissions and 1-year revisions and readmissions. Results: From 2013 to 2017 there were 27,714 patients who underwent laparoscopic RYGB by 311 surgeons. Median surgeon volume was 77 RYGBs per year. The distribution was 10 bypasses yearly at the 5th percentile, 16 bypasses at the 10th percentile, 38 bypasses at the 25th percentile, and 133 bypasses at the 75th percentile. Multi-level regression revealed that patients of surgeons with lower RYGB volumes had small but statistically significant increased risks of 30-day complications and 1-year readmissions.
At 30 days, risk for any complication was 6.71%, 6.43%, and 5.55% at 10, 38, and 133 bypasses per year, respectively (p = 0.01). Risk for readmission at 1 year was 13.90%, 13.67%, and 12.90% at 10, 38, and 133 bypasses per year, respectively (p = 0.099). Of note, volume associations with complications and reoperations due to hemorrhage and leak were not statistically significant. There was also no significant association with revisions. Conclusion: This is the first study to examine the association of surgeon RYGB volume with patient outcomes as the national experience with RYGB diminishes. Overall, surgeon RYGB volume does not appear to have a large effect on patient outcomes. Thus, patients can safely pursue RYGB in this early phase of the sleeve gastrectomy era. abstract_id: PUBMED:22542466 Importance of hospital versus surgeon volume in predicting outcomes for gastric bypass procedures. Background: A relationship between surgical volume and improved surgical outcomes has been described in gastric bypass patients but the relative importance of surgeon versus hospital volume is unknown. Our objective was to examine whether in-hospital and 30-day mortality are determined more by surgeon volume or hospital volume or whether each has an independent effect. A retrospective cohort study was performed of all hospitals in Pennsylvania providing gastric bypass surgery from 1999 to 2003. Methods: Data from the Pennsylvania Health Care Cost Containment Council included 14,714 gastric bypass procedures in patients aged >18 years. In-hospital and 30-day mortality were stratified by hospital volume categories (high [≥300], medium [125-299], and low [<125]) and surgeon volume categories (high [≥50] and low [<50]). Multivariate analyses were performed using logistic regression analysis to control for patient demographics and severity. Results: High-volume surgeons at high-volume hospitals had the lowest in-hospital mortality rates of all categories (.12%) and low-volume surgeons at low-volume hospitals had the poorest outcomes (.57%). The same trend was observed for 30-day mortality (.30% versus .98%). After controlling for other covariates, high-volume surgeons at high-volume hospitals also had significantly lower odds of both in-hospital (odds ratio .20, P = .002) and 30-day mortality (odds ratio .30, P = .001). This relationship held true even after excluding surgeons who only performed procedures within a single year. Conclusion: In Pennsylvania, both higher surgeon and hospital volume were associated with better outcomes for bariatric surgical procedures. Although high surgeon volume correlated with lower mortality, we also found that high-volume hospitals demonstrated improved outcomes, highlighting the importance of factors other than surgical expertise in determining the outcomes.
Objective: This systematic review and meta-analysis aimed to summarize the literature and evaluate the peri-operative outcomes of patients with obesity undergoing robotic gastric bypass versus laparoscopic gastric bypass surgery. Setting: Systematic review. Methods: A literature search of Embase, Medline, Pubmed, Cochrane Library, and Google Scholar was performed according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies comparing outcomes of robotic and laparoscopic gastric bypass for obesity were included. Results: Twenty-eight eligible studies comprised a total of 82,155 patients; 9051 robotic bypass surgery (RBS) versus 73,104 laparoscopic bypass surgery (LBS) patients were included. All included studies compared Roux-en-Y gastric bypass. RBS was noted to have a higher reoperation rate within 30 days (4.4% versus 3.4%; odds ratio 1.31 [95% CI, 1.04-1.66]; P = .027; I² = 43.5%) than LBS. All other endpoints measured (complication rate, anastomotic leak, anastomotic stricture, surgical site infections, hospital readmission, length of stay, operative time, conversion rate and mortality) did not show any difference between RBS and LBS. Conclusion: This systematic review and meta-analysis showed that there was no significant difference in key outcome measures in robotic versus laparoscopic gastric bypass. RBS was associated with a slightly higher reoperation rate and there was no reduction in overall complication rate with the use of the robotic platform. abstract_id: PUBMED:37740112 Evaluating outcomes among surgeons who changed their technique for gastric bypass: a state-wide analysis from 2011 to 2021. Introduction: Technical variation exists when performing the gastrojejunostomy during Roux-en-Y gastric bypass (RYGB). However, it is unclear whether changing technique results in improved outcomes or patient harm. Methods: Surgeons participating in a state-wide bariatric surgery quality collaborative who completed a survey on how they perform a typical RYGB in 2011 and again in 2021 were included in the analysis (n = 31). Risk-adjusted 30-day complication rates and case characteristics for cases in 2011 were compared to those in 2021 among surgeons who changed their gastrojejunostomy technique from end-to-end anastomosis (EEA) to either a linear stapled or handsewn anastomosis (LSA/HSA). In addition, case characteristics and outcomes among surgeons who maintained an EEA technique throughout the study period were assessed. Results: A total of 15 surgeons (48.3%) changed their technique from EEA to LSA/HSA while 7 surgeons (22.3%) did not. Nine surgeons did LSA or HSA the entire period and therefore were not included. Surgeons who changed their technique had significantly lower rates of surgical complications in 2021 when compared to 2011 (1.9% vs 5.1%, p = 0.0015), including lower rates of wound complications (0.5% vs 2.1%, p = 0.0030) and stricture (0.1% vs 0.5%, p = 0.0533). Likewise, surgeons who did not change their EEA technique also experienced a decrease in surgical complications (1.8% vs 5.8%, p < 0.0001), wound complications (0.7% vs 2.1%, p < 0.0001) and strictures (0.2% vs 1.2%, p = 0.0006). Surgeons who changed their technique had a significantly higher mean annual robotic bariatric volume in 2021 (30.0 cases vs 4.9 cases, p < 0.0001) when compared to those who did not.
Conclusions: Surgeons who changed their gastrojejunostomy technique from circular stapled to handsewn demonstrated greater utilization of the robotic platform than those who did not and experienced a similar decrease in adverse events during the study period, despite altering their technique. Surgeons who chose to modify their operative technique may be more likely to adopt newer technologies. abstract_id: PUBMED:35099628 Comparative effectiveness of sleeve gastrectomy vs Roux-en-Y gastric bypass in patients giving birth after bariatric surgery: reinterventions and obstetric outcomes. Background: Women of childbearing age comprise approximately 65% of all patients who undergo bariatric surgery in the USA. Despite this, data on maternal reintervention and obstetric outcomes after surgery are limited, especially with regard to comparative effectiveness between sleeve gastrectomy and Roux-en-Y gastric bypass, the most common procedures today. Methods: Using IBM MarketScan claims data, we performed a retrospective cohort study of women ages 18-52 who gave birth after undergoing laparoscopic sleeve gastrectomy or Roux-en-Y gastric bypass with 2-year continuous follow-up. We balanced the cohort on observable characteristics using inverse probability weighting. We utilized multivariable logistic regression to examine the association between procedure selection and outcomes, including risk of reinterventions (revisions, enteral access, vascular access, reoperations, other) or adverse obstetric outcomes (pregnancy complications, severe maternal morbidity, and delivery complications). In all analyses, we controlled for age, U.S. state, and Elixhauser or Bateman comorbidities. Results: From 2011 to 2016, 1,079 women gave birth within the first two years after undergoing bariatric surgery. Among these women, we found no significant difference in reintervention rates among those who had gastric bypass compared to sleeve gastrectomy (OR 1.41, 95% CI 0.91-2.21, P = 0.13). We then examined obstetric outcomes in the patients who gave birth after bariatric surgery. Compared to patients who underwent sleeve gastrectomy, those who had Roux-en-Y gastric bypass were not significantly more likely to experience any adverse obstetric outcomes. Conclusion: In this first national cohort of females giving birth following bariatric surgery, no significant difference was observed in persons who underwent Roux-en-Y gastric bypass versus sleeve gastrectomy with respect to either reinterventions or obstetric outcomes. This suggests possible equipoise between these two procedures with regard to safety within the first two years following a bariatric procedure among women who may become pregnant, but more research is needed to confirm these findings in larger samples. abstract_id: PUBMED:32309410 Laparoscopic Roux en Y gastric bypass in the super obese. Bariatric surgery in super obese patients presents technical, metabolic and risk-related challenges. Moreover, there is concern that weight loss and health outcomes of surgery, including gastric bypass, may be poorer than in non-super-obese (morbidly obese) patients. This may drive clinicians toward more aggressive forms of surgery at the risk of greater morbidity. This review examines outcomes pertaining to laparoscopic Roux en Y gastric bypass (LRYGB) in the super obese and determines the role of such surgery in the current day.
Whilst a minor increase in morbidity and mortality risk exists, weight loss outcomes, when measured as percentage total body weight loss, are equivalent to those of non-super-obese patients. Final BMI is not an appropriate indicator of benefit in such patients and may lead to inappropriate escalation of surgery. Surgeons employing the use of LRYGB in the super obese should have adequate training and expertise in the technique, and operating upon super obese patients should be avoided during the learning curve phase to minimise morbidity risk. abstract_id: PUBMED:34298449 Maternal and neonatal outcomes in women undergoing Roux-en-Y gastric bypass: A systematic review and meta-analysis. Background: To summarize evidence of benefits and risks of maternal and neonatal outcomes among pregnant women after Roux-en-Y gastric bypass surgery. Methods: A systematic search was conducted in MEDLINE, EMBASE, Web of Science, Ovid and Cochrane Library until 24th May 2021. Inclusion criteria were randomized trials or observational studies including at least one of maternal or neonatal outcomes from pregnant women who had a history of Roux-en-Y gastric bypass surgery with a control group of pregnant women with no history of bariatric surgery. Non-English studies were excluded. Both fixed-effect and random-effect models were used in the meta-analyses. The Newcastle-Ottawa Scale was used to assess the quality of studies. Results: Twelve retrospective articles were eligible, including 13 848 pregnant women with a history of Roux-en-Y gastric bypass surgery and 255 008 pregnant women without a history of bariatric surgery. The results of the meta-analysis showed a lower incidence of preterm premature rupture of membranes (OR 0.53, 95% CI [0.47, 0.60], p < 0.00001), large for gestational age infants (OR 0.28, 95% CI [0.22, 0.37], p < 0.00001) or macrosomia (OR 0.26, 95% CI [0.23, 0.30], p < 0.00001) in women after Roux-en-Y gastric bypass surgery. On the other hand, infants born to mothers with a history of Roux-en-Y gastric bypass surgery had an increased risk of being small for gestational age (OR 2.24, 95% CI [1.55, 3.24], p < 0.00001). Conclusion: Roux-en-Y gastric bypass surgery significantly reduces the risks of preterm premature rupture of membranes and large for gestational age/macrosomia, but increases the risk of small for gestational age. abstract_id: PUBMED:27864536 Influence of dietary protein and its amino acid composition on postoperative outcomes after gastric bypass surgery: a systematic review. Context: Bariatric surgery is an effective method to reduce morbid obesity. Nutritional counseling is essential to achieve maximal treatment success and to avoid long-term complications. Increased dietary protein intake may improve various postoperative results. Objective: The aim of this systematic review is to examine the relationship between intake of dietary protein or supplementation with amino acids and postoperative outcomes after gastric bypass surgery. Data Sources: A systematic literature search was conducted in 4 electronic databases: Cochrane, Embase, PubMed, and Scopus. Study Selection: The initial search retrieved 7333 hits, which included 2390 duplicates. Data Extraction: Twenty-three studies with varying study designs, interventions, and outcomes were included. Results: Studies did not provide convincing evidence of a beneficial effect on any postoperative outcome.
Conclusions: The study of the influence of protein and its amino acid composition represents an important developing domain of knowledge and warrants further attention considering the popularity of bariatric surgery. Future studies should include a clear description of the quantity and composition of proteins and amino acids in the diet or supplement. abstract_id: PUBMED:37989890 One anastomosis gastric bypass as a primary bariatric surgery: MBSAQIP database analysis of short-term safety and outcomes. Background: One anastomosis gastric bypass (OAGB) is described as a simpler, potentially safe, and effective bariatric-metabolic procedure that has been recently endorsed by the American Society of Metabolic and Bariatric Surgery. Objectives: First, we aim to compare the 30-day outcomes between OAGB and other bypass procedures: Roux-en-Y gastric bypass (RYGB) and single anastomosis duodenoileal bypass with sleeve gastrectomy (SADI-S). Second, we aim to identify the odds of postoperative complications associated with each surgical procedure. Methods: Patients who underwent primary OAGB, RYGB, and SADI-S were identified using the MBSAQIP database of 2020 and 2021. Patient demographics and 30-day outcomes were compared between these three bypass procedures. In addition, a multilogistic regression for overall complications, blood transfusions, unplanned ICU admissions, readmission, reoperation, and anastomotic leak stratified by surgical procedure was performed. Results: 1607 primary OAGBs were reported between 2020 and 2021. In terms of patient demographics, patients who underwent RYGB and SADI-S showed a higher incidence of comorbidities. On the other hand, OAGB had a shorter length of stay (1.39 ± 1.10 days vs 1.62 ± 1.42 days and 1.90 ± 2.04 days) and shorter operative times (98.79 ± 52.76 min vs 125.91 ± 57.76 min and 139.85 ± 59.20 min) than RYGB and SADI-S. Similarly, OAGB showed lower rates of overall complications (1.9% vs 4.5% and 6.4%), blood transfusions (0.4% vs 1.1% and 1.8%), unplanned ICU admission (0.3% vs 0.8% and 1.4%), readmission (2.4% vs 4.9% and 5.0%), and reoperation (1.2% vs 1.9% and 3.1%). A multilogistic regression analysis was performed; RYGB and SADI-S demonstrated higher odds of 30-day complications. Conclusion: The incidence of primary OAGB has increased since its approval by the ASMBS, from 0.05% reported between 2015 and 2019 to 0.78% between 2020 and 2021. OAGB had better 30-day outcomes and shorter operative times than RYGB and SADI-S and therefore could be considered a viable alternative. Answer: The relationship between provider volume and outcomes in gastric bypass surgery has been investigated in several studies. A state-wide review of gastric bypass outcomes in Florida found that both surgeon and hospital procedure volume have an inverse relationship with in-hospital complications and mortality. Specifically, the odds of in-hospital complications significantly increased with diminishing surgeon or hospital procedure volume. The study also noted that older age and male gender were associated with increased morbidity and mortality (PUBMED:17457162).
However, the study concluded that overall, surgeon RYGB volume does not appear to have a large effect on patient outcomes, suggesting that patients can safely pursue RYGB even as national experience with the procedure diminishes (PUBMED:34471980). Further supporting the importance of provider volume, a retrospective cohort study in Pennsylvania showed that both higher surgeon and hospital volume were associated with better outcomes for bariatric surgical procedures. High-volume surgeons at high-volume hospitals had significantly lower odds of both in-hospital and 30-day mortality (PUBMED:22542466). These findings indicate that provider volume does impact outcomes in gastric bypass surgery, with higher volumes generally associated with better outcomes. However, the extent of this impact may vary, and other factors such as the type of procedure and the individual surgeon's experience also play a role in patient outcomes.
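The volume-outcome estimates summarized above come from regression models (log-binomial in the Florida study, multi-level logistic in the state-inpatient analysis) that adjust the volume effect for patient factors such as age and sex. As a minimal sketch of that kind of adjusted analysis, using synthetic data and an ordinary single-level logistic model, so the variable names, effect sizes, and model form are illustrative assumptions rather than any study's actual code:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic cohort: complication risk falls slowly with surgeon volume and
# rises with age and male sex (assumed effects, for illustration only).
rng = np.random.default_rng(0)
n = 20000
df = pd.DataFrame({
    "age": rng.normal(45, 11, n),
    "male": rng.binomial(1, 0.2, n),
    "volume": rng.choice([10, 38, 77, 133], size=n),  # surgeon RYGBs per year
})
logit = -3.0 + 0.02 * (df["age"] - 45) + 0.3 * df["male"] - 0.15 * np.log(df["volume"] / 77)
df["complication"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Adjusted odds ratios; exp() of the volume coefficient below 1 reproduces
# the inverse volume-outcome relationship reported above.
fit = smf.logit("complication ~ age + male + np.log(volume)", data=df).fit(disp=0)
print(np.exp(fit.params))

The published analyses go further: they use multi-level (mixed-effects) models so that outcomes cluster correctly within surgeons and hospitals, and the Florida study reports prevalence ratios from a log-binomial model rather than odds ratios from a logit link. The adjustment logic, however, is the same.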
Instruction: Is a 2-night polysomnographic study necessary in childhood sleep-related disordered breathing? Abstracts: abstract_id: PUBMED:15539714 Is a 2-night polysomnographic study necessary in childhood sleep-related disordered breathing? Background And Objectives: There are limited data on the night-to-night variability of childhood sleep-related disordered breathing (SDB). We aim to assess for the presence of a first-night effect (FNE) and to examine whether a single-night sleep study is adequate in the assessment of childhood SDB. Design: In a case-control study investigating whether obesity is a risk factor for childhood SDB, the night-to-night variability of sleep and respiratory variables was studied. Participants And Setting: Forty-six obese children from a pediatric obesity clinic and 44 age- and sex-matched normal-weight control subjects from local schools. Interventions: All subjects underwent two consecutive overnight polysomnographic studies. An obstructive apnea index (OAI) ≥ 1/h was considered diagnostic of SDB. Results: The mean age of the children was 11.21 years (SD 2.21). Forty-four obese children and 43 control subjects completed the 2-night study. Based on the criterion of the worst OAI over the 2 nights, 13 subjects were found to have SDB, 12 subjects were primary snorers, and 62 were normal subjects. In all subjects, the sleep efficiency improved and sleep-onset latency was reduced on the second night. While there was a rebound of rapid eye movement sleep with the associated worsening of respiratory indexes (mainly accounted for by an increase in central apneas and hypopneas) evident in normal subjects, there was a significant improvement of respiratory disturbances in the SDB group on the second night. The first-night polysomnography would have correctly identified 84.6% of cases as defined by the criteria of the worst OAI over the 2 nights. All cases missed by the first-night study had only borderline OAI. Conclusions: The phenomenon of FNE in children was well demonstrated in our study. We propose that a single-night sleep study is adequate and more cost-effective in assessing for childhood SDB. abstract_id: PUBMED:33008732 Night-time frequency of urination as a manifestation of sleep-disordered breathing: the Nagahama study. Aims: Sleep-disordered breathing (SDB) is a well-known risk factor for cardiovascular outcomes. Studies of patients with SDB have identified frequent night-time urination as a manifestation related to SDB. We aimed to clarify whether night-time frequency of urination is independently associated with SDB in a general population. We also investigated whether night-time frequency of urination can help in the presumptive diagnosis of SDB. Methods: Study participants consisted of 7151 community residents. Oxygen saturation during sleep was measured for four nights using a pulse oximeter. SDB was defined as ≥15 events per hour in which oxygen desaturation was ≥3% during an actigraphy-determined sleep period. Night-time frequency of urination was recorded for one week using a sleep diary. Results: Significant positive correlations were evident between night-time frequency of urination and SDB (none, 5.8%; once/night, 14.1%; twice/night, 20.1%; thrice/night, 28.7%; >thrice/night, 44.1%; P < 0.001). This association was independent of possible covariates, including sleep duration (adjusted odds ratio: once/night = 1.50, twice/night = 2.15, thrice/night = 3.07, >thrice/night = 3.73, P < 0.001).
Other factors significantly associated with SDB were age, sex, obesity, observation of sleep apnea, and short sleep duration. The area under the curve of the risk score for SDB consisting of these six conventional items (0.834) significantly improved (0.842, P = 0.001) when night-time frequency of urination was considered as a risk score item. Conclusion: Night-time frequency of urination was associated with SDB. Our findings suggest that the urination frequency should be considered a manifestation of SDB even in a general population. abstract_id: PUBMED:30689097 Polysomnographic characteristics of adolescents with asthma and low risk for sleep-disordered breathing. Purpose: The aim of this study was to describe the polysomnographic characteristics of adolescents with asthma who are at low risk for sleep-disordered breathing (SDB) based on the Pediatric Sleep Questionnaire (PSQ). Methods: Overnight polysomnography was performed on 85 adolescents with asthma and a score < 0.33 on the PSQ. The Asthma Control Questionnaire was used to define "well-controlled" versus "inadequately controlled" asthma. Results: Mean age of participants was 14.5 ± 1.6 years (range, 11 to 17 years), 63.5% were girls, 57.6% were Caucasians, and the mean body mass index percentile was 65.1 ± 26.5. Asthma was well-controlled in 51.7% of the adolescents and inadequately controlled in 15.3%. Mean sleep efficiency (SE) was 88.0 ± 11.1%, and 24.7% had SE < 85%. Mean wakefulness after sleep onset (WASO) was 40.9 ± 44.0 min, and the mean arousal index was 10.8 ± 5.6 per hour. The mean apnea/hypopnea index (AHI) was 2.3 ± 4.2, and 29.4% of participants had SDB (defined by an AHI ≥ 2). Compared with normative values, adolescents with asthma had more nocturnal awakenings and WASO, and less REM sleep. SDB risk was higher in boys [odds ratio = 4.6 (confidence interval 1.4-14.7), p = 0.01]. Asthma control did not impact sleep and respiratory parameters, with no differences found between youth with well-controlled and inadequately controlled asthma. Conclusions: Adolescents with asthma are at increased risk of sleep-disordered breathing and suffer from disturbances in sleep continuity with more arousals and sleep fragmentation. Study results highlight the importance of proper screening for sleep-disordered breathing in adolescents with asthma. abstract_id: PUBMED:38415722 Long-term night-to-night variability of sleep-disordered breathing using a radar-based home sleep apnea test: a prospective cohort study. Study Objectives: Night-to-night variability of sleep-disordered breathing (SDB) limits the diagnostic accuracy of a single measurement. Multiple recordings using a reliable, affordable method could reduce the uncertainty and avoid misdiagnosis, which could be possible with radar-based home sleep apnea testing (HSAT). Methods: We recruited consecutive patients with suspected SDB and performed contactless radar-based HSAT with automated scoring (Sleepiz One) over ten nights. During the first night, patients were simultaneously measured with peripheral-arterial tonometry. Results: Twenty-four of the 28 included patients could achieve a minimum of four measurements. The failure rate was 16% (201 of 238 measurements were successful). The apnea-hypopnea index (AHI) and oxygen desaturation index were consistently lower with radar-based HSAT compared to peripheral arterial tonometry.
The variability of the AHI was considerable, with a standard error of measurement of 5.2/h (95% CI 4.6-5.7/h) and a minimal detectable difference of 14.4/h (95% CI 12.7-15.9/h). Alcohol consumption partially accounted for the variability, with an AHI increase of 1.7/h (95% CI 0.6-2.8/h) for each standard drink. Based on a single measurement, 17% of patients were misdiagnosed, and 32% were misclassified for SDB severity. After five measurements, the mean AHI of the measured nights stabilized with no evidence of substantial changes with additional measurements. Conclusions: Night-to-night variability is considerable and stable over ten nights. HSAT using radar-based methods over multiple nights is feasible and well tolerated by patients. It could offer lower costs and allow for multiple-night testing to increase accuracy. However, validation and reducing the failure rate are necessary for implementation in the clinical routine. Clinical Trial Registration: "Recording of Multiple Nights Using a New Contactless Device (Sleepiz One Connect) in Obstructive Sleep Apnea", ClinicalTrials.gov, NCT05134402. abstract_id: PUBMED:32191115 Opioid-Related Sleep-Disordered Breathing: An Update for Clinicians. Opioids are an effective treatment for patients with intractable pain. Long-term administration of opioids for pain relief is being delivered by an increasing number of medical providers in the United States, including primary care physicians and nonspecialists. One common complication of chronic opioid use is sleep-disordered breathing, which can result in various morbidities as well as an increase in all-cause mortality. It is important for providers to understand the relationship between opioids and sleep-disordered breathing as well as methods to improve diagnosis and strategies for treatment. This review aims to update clinicians on the mechanism, diagnosis, and treatment of opioid-related sleep-disordered breathing in order to improve the quality of care for patients with chronic pain. abstract_id: PUBMED:35554583 Neurobehavioral morbidity of pediatric mild sleep-disordered breathing and obstructive sleep apnea. Study Objectives: Obstructive sleep apnea is associated with neurobehavioral dysfunction, but the relationship between disease severity as measured by the apnea-hypopnea index and neurobehavioral morbidity is unclear. The objective of our study is to compare the neurobehavioral morbidity of mild sleep-disordered breathing versus obstructive sleep apnea. Methods: Children 3-12 years old recruited for mild sleep-disordered breathing (snoring with obstructive apnea-hypopnea index < 3) into the Pediatric Adenotonsillectomy Trial for Snoring were compared to children 5-9 years old recruited for obstructive sleep apnea (obstructive apnea-hypopnea index 2-30) into the Childhood Adenotonsillectomy Trial. Baseline demographic, polysomnographic, and neurobehavioral outcomes were compared using univariable and multivariable analysis. Results: The sample included 453 participants with obstructive sleep apnea (median obstructive apnea-hypopnea index 5.7) and 459 participants with mild sleep-disordered breathing (median obstructive apnea-hypopnea index 0.5). By polysomnography, participants with obstructive sleep apnea had poorer sleep efficiency and more arousals. Children with mild sleep-disordered breathing had more abnormal executive function scores (adjusted odds ratio 1.96, 95% CI 1.30-2.94) compared to children with obstructive sleep apnea.
There were also elevated Conners scores for inattention (adjusted odds ratio 3.16, CI 1.98-5.02) and hyperactivity (adjusted odds ratio 2.82, CI 1.83-4.34) in children recruited for mild sleep-disordered breathing. Conclusions: Abnormal executive function, inattention, and hyperactivity were more common in symptomatic children recruited into a trial for mild sleep-disordered breathing compared to children recruited into a trial for obstructive sleep apnea. Young, snoring children with only minimally elevated apnea-hypopnea levels may still be at risk for deficits in executive function and attention. Trial Registration: Pediatric Adenotonsillectomy for Snoring (PATS), NCT02562040; Childhood Adenotonsillectomy Trial (CHAT), NCT00560859. abstract_id: PUBMED:35277358 Health disparities in pediatric sleep-disordered breathing. Sleep-disordered breathing reflects a continuum of overnight breathing difficulties, ranging from mild snoring to obstructive sleep apnea syndrome. Sleep-disordered breathing in childhood is associated with significant adverse outcomes in multiple domains of functioning. This review summarizes the evidence of well-described ethnic, racial, and socioeconomic disparities in pediatric sleep-disordered breathing, from its prevalence to its treatment-related outcomes. Research on potential socio-ecological contributors to these disparities is also reviewed. Critical future research directions include the development of interventions that address the modifiable social and environmental determinants of these health disparities. abstract_id: PUBMED:23350718 Sleep-disordered breathing in major depressive disorder. Individuals with major depressive disorder often experience obstructive sleep apnea. However, the relationship between depression and less severe sleep-disordered breathing is unclear. This study examined the rate of sleep-disordered breathing in depression after excluding those who had clinically significant sleep apnea (>5 apneas∙h⁻¹). Archival data collected between 1991 and 2005 were used to assess the prevalence of sleep-disordered breathing events in 60 (31 depressed; 29 healthy controls) unmedicated participants. Respiratory events were automatically detected using a program developed in-house measuring thermal nasal airflow and chest pressure. Results show that even after excluding participants with clinically significant sleep-disordered breathing, individuals with depression continue to exhibit higher rates of sleep-disordered breathing compared with healthy controls (depressed group: apnea-hypopnea index mean = 0.524, SE = 0.105; healthy group: apnea-hypopnea index mean = 0.179, SE = 0.108). Exploratory analyses were also conducted to assess for rates of exclusion in depression studies due to sleep-disordered breathing. Study exclusion of sleep-disordered breathing was quantified based on self-report during telephone screening, and via first-night polysomnography. Results from phone screening data reveal that individuals reporting depression were 5.86 times more likely to report a diagnosis of obstructive sleep apnea than presumptive control participants. Furthermore, all of the participants excluded for severe sleep-disordered breathing detected on the first night were participants with depression. These findings illustrate the importance of understanding the relationship between sleep-disordered breathing and depression, and suggest that screening and quantification of sleep-disordered breathing should be considered in depression research.
abstract_id: PUBMED:33556778 Prevalence, clinical symptoms and polysomnographic findings of REM-related sleep disordered breathing in Japanese population. Background: Rapid eye movement (REM) sleep-predominant sleep-disordered breathing (REM-SDB) is a condition in which sleep apneas or hypopneas predominantly occur during REM sleep. This study aimed to investigate the prevalence, clinical findings including daytime sleepiness or comorbid cardio-metabolic diseases, and detailed polysomnographic features of REM-SDB in a Japanese population. Methods: A total of 1458 patients diagnosed with SDB were selected for this study. The prevalence of REM-SDB was estimated in the groups of mild, moderate and severe SDB. Polysomnographic findings, daytime sleepiness self-measured with the Epworth Sleepiness Scale (ESS) and history of comorbid cardio-metabolic diseases were obtained and compared between the groups with REM-SDB and non-stage-specific SDB. Results: The prevalence of REM-SDB according to the criteria #2 was 22.1% in total SDB subjects, 46.1% in mild SDB and 18.9% in moderate SDB. None was found in severe SDB. Patients with REM-SDB showed milder SDB, higher BMI and higher female predominance than those with non-stage-specific SDB. They also showed a lower arousal index, respiratory arousal index per total AHI, percentage of N1 sleep, total AHI and ODI, and higher sleep efficiency. Meanwhile, maximum event durations of obstructive apnea and hypopnea were significantly longer, and the nadir SpO2 was lower, in REM-SDB. The ESS score and percentages of comorbid cardio-metabolic diseases were roughly equivalent between REM-SDB and non-stage-specific SDB. Conclusion: REM-SDB was associated with a higher degree of obesity, female gender, and mild severity of SDB. These patients showed daytime sleepiness and comorbid cardio-metabolic diseases roughly equal to those of patients with non-stage-specific SDB in spite of their lower AHI and better sleep quality. abstract_id: PUBMED:28673796 Sleep disordered breathing in enuretic children and controls. Introduction: Nocturnal enuresis and sleep disordered breathing are common childhood problems that are reported to be associated with each other. Sleep disordered breathing is often found in children with upper airway obstruction and, according to some studies, its presence is associated with an increased risk of nocturnal enuresis. Respiration during sleep in children with therapy-resistant enuresis, but no history of snoring or sleep apneas, has previously been investigated, and subclinical signs of disordered respiration were found in this group. However, sleep disordered breathing in enuretic children without a history of snoring or sleep apneas has not been thoroughly studied before. Aim: To evaluate sleep disordered breathing in enuretic children and compare them with healthy control children. Subjects And Methods: Children aged 8-13 years with nocturnal enuresis were included. Exclusion criteria were: daytime incontinence, on-going anti-enuretic treatment, and concomitant urological, endocrinological, nephrological or psychiatric disorders. Twenty children (19 boys and 1 girl) suffering from therapy-resistant nocturnal enuresis, and 21 healthy controls (18 boys and 3 girls) underwent one night of polygraphic sleep registration focused on respiratory variables. The registration included electroencephalography as well as assessment of respiratory movements, nasal airflow and oxygen saturation; it was performed with a portable sleep device at the subjects' homes.
In addition to this, OSA 18, a health-related quality of life instrument, was used to evaluate subjective issues related to sleep and breathing. Results: The mean apnea hypopnea index values were 0.96 ± 0.8 for the patient group and 0.46 ± 0.4 for the control group. The oxygen desaturation index was slightly higher for the children with nocturnal enuresis compared with the healthy controls (P = 0.05). No other differences were found in the respiratory variables. Both groups of children showed low levels of arousals (Summary Table). The enuretic children reported significantly more subjective sleep disturbances and a lower quality of life than their healthy peers. Discussion: This was the first controlled study of sleep disordered breathing in children with nocturnal enuresis. One limitation of the study was that some variables were known to be underestimated when scoring polygraphic data. The apnea hypopnea index was such a variable and was indeed lower than in a previous study. Conclusion: No major differences in respiration during sleep were found between enuretic children and controls. Answer: Based on the available abstracts, a 2-night polysomnographic study may not be necessary for all cases of childhood sleep-related disordered breathing (SDB). In a study assessing the night-to-night variability of childhood SDB, it was found that a single-night sleep study was adequate and more cost-effective in assessing for childhood SDB. The first-night polysomnography correctly identified 84.6% of cases as defined by the criteria of the worst obstructive apnea index (OAI) over the 2 nights. All cases missed by the first-night study had only borderline OAI (PUBMED:15539714). This suggests that while there is a first-night effect (FNE) demonstrated in children, with sleep efficiency improving and sleep-onset latency reducing on the second night, a single-night study may be sufficient for most cases. However, it is important to note that night-to-night variability of SDB can limit the diagnostic accuracy of a single measurement. A study on long-term night-to-night variability using radar-based home sleep apnea testing (HSAT) over ten nights found considerable variability in the apnea-hypopnea index (AHI), with a standard error of measurement of 5.2/h. After five measurements, the mean AHI of the measured nights stabilized with no evidence of substantial changes with additional measurements (PUBMED:38415722). This suggests that while multiple-night testing could increase diagnostic accuracy, it may not be necessary beyond a certain number of nights. In conclusion, while there is variability in SDB across different nights, for many cases, a single-night polysomnographic study may be adequate for assessing childhood SDB. However, in cases where the diagnosis is uncertain or borderline, additional nights of testing could be considered to improve diagnostic accuracy. Clinicians should consider the individual patient's circumstances, the presence of borderline OAI, and the potential for variability when deciding on the necessity of a second night of polysomnography.
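The radar-based HSAT abstract above quantifies night-to-night variability with a standard error of measurement (SEM) and a minimal detectable difference (MDD), where MDD = 1.96 × √2 × SEM (≈ 2.77 × SEM, which reproduces the reported 14.4/h from the SEM of 5.2/h). A minimal sketch of this repeated-measures arithmetic on synthetic nightly AHI values follows; the data, and the simple within-subject-variance formulation, are illustrative assumptions rather than the study's actual computation:

import numpy as np

# Synthetic cohort: 24 patients, 10 nightly AHI recordings each, with a
# stable per-patient level plus night-to-night noise (assumed SD of 5.2/h).
rng = np.random.default_rng(1)
n_patients, n_nights = 24, 10
true_ahi = rng.uniform(5, 40, size=(n_patients, 1))
nightly = true_ahi + rng.normal(0, 5.2, size=(n_patients, n_nights))

# SEM = within-subject standard deviation (one-way repeated-measures view);
# MDD = 1.96 * sqrt(2) * SEM, the smallest change distinguishable from noise.
sem = np.sqrt(nightly.var(axis=1, ddof=1).mean())
mdd = 1.96 * np.sqrt(2) * sem
print(f"SEM ~ {sem:.1f}/h, MDD ~ {mdd:.1f}/h")

Averaging k nights shrinks the standard error of the mean AHI by a factor of √k, which is consistent with the study's observation that the per-patient mean stabilized after about five recordings.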
Instruction: Could driving safety be compromised by noise exposure at work and noise-induced hearing loss? Abstracts: abstract_id: PUBMED:18836961 Could driving safety be compromised by noise exposure at work and noise-induced hearing loss? Objective: A study was conducted to verify if there is an association between occupational noise exposure, noise-induced hearing loss and driving safety, expanding on previous findings by Picard et al. (2008) that the two factors did increase accident risk in the workplace. Methods: This study was made possible when driving records of all Quebec drivers were made available by the Societe de l'assurance automobile du Quebec (SAAQ is the state monopoly responsible for the provision of motor vehicle insurance and the compensation of victims of traffic accidents). These records were linked with personal records maintained by the Quebec National Institute of Public Health as part of its mission to prevent noise-induced hearing loss in the workplace. Individualized information on occupational noise exposure and hearing sensitivity was available for 46,030 male workers employed in noisy industries who also held a valid driver's permit. The observation period was five years in duration, starting with the most recent audiometric examination. The associations between occupational noise exposure levels, hearing status, and personal driving record were examined by log-binomial regression on data adjusted for age and duration of exposure. Daily noise exposures and bilateral average hearing threshold levels at 3, 4, and 6 kHz were used as independent variables, while the dependent variables were 1) the number of motor vehicle accidents experienced by participants during the study period and 2) participants' records of registered traffic violations of the highway safety code. The findings are reported as prevalence ratios (PRs) with their 95% confidence intervals (CIs). Attributable numbers of events were computed with the relevant PRs, with lesser-noise-exposed workers and those with normal hearing levels forming the reference group. Results: Adjusting for age confirmed that experienced workers had fewer traffic accidents. The data show that occupational noise exposure and hearing loss have the same effect on the driving safety record as that reported on the risk of accident in noisy industrial settings. Specifically, the risk of traffic accident (PR = 1.07; 95% CI [1.01; 1.15]) is significantly associated with daily occupational noise exposures ≥100 dBA. For participants having a bilateral average hearing loss ranging from 16 to 30 dB, the PR of traffic accident is 1.06 (95% CI [1.01; 1.11]) and reaches 1.31 (95% CI [1.2; 1.42]) when the hearing loss exceeds 50 dB. A reduction in the number of speeding violations occurred among workers occupationally exposed to noise levels ≥90 dBA and those with noise-induced hearing loss ≥16 dB. By contrast, the same individuals had an increase in other violations of the highway safety code. This suggests that noise-exposed workers might be less vigilant to other traffic hazards. Conclusion: Daily occupational noise exposures ≥100 dBA and noise-induced hearing losses, even when just barely noticeable, may interfere with the safe operation of motor vehicles. abstract_id: PUBMED:21106136 Noise exposure in convertible automobiles. Objective: To quantify the noise exposure received while driving a convertible automobile with the top open, compared with the top closed.
Methods: Five different convertible automobiles were driven, with the top both closed and open, and noise levels were measured. The cars were tested at speeds of 88.5, 104.6 and 120.7 km/h. Results: When driving with the convertible top open, the mean noise exposure ranged from 85.3 dB at 88.5 km/h to 89.9 dB at 120.7 km/h. At the tested speeds, noise exposure increased by an average of 12.4-14.6 dB after opening the convertible top. Conclusion: Driving convertible automobiles at speeds exceeding 88.5 km/h with the top open may result in noise exposure levels exceeding recommended limits, especially when driving with the convertible top open for prolonged periods. abstract_id: PUBMED:26011417 Patrol Officer Daily Noise Exposure. Previous research shows that police officers are at a higher risk for noise-induced hearing loss (NIHL). Little data exist on the occupational tasks, outside of the firing range, that might lead to the increased risk of NIHL. The current study collected noise dosimetry from patrol officers in a smaller department and a larger department in southern Wisconsin, United States. The noise dosimeters simultaneously measured noise in three virtual dosimeters that had different thresholds, criterion levels, and exchange rates. The virtual dosimeters were set to: the Occupational Safety and Health Administration (OSHA) hearing conservation criteria (OSHA-HC), the OSHA permissible exposure level criteria (OSHA-PEL), and the American Conference of Governmental Industrial Hygienists (ACGIH) criteria. In addition to wearing a noise dosimeter during their respective work days, officers completed a log form documenting the type of task performed, the duration of that task, whether the task involved the use of a siren, and officer characteristics that may have influenced their noise exposure, such as the type of dispatch radio unit worn. Analysis revealed that the normalized 8-hour time-weighted averages (TWAs) for all officers fell below the recommended OSHA and ACGIH exposure limits. The tasks involving the use of the siren had significantly higher levels than the tasks without (p = 0.005). The highest noise exposure levels were encountered when patrol officers were assisting other public safety agencies such as a fire department or emergency medical services (79 dBA). Canine officers had higher normalized 8-hr TWA noise exposure than regular patrol officers (p = 0.002). Officers with an evening work schedule had significantly higher noise exposure than the officers with a day or night work schedule (p = 0.023). There were no significant differences in exposure levels between the two departments (p = 0.22). Results suggest that this study population is unlikely to experience NIHL as established by the OSHA or ACGIH occupational exposure levels from the daily occupational tasks that were monitored. abstract_id: PUBMED:34228059 Noise Exposure and Evaluation at Tire-Changing Facilities. Thirty (30) personal noise-exposure samples were collected on 20 tire-changing and repair technicians in three tire-changing facilities to determine their personal noise exposures and to estimate the maximum number of tire changes that could be performed without exceeding occupational exposure limits.
Of the 30 projected 8-h time-weighted average noise samples, none exceeded the Occupational Safety and Health Administration's Permissible Exposure Limit, 1 (3%) exceeded the Occupational Safety and Health Administration's Action Level, and 18 (60%) exceeded the American Conference of Governmental Industrial Hygienists Threshold Limit Value of 85 dBA, indicating the need for a hearing loss prevention program. The average shift time for the technicians was 6 h and 42 min and the average number of tire changes was 18. Based on the projected 8-h noise exposure 95% upper confidence limits, the estimated maximum number of tires that could be changed without exceeding the limit was 32 tires for the Occupational Safety and Health Administration's noise action level, greater than 40 tires for the permissible exposure limit, and less than 20 tires for the American Conference of Governmental Industrial Hygienists' Threshold Limit Value. In addition, area noise samples of tire-changing equipment were taken with a sound-level meter to identify the noise sources that contributed to the tire technicians' exposures. The air ratchet, tire-changing machine, and tire-bead seater were measured at noise levels >85 dBA, increasing the risk of noise-induced hearing loss to the technicians. abstract_id: PUBMED:11010710 Health standards for occupational noise exposure. Mine Safety and Health Administration (MSHA), Labor. Final rule. This final comprehensive rule replaces MSHA's existing standards for occupational noise exposure in coal mines and metal and nonmetal mines. The final rule establishes uniform requirements to protect the Nation's miners from occupational noise-induced hearing loss. The rule is derived in part from existing MSHA noise standards, and from the Department of Labor's existing occupational noise exposure standard for general industry promulgated by the Occupational Safety and Health Administration (OSHA). As a result of the Agency's ongoing review of its safety and health standards, MSHA determined that its existing noise standards, which are more than twenty years old, do not adequately protect miners from occupational noise-induced hearing loss. A significant risk to miners of material impairment of health from workplace exposure to noise over a working lifetime exists when miners' exposure exceeds an 8-hour time-weighted average (TWA8) of 85 dBA. MSHA expects that the final rule will significantly reduce the risk of material impairment within the mining industry as a whole. abstract_id: PUBMED:24639292 Occupational noise exposure and noise-induced hearing loss are associated with work-related injuries leading to admission to hospital. Objective: This study focuses on work-related injuries that required admission to hospital in a population of male workers exposed to occupational noise (≥80 dBA), some of whom displayed a hearing loss due to their exposure. Methods: The study population comprised 46 550 male workers, 1670 (3.6%) of whom incurred at least one work-related injury requiring admission to hospital within a period of 5 years following hearing tests conducted between 1987 and 2005. The noise exposure and hearing loss-related data were gathered during occupational noise-induced hearing loss (NIHL) screening. The hospital data were used to identify all members of the study population who were admitted, and the reason for admission. Finally, access to the death-related data made it possible to identify participants who died during the course of the study.
A Cox proportional hazards model taking into account hearing status, noise levels, age and cumulative duration of noise exposure at the time of the hearing test established the risk of work-related injuries leading to admission to hospital. Results: For each dB of hearing loss, a statistically significant risk increase was observed (HR=1.01 per dB; 95% CI 1.006 to 1.01). An association (HR=2.36; 95% CI 2.01 to 2.77) was also found between working in occupational ambient noise ≥100 dBA and the risk of injury. Conclusions: From a safety perspective, this issue is highly relevant, especially when workers are exposed to intense ambient noise and NIHL. abstract_id: PUBMED:23241520 Effect of noise exposure on occupational injuries: a cross-sectional study. Background: Noise exposure is the most frequent occupational factor that may increase the risk of work-related injuries. The purpose of this study was to estimate the association between occupational injuries and noise exposure as well as hearing loss. Methods: This study was conducted from April 2008 to March 2009 on 1062 workers in the Tabriz Tractor Manufacturing Plant. Sound pressure level (SPL) ≥85 dB in the workplace was considered as the independent variable (exposure) and physical occupational injuries as the dependent variable (outcome). Data were extracted from the workers' medical records using a checklist. Results: Of 1062 volunteers, 392 (36.9%) were exposed (with SPL≥85 dB) and 670 (63.1%) were unexposed (with SPL<85 dB). One hundred and seventy-six injuries occurred during the study period. Frequency of injuries was about 16 per 200,000 hours, severity of injuries was about 75 per 200,000 hours and the incidence rate of injuries was 16%. The exposed group had a 1.52-fold increased risk of injury compared to the unexposed group (P=0.012). Compared to the group with normal hearing status, the risk of occupational injuries increased 1.72-fold in the group with partial hearing loss (P=0.062), 7.87-fold in the group with mild hearing loss (P=0.003), and 4.58-fold in the group with moderate hearing loss (P=0.049). Conclusion: Occupational noise exposure and hearing impairment have an adverse effect on work safety and can increase the probability of work-related injuries. This means reducing noise exposure can contribute to increased safety in workplaces where noise is a factor. Furthermore, using assistive listening devices may reduce the risk of work injuries among hearing-impaired workers. abstract_id: PUBMED:37139364 Occupational noise exposure at sea: A socio-legal study on fish harvesters' perceptions in Newfoundland and Labrador, Canada. Introduction: Noise is a significant health hazard for fish harvesters. Chronic exposure to hazardous noise levels of 85 dB (A) for an 8-h work shift can have adverse health impacts, including both auditory and non-auditory health problems such as noise-induced hearing loss, stress, hypertension, sleeping disorders, and impaired cognitive performance. Methods: A review of legislation and policies governing workplace noise exposure, as well as qualitative, semi-structured interviews, were conducted to assess how fish harvesters in Newfoundland and Labrador (NL) manage onboard occupational noise exposure and perceive noise-induced health problems, as well as the barriers and challenges associated with preventing and controlling noise exposure. Results: The legal review shows no compulsory noise preventive measure at the fishing vessel design stage in Canada.
Implementation of Occupational Health and Safety (OHS) regulations to control and prevent onboard noise by employers in Newfoundland and Labrador was limited. Fishers reported that their workplace is noisy. Over time, fish harvesters adapted to the environment and learned to tolerate loud noise, displaying fatalistic behavior. Fish harvesters reported avoiding using hearing protection onboard due to navigation safety concerns. Fishers reported hearing loss as well as other non-auditory health problems. Inadequate noise control measures adopted by employers, a limited supply of hearing protection onboard, and a lack of regular hearing testing, training, and education were identified as the main barriers to preventing and controlling noise exposure. Conclusion: Proper implementation of NL OHS regulations and the development of hearing conservation initiatives by employers are necessary. All stakeholders, including the federal and provincial governments, WorkplaceNL, and not-for-profit fishing organizations in the province, are strongly recommended to initiate training and education campaigns to help fish harvesters understand noise exposure and adopt preventive measures. abstract_id: PUBMED:27330303 Assessment of Occupational Noise Exposure among Groundskeepers in North Carolina Public Universities. Groundskeepers may have an increased risk of noise-induced hearing loss due to the performance of excessively noisy tasks. This study assessed the exposure of groundskeepers to noise in multiple universities and determined the association between noise exposure and variables (i.e., university, month, tool used). Personal noise exposures were monitored during the work shift using noise dosimetry. A sound level meter was used to measure the maximum sound pressure levels from groundskeeping equipment. The mean Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH) time-weighted average (TWA) noise exposures were 83.0 ± 9.6 and 88.0 ± 6.7 dBA, respectively. About 52% of the OSHA TWAs and 77% of the NIOSH TWAs exceeded 85 dBA. Riding mower use was associated with high TWA noise exposures and with having OSHA TWAs exceeding 85 and 90 dBA. The maximum sound pressure levels of equipment and tools measured ranged from 76 to 109 dBA, 82% of which were >85 dBA. These findings indicate that groundskeepers have excessive noise exposures, which may be effectively reduced through careful scheduling of the use of noisy equipment/tools. abstract_id: PUBMED:36053031 Evaluating Occupational Noise Exposure as a Contributor to Injury Risk among Miners. Objectives: This study: (i) assessed the relationship between noise exposure and injury risk, comprehensively adjusting for individual factors, psychosocial stressors, and organizational influences; (ii) determined the relative importance of noise on injuries; (iii) estimated the lowest observed adverse effect level (LOAEL) of noise on injury risk to determine the threshold of noise considered hazardous to injuries; and (iv) quantified the fraction of injuries that could be attributed to hazardous noise exposure. Methods: In this cross-sectional study at 10 US surface mine sites, traditional mixed effects, Poisson regression, and boosted regression tree (BRT) models were run on the number of reported work-related injuries in the last year.
The LOAEL of noise on injuries was identified by estimating the percent increase in work-related injuries at different thresholds of noise exposure using a counterfactual estimator through the BRT model. A population attributable fraction (PAF) was quantified with this counterfactual estimator to predict reductions in injuries at the LOAEL. Results: Among 18 predictors of work-related injuries, mine site, perceived job safety, age, and sleepiness were the most important predictors. Occupational noise exposure was the seventh most important predictor. The LOAEL of noise for work-related injuries was a full-shift exposure of 88 dBA. Exposure ≥88 dBA was attributed to 20.3% (95% CI: 11.2%, 29.3%) of reported work-related injuries in the last year among the participants. Conclusions: This study further supports hypotheses of a dose-response relationship between occupational noise exposure and work-related injuries, and suggests that exposures ≥88 dBA may increase injury risk in mining. Answer: Yes, driving safety could be compromised by noise exposure at work and noise-induced hearing loss. A study conducted on 46,030 male workers employed in noisy industries who also held a valid driver's permit found that daily occupational noise exposures ≥ 100 dBA and noise-induced hearing losses, even when just barely noticeable, may interfere with the safe operation of motor vehicles. Specifically, the risk of traffic accidents was significantly associated with daily occupational noise exposures ≥ 100 dBA, with a prevalence ratio (PR) of 1.07. For participants with a bilateral average hearing loss ranging from 16 to 30 dB, the PR of traffic accidents was 1.06, and it reached 1.31 when the hearing loss exceeded 50 dB. Additionally, while a reduction in the number of speeding violations occurred among workers occupationally exposed to noise levels ≥ 90 dBA and those with noise-induced hearing loss ≥16 dB, there was an increase in other violations of the Highway Safety Code, suggesting that noise-exposed workers might be less vigilant to other traffic hazards (PUBMED:18836961). This finding is consistent with other research indicating that occupational noise exposure and noise-induced hearing loss are associated with work-related injuries, including those leading to hospital admission (PUBMED:24639292), and that noise exposure can increase the risk of work-related injuries (PUBMED:23241520). Moreover, noise exposure in specific occupational settings, such as driving a convertible automobile with the top open, can result in noise exposure levels exceeding recommended limits (PUBMED:21106136), which could potentially contribute to compromised driving safety. Therefore, the evidence suggests that occupational noise exposure and the resultant hearing loss can indeed have a detrimental effect on driving safety.
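The prevalence ratios and attributable numbers cited in this answer come from log-binomial regression, but the headline arithmetic is simple once the adjusted PR is in hand. A minimal sketch follows; the 2x2 counts are invented for illustration and are not the study's data:

# Sketch: prevalence ratio (PR) and attributable number of events from a
# 2x2 exposure/outcome table. Counts below are invented, not study data.
exposed_cases, exposed_total = 561, 6000      # e.g., workers at >=100 dBA
ref_cases, ref_total = 3500, 40030            # lesser-exposed reference group

risk_exposed = exposed_cases / exposed_total
risk_ref = ref_cases / ref_total
pr = risk_exposed / risk_ref                  # prevalence ratio

# Events among the exposed attributable to exposure: cases * (PR - 1) / PR.
attributable = exposed_cases * (pr - 1) / pr
print(f"PR = {pr:.2f}, attributable events ~ {attributable:.0f}")

In the published analysis the PR is additionally adjusted for age and duration of exposure before this attributable-number step, which is why a crude table like this can only approximate the reported figures.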
Instruction: The Rusch ViewMax vs Macintosh laryngoscopy in human cadavers: improved vocal cord visualization? Abstracts: abstract_id: PUBMED:19683119 The Rusch ViewMax vs Macintosh laryngoscopy in human cadavers: improved vocal cord visualization? Objectives: Adequate vocal cord visualization via laryngoscopy is a key component of successful endotracheal intubation. Several tools exist to facilitate laryngoscopy in difficult airways. We compared one such device, the Rusch "ViewMax" (Rusch, Duluth, Ga), to a standard Macintosh laryngoscope blade (Heine USA Ltd, Dover, NH) using human cadaver models. The purpose of this study was to determine if the ViewMax improved vocal cord visualization. Methods: Emergency medicine residents and faculty (N = 26) attempted vocal cord visualization with both ViewMax and Macintosh laryngoscope blades for each of 6 cadaver airways at an airway laboratory training exercise. Percentage of glottic opening (POGO) score was estimated for each laryngoscopy attempt. Data were analyzed by nonparametric statistics. Results: Of 6 cadaver airways, 4 had median POGO score estimates of 100% for both ViewMax and Macintosh laryngoscope blades. Two cadavers had median POGO estimates of less than 50% for both blades. No difference was found in POGO score estimates between the ViewMax and the Macintosh blades in any of the 6 cadavers, including those with more difficult vocal cord visualization (P = .27, .35, .61, .40, .39, .31). Conclusion: The Rusch "ViewMax" was not shown to improve vocal cord visualization over standard Macintosh blade laryngoscopy in these cadaver models. Further study with cadaver models known to have more difficult airways may improve power to detect a small difference in vocal cord visualization, though the clinical relevance of any slight difference remains uncertain. abstract_id: PUBMED:28094479 Retromolar laryngoscopy: a randomized crossover vocal cords visualization study. Background: Vocal cord visualization is a major determinant of successful tracheal intubation. The aim of our study was to compare vocal cord visualization using conventional direct laryngoscopy with retromolar direct laryngoscopy in patients with an existing retromolar gap at the right mandible. Methods: We enrolled 100 adults needing endotracheal intubation for elective surgery. In each patient, the vocal cords were visualized and scored according to Cormack and Lehane with a Macintosh blade #3 for the conventional technique and with a Miller blade #4 for the retromolar technique in a randomized sequence. Finally, tracheal intubation was performed primarily by conventional laryngoscopy and, in the case of failure, retromolar laryngoscopy was used as the rescue method. Results: Overall, 100 laryngoscopies with the conventional method and 100 laryngoscopies with the retromolar method were scored according to Cormack and Lehane. The retromolar technique achieved significantly (P=0.000003) lower Cormack and Lehane scores compared to the conventional technique. In eleven patients, intubation by conventional laryngoscopy failed, and seven of those patients were successfully intubated by the retromolar technique. A BURP maneuver significantly improved vocal cord visualization during both methods. Conclusions: In summary, laryngoscopy via the retromolar method using a Miller blade #4 led to significantly better vocal cord visualization compared to the conventional method performed with a Macintosh blade #3 in patients with an existing retromolar gap on the right side.
abstract_id: PUBMED:34412901 Perioperative transcutaneous laryngeal ultrasonography to assess vocal cord function in thyroid surgery. Background: Early diagnosis of iatrogenic vocal cord injury is crucial, as is perioperative vocal cord evaluation. Methods: Vocal cord mobility detected via transcutaneous laryngeal ultrasonography was compared with that detected via laryngoscopy (the reference). The vocal cord visualization rate of ultrasonography for the evaluation of mobility was also explored. Results: The diagnostic odds ratio of transcutaneous laryngeal ultrasonography was 303.2212 (95% CI, [86.7944; 1059.3198]). The area under the summary receiver operating characteristic curve was 0.944. The sensitivity, specificity, and negative predictive value were 0.9154 [0.8471; 0.9548], 0.9771 [0.9541; 0.9887], and 0.9915 [0.9868; 0.9946], respectively. The vocal cord visualization rate of ultrasonography used to evaluate vocal cord mobility was high (0.9572 [0.9091; 0.9804]). Conclusions: Since transcutaneous laryngeal ultrasonography has an advantage in vocal cord visualization, it can be considered when laryngoscopy is unavailable or patients refuse laryngoscopy. Also, it is diagnostically accurate regardless of the landmarks used, the vocal cord palsy (VCP) definition, and the timing of application. abstract_id: PUBMED:20969690 Laryngoscopic techniques to assess vocal cord mobility following thyroid surgery. Background: Assessment of vocal cord mobility using the Macintosh laryngoscope is frequently performed after extubation following thyroid surgery to rule out laryngeal nerve palsy. This study compared patient comfort and assessment accuracy of post-operative vocal fold mobility with the Macintosh laryngoscope and the fibreoptic endoscope. Methods: One hundred four physically fit patients undergoing thyroid surgery were included in the study. Tele-laryngoscopy was done to rule out pre-existing vocal cord palsy. Direct laryngoscopy (DL) was performed on each patient after extubation, followed by nasal fibreoptic endoscopy (NFE) to assess vocal cord mobility. Tele-laryngoscopic assessment was repeated after 1 week to compare the DL and NFE findings. Patient reactivity score (PRS) and haemodynamic parameters were recorded with each technique. Results: The Macintosh laryngoscope could pick up 4 (50% sensitivity and 88% specificity) and the fibreoptic endoscope 7 (87.9% sensitivity and 98.9% specificity) out of the 8 vocal cord palsies identified by tele-laryngoscopy. Patients had significant discomfort during DL (PRS median 3) when compared with NFE and tele-laryngoscopy (PRS median 2), P<0.05. A Grade 1 view of the larynx in 92.1% of patients during intubation worsened to grade 2 (76.3%) and grade 3 (15.8%) during extubation with DL, and a significant rise in mean arterial pressure and heart rate was observed from the baseline value up to 5 min and when compared with NFE (P<0.05). Conclusion: NFE provides accurate assessment of vocal fold mobility with reasonable patient comfort in the immediate post-operative period. The Macintosh laryngoscope fails to give optimum visualization and predisposes the patient to significant discomfort and stress. abstract_id: PUBMED:35724492 Feasibility and accuracy of laryngeal ultrasound for the assessment of vocal cord mobility in children. Objective: Transcutaneous laryngeal ultrasonography has been utilized to document vocal cord mobility in the adult perioperative thyroidectomy setting with variable success.
The aims of this study were to assess the feasibility of using transcutaneous ultrasound for vocal cord visualization in children, to determine its accuracy in detecting vocal cord immobility compared with flexible laryngoscopy, and to identify any barriers that may affect its utility. Methods: This is a prospective blinded-assessor study. Transcutaneous laryngeal ultrasound and flexible laryngoscopy were performed in two pediatric settings: perioperative thyroid surgery and inpatient consultation for airway, voice, or swallowing concerns. The video recordings of ultrasound and laryngoscopy were assessed by two fellowship-trained pediatric otolaryngologists independently. Results: A total of 83 paired laryngoscopy and ultrasound assessments were performed, 39 for perioperative thyroidectomy and 44 for inpatient consultation. The majority had normal vocal cord mobility (65/83, 78%), while 16 had unilateral cord palsy and 2 had bilateral palsy. The vocal cords were successfully visualized on ultrasound in 82/83 (99%) evaluations. Compared to gold-standard laryngoscopy, the sensitivity for diagnosing vocal cord palsy on ultrasound was 53-83%, and the specificity was 97-98%. The sensitivity for detecting asymmetric vocal cord movement was 56-88%, and the specificity was 99%. The agreement between ultrasound and laryngoscopy diagnoses was 87-94% (κ = 0.58-0.83, moderate to near-perfect agreement). The intra-rater agreement was 95-100% (κ = 0.64-1.0), and inter-rater agreement was 90% (κ = 0.66). Both subjects with bilateral vocal cord palsy had incorrect ultrasound diagnoses by both observers. Conclusion: We were able to utilize transcutaneous laryngeal ultrasound to successfully visualize vocal cord mobility in almost all pediatric patients with a high degree of specificity. The modality has limitations in the assessment of bilateral vocal cord palsy, and its sensitivity was observer-dependent. Level of Evidence: 4. abstract_id: PUBMED:31680502 GlideScope® versus Macintosh laryngoscope for assessment of post-thyroidectomy vocal cord dysfunction: prospective randomized study. Background: Early detection of post-thyroidectomy vocal cord dysfunction is crucial. This study compared the GlideScope® with the Macintosh direct laryngoscope (MDL) regarding the accuracy of assessment of post-thyroidectomy vocal cord dysfunction. Methods: One hundred and twenty patients scheduled for elective thyroidectomy, aged 18-50 years, of either sex were enrolled in the study. A standard general anesthesia technique was used in all patients. At the end of the operation, patients were randomly allocated to one of the following sequences: 1) examined with the MDL then the GlideScope®, or 2) examined with the GlideScope® then the MDL. Postoperative tele-laryngoscopic examination was done after 12 hours. The primary outcome of the study was the sensitivity and specificity of the GlideScope® for detecting vocal cord dysfunction, while the secondary outcomes were the Cormack-Lehane grade of glottis visualization and the incidence of complications. Results: The incidence of vocal cord dysfunction was 15 (12.5%); all cases were unilateral. The GlideScope® was more sensitive (86.7%) and specific (95.2%), with higher diagnostic accuracy (94.2%), for detecting vocal cord dysfunction than the MDL. There was high agreement between the GlideScope® findings and the final diagnosis (kappa 0.754 [95% CI: 0.581-0.928]). The Cormack-Lehane score was significantly better with the GlideScope® than with the MDL (P<0.001).
Hoarseness of voice occurred in 32 patients (26.7%), eight of them among the patients diagnosed with vocal cord dysfunction. No cases of aspiration or stridor were recorded. Conclusions: This study demonstrates that the GlideScope® is a better alternative to the MDL for the accurate detection of post-thyroidectomy vocal cord dysfunction. abstract_id: PUBMED:11994114 The potential of virtual laryngoscopy in the assessment of vocal cord lesions. Virtual laryngoscopy is a useful adjunctive radiological tool in the assessment of laryngeal lesions. A total of 10 patients requiring direct laryngoscopy for the investigation of laryngeal lesions underwent preoperative virtual laryngoscopy using three-dimensional reconstruction of two-dimensional computerized tomography (CT) images. All lesions were correctly diagnosed on virtual laryngoscopy before direct laryngoscopy. Its main advantages are that it does not require general anaesthesia, it allows three-dimensional visualization of the airway beyond areas of narrowing, and it gives a highly accurate representation of vocal cord lesions, both in terms of definition and spatial representation. Its disadvantages are that it does not provide histology, it requires an air-mucosa interface to produce an image, and it cannot identify functional lesions of the vocal cords. abstract_id: PUBMED:16643225 Evaluation of the new Viewmax laryngoscope in a simulated difficult airway. Background: In this study, we evaluated the learning curve and performance of the Viewmax laryngoscope during simulated difficult laryngoscopy in an intubation manikin (Laerdal Airway Management Trainer). Methods: To determine the learning curve, 25 anaesthesiologists without previous experience with the Viewmax laryngoscope performed 10 successive intubations in an intubation manikin with a normal airway. Time to intubation and failed intubation attempts were recorded. Another manikin was modified to enable comparison of the Viewmax laryngoscope with Macintosh and McCoy laryngoscopes. The time to intubation, number of failed intubation attempts, modified Cormack and Lehane (MCL) laryngeal view grading, percentage of glottic opening (POGO score), use of gum elastic bougie and subjective rating of degree of difficulty were recorded. Results: The learning curve for the Viewmax laryngoscope showed a progressive decrease in time to successful intubation and reached a plateau at the sixth attempt. In simulated difficult laryngoscopy, the Viewmax laryngoscope demonstrated a significantly better laryngeal view than the Macintosh and McCoy laryngoscopes in terms of MCL grading (Macintosh, P = 0.01; McCoy, P < 0.01) and POGO score (Macintosh, P < 0.01; McCoy, P < 0.01). The time required for intubation in simulated difficult laryngoscopy for the Viewmax laryngoscope was significantly longer than that for the Macintosh (P = 0.02) and McCoy (P < 0.01) laryngoscopes. There was no significant difference in the degree of difficulty, number of failed intubations and use of gum elastic bougie. Conclusion: When compared with the Macintosh and McCoy laryngoscopes in a manikin, the Viewmax laryngoscope appears to improve the view of the larynx but requires a longer time for tracheal intubation. abstract_id: PUBMED:34314296 Laryngoscopic characteristics related to the risk of cancerization of vocal cord leukoplakia. Background: The diagnosis of vocal cord leukoplakia mainly relies on laryngoscopy. The morphology of vocal cord leukoplakia under laryngoscopy is closely related to the pathological nature of the leukoplakia.
The specific manifestations associated with high-risk vocal cord leukoplakia remain to be explored. Objective: To investigate the characteristics of low-risk and high-risk vocal cord leukoplakia under conventional white light imaging (WLI) laryngoscopy and their correlations with narrow band imaging (NBI) laryngoscopy. Methods: One hundred and seventy-five cases of vocal cord leukoplakia were divided into low-risk and high-risk groups. The characteristics of low-risk and high-risk vocal cord leukoplakia under WLI laryngoscopy and their correlations with NBI laryngoscopy were analyzed. Results: Logistic regression analysis showed that thickness and hyperemia were independent factors (p < 0.05). Hyperemia had a strong consistency with the visualization of spots under NBI laryngoscopy (kappa = 0.758). The sign of hyperemia and the NBI classification had equivalent diagnostic efficacy for predicting the risk of cancerization of vocal cord leukoplakia. Conclusion: The sign of hyperemia under WLI laryngoscopy is significantly correlated with the visualized spots under NBI laryngoscopy. Hyperemia is an important feature for predicting the malignant potential of vocal cord leukoplakia. abstract_id: PUBMED:30488290 Transcutaneous laryngeal ultrasonography (TLUS) as an alternative to direct flexible laryngoscopy (DFL) in the perioperative evaluation of the vocal cord mobility in thyroid surgery. Purpose: Direct flexible laryngoscopy (DFL) is the gold standard to evaluate vocal cord (VC) function in thyroid and parathyroid surgery pre- and postoperatively. Transcutaneous laryngeal ultrasonography (TLUS) could represent an alternative to DFL and has been evaluated in the present study by comparing the results of both methods performed at two referral centers for endocrine pathologies. Methods: In the setting of a retrospective study, 668 patients (560 female, 118 male; mean age 50.3 ± 14.2) were included from two tertiary referral centers of endocrine surgery. In all patients, TLUS was performed pre- and postoperatively prior to transnasal DFL, which served as the gold standard. TLUS was performed by B-scan (probe 5-13 MHz, aperture 40 mm). Results: Preoperative visualization of the vocal cords by TLUS was possible in 526 patients (78.7%). Due to the frequent thyroid cartilage calcification (TCC) in male patients, a significant difference in the visualization rate was found between female and male patients (88.7% vs. 26.8%) [p < 0.0001]. Additionally, the visualization rate was inversely related to the patient's age [p < 0.001]. The sensitivity of preoperative TLUS was found to be 66.7%, the specificity 100%. DFL confirmed a postoperative palsy in 34 out of 40 patients with supposed abnormal vocal cord mobility at TLUS and demonstrated a palsy in four more cases with supposed regular mobility at TLUS. Therefore, the sensitivity of postoperative TLUS was 86%, the specificity 99.1%, the positive predictive value 89.4%, and the negative predictive value 98.7%. Conclusions: TLUS could represent an alternative for the evaluation of vocal cord mobility. This method has the potential to replace DFL in the majority of cases, especially in female patients. Nevertheless, DFL is still necessary in about 20% of patients, in whom visualization at TLUS fails. Answer: The study comparing the Rusch ViewMax to the standard Macintosh laryngoscope blade using human cadaver models did not show that the ViewMax improved vocal cord visualization.
Emergency medicine residents and faculty attempted vocal cord visualization with both laryngoscope blades for each of six cadaver airways, and the percentage of glottic opening (POGO) score was estimated for each attempt. The results indicated that for four of the six cadaver airways, both the ViewMax and Macintosh blades had median POGO score estimates of 100%. For the remaining two cadavers, both blades had median POGO estimates of less than 50%. There was no significant difference in POGO score estimates between the ViewMax and the Macintosh blades in any of the six cadavers, including those with more difficult vocal cord visualization. The study concluded that the Rusch "ViewMax" did not improve vocal cord visualization over standard Macintosh blade laryngoscopy in these cadaver models (PUBMED:19683119).
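Several of the abstracts above report sensitivity, specificity, predictive values, diagnostic odds ratios, and kappa agreement (e.g., PUBMED:34412901, PUBMED:35724492, PUBMED:31680502). As a reference for how all of these figures derive from a single 2×2 table, here is a short Python sketch; the cell counts are hypothetical, chosen only to land near the transcutaneous-ultrasound range reported above, and are not taken from any cited study.

# Hypothetical 2x2 table: index test (ultrasound) vs. reference (laryngoscopy).
tp, fp, fn, tn = 31, 5, 5, 210  # true pos., false pos., false neg., true neg.

sensitivity = tp / (tp + fn)   # P(test positive | palsy present)
specificity = tn / (tn + fp)   # P(test negative | palsy absent)
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio

# Cohen's kappa: observed agreement corrected for chance agreement.
n = tp + fp + fn + tn
observed = (tp + tn) / n
expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
kappa = (observed - expected) / (1 - expected)

print(f"sens={sensitivity:.3f} spec={specificity:.3f} ppv={ppv:.3f} npv={npv:.3f}")
print(f"DOR={dor:.1f} kappa={kappa:.2f}")

Note how a low prevalence of palsy inflates the negative predictive value even at moderate sensitivity, which is the pattern visible in the thyroid-surgery ultrasound studies above.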
Instruction: AIDS in Colorado Springs: is there an epidemic? Abstracts: abstract_id: PUBMED:1734230 Street outreach for STD/HIV prevention--Colorado Springs, Colorado, 1987-1991. Strategies to identify and influence persons at increased risk for infection with human immunodeficiency virus (HIV) and other sexually transmitted diseases (STDs) require outreach to provide prevention messages directly to persons at risk, assist them to obtain prevention services, and encourage them to reduce risks. Street outreach programs complement CDC's information and education campaign "America Responds to AIDS" by providing persons with specific risk-reduction messages and materials. This report describes efforts to implement such approaches in Colorado Springs, Colorado, and presents preliminary results regarding the prevalence of Neisseria gonorrhoeae and Chlamydia trachomatis infections among female street prostitutes in that community. abstract_id: PUBMED:12083437 Risk network structure in the early epidemic phase of HIV transmission in Colorado Springs. This study describes the risk network structure of persons with HIV infection during its early epidemic phase in Colorado Springs, USA, using analysis of community-wide HIV/AIDS contact tracing records (sexual and injecting drug partners) from 1985 to 1999. Paired partner information from other STD/HIV programme records was used to augment network connections. Analyses were conducted with and without this supplemental information. The results suggest that a combined dendritic and cyclic structural network pattern is associated with low to moderate HIV propagation in Colorado Springs, and may account for the absence of intense propagation of the virus. abstract_id: PUBMED:10293567 Followup in HIV testing. Partner notification: the Colorado Department of Health approach. The Colorado Department of Health finds that partner notification, a procedure that has been beneficial in stemming epidemics of communicable diseases (most notably of syphilis), has proved useful and cost beneficial when applied to the current human immunodeficiency virus (HIV) epidemic. Of key importance is the extremely high level of confidentiality maintained by the department. Highly trained disease intervention specialists search for contacts, advise them to seek HIV testing, and counsel them on safer sex and I.V. practices. abstract_id: PUBMED:8280420 AIDS in Colorado Springs: is there an epidemic? Objective: To analyze trends and patterns of HIV infection in a medium-sized community in the United States. Methods: Surveillance for AIDS and HIV infection was conducted by private physicians, military and public clinics, and blood and plasma donation centers. HIV-positive individuals were contacted and asked to refer their sex and injection partners for HIV-antibody testing. Prostitutes, injecting drug users and their sex partners were studied. Selected physicians were surveyed to assess under-reporting. Results: The 740 HIV-infected adults (67 with documented seroconversion) included 506 with no evidence of AIDS, 58 living with AIDS, and 176 who had died. Of the 126 patients cared for by local physicians, 107 (85%) had been reported. No major changes in behavioral risk factors or increases in the number of HIV-infected individuals occurred between 1986 (128) and 1992 (95). Conclusions: Characteristics of individuals at risk and incidence of HIV infection have remained stable from 1981 to 1992. 
Analysis of data from the comprehensive surveillance and control program established in Colorado Springs in response to the AIDS epidemic suggests that, unlike the nation's epicenters, HIV incidence in this location is neither widespread nor rapidly increasing. The age distribution of reported cases is slowly increasing, and the ratio of newly reported cases to deaths is declining, implying stable or decreasing incidence; deaths may soon exceed new cases. Using data routinely available to public health officials, we conclude that the epidemiologic picture of AIDS--like the clinical one--must be heterogeneous, and that rational planning for the impact of AIDS should be based on the collection and analysis of local data. abstract_id: PUBMED:7882107 Comparison of partner notification at anonymous and confidential HIV test sites in Colorado. We compared health department-initiated partner notification at a single anonymous human immunodeficiency virus (HIV) test site in Denver to 13 confidential HIV test sites throughout Colorado over an 18-month period. The average number of named, notified, and counseled in-state partners was from 30 to 50% greater among confidential site index cases than anonymous site index cases, and the seropositivity rate in newly tested partners of confidential site index cases was more than twice the rate in partners of anonymous test site index cases. When analyses were restricted to gay/bisexual male index cases, the results were the same as for the total group. We recommend that state and local health departments collect data to evaluate and improve the delivery of partner notification services. abstract_id: PUBMED:10570659 Costs of medical injuries in Utah and Colorado. Patient injuries are thought to have a substantial financial impact on the health care system, but recent studies have been limited to estimating the costs of adverse drug events in teaching hospitals. This analysis estimated the costs of all types of patient injuries from a representative sample of hospitals in Utah and Colorado. We detected 459 adverse events (of which 265 were preventable) by reviewing the medical records of 14,732 randomly selected 1992 discharges from 28 hospitals. The total costs (all results are discounted 1996 dollars) were $661,889,000 for adverse events, and $308,382,000 for preventable adverse events. Health care costs totaled $348,081,000 for all adverse events and $159,245,000 for the preventable adverse events. Fifty-seven percent of the adverse event health care costs, and 46% of the preventable adverse event costs were attributed to outpatient medical care. Surgical complications, adverse drug events, and delayed or incorrect diagnoses and therapies were the most expensive types of adverse events. The costs of adverse events were similar to the national costs of caring for people with HIV/AIDS, and totaled 4.8% of per capita health care expenditures in these states. abstract_id: PUBMED:3256545 The obligation of Colorado dentists for reporting AIDS/HIV infection under Colorado law. N/A abstract_id: PUBMED:3359741 AIDS in Colorado. N/A abstract_id: PUBMED:2941456 AIDS 1986. Colorado update. N/A abstract_id: PUBMED:6653038 AIDS in Colorado: rumor and reality. N/A Answer: Based on the available abstracts, the situation of AIDS in Colorado Springs does not appear to be characterized as an epidemic. 
The surveillance and analysis conducted by private physicians, military and public clinics, and blood and plasma donation centers from 1981 to 1992 showed that the characteristics of individuals at risk and the incidence of HIV infection have remained stable (PUBMED:8280420). The study found no major changes in behavioral risk factors or increases in the number of HIV-infected individuals between 1986 and 1992. The data suggested that, unlike the nation's epicenters, HIV incidence in Colorado Springs was neither widespread nor rapidly increasing. The age distribution of reported cases was slowly increasing, and the ratio of newly reported cases to deaths was declining, implying stable or decreasing incidence. It was concluded that the epidemiologic picture of AIDS in Colorado Springs must be heterogeneous and that rational planning for the impact of AIDS should be based on the collection and analysis of local data (PUBMED:8280420). Additionally, the risk network structure of persons with HIV infection during its early epidemic phase in Colorado Springs, as described in another study, suggested that a combined dendritic and cyclic structural network pattern was associated with low to moderate HIV propagation in the area. This pattern may account for the absence of intense propagation of the virus (PUBMED:12083437). Therefore, based on the information provided in the abstracts, it does not seem that Colorado Springs was experiencing an AIDS epidemic during the time frames covered by the studies.
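The "combined dendritic and cyclic" risk-network pattern described in PUBMED:12083437 can be made concrete with a small graph sketch. The following Python lines use the networkx library to classify the connected components of a toy contact-tracing network as tree-like (dendritic) or cycle-containing; the edge list is invented for illustration and is not the Colorado Springs data.

import networkx as nx

# Toy contact-tracing network: nodes are persons, edges are reported
# sexual or injecting partnerships (invented for illustration).
edges = [("A", "B"), ("B", "C"), ("B", "D"),  # branching, tree-like chain
         ("E", "F"), ("F", "G"), ("G", "E"),  # closed loop of partners
         ("G", "H")]
g = nx.Graph(edges)

for component in nx.connected_components(g):
    sub = g.subgraph(component)
    # A connected component is a tree (dendritic) iff edges == nodes - 1;
    # any additional edge closes a cycle.
    kind = "dendritic" if sub.number_of_edges() == len(component) - 1 else "cyclic"
    print(sorted(component), "->", kind)

In epidemic terms, a purely dendritic component gives each infection only one acquisition path, while cycles create redundant transmission routes; the mixed pattern reported for Colorado Springs is consistent with the low-to-moderate HIV propagation described above.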
Instruction: Do peritoneal macrophages play an essential role in the progression of acute pancreatitis in rats? Abstracts: abstract_id: PUBMED:14508132 Do peritoneal macrophages play an essential role in the progression of acute pancreatitis in rats? Introduction: Macrophages are considered to play an essential role in the events leading to the systemic inflammatory response. Some are known to reside in the peritoneal cavity, but there are no reports defining the participation of peritoneal macrophages (PMs) in the progression of acute pancreatitis. Aim: To clarify the role of PMs in the progression of acute pancreatitis. Methodology: Acute pancreatitis was induced in rats from which macrophages other than PMs were greatly depleted, and in rats greatly depleted of macrophages including PMs. Macrophages were depleted by the injection of liposome-encapsulated dichloromethylene bisphosphonate. After the induction of acute pancreatitis, local pancreatic inflammation, intraperitoneal inflammation and lung injury were compared between the 2 groups. Results: Local pancreatic inflammation did not differ between the 2 groups. However, intraperitoneal inflammation was clearly improved by the depletion of PMs. Serum cytokine levels and lung injury were also improved by the depletion of PMs. Conclusion: Peritoneal macrophages extend inflammation from the pancreas to the peritoneal cavity and subsequently induce lung injury in acute pancreatitis. Peritoneal macrophages play an essential role in the systemic inflammatory response and the progression of acute pancreatitis in the rat. abstract_id: PUBMED:8283075 The role of tumor necrosis factor-alpha in the aggravation of cerulein-induced pancreatitis in rats. Severe acute pancreatitis is often complicated by intraperitoneal infection, resulting in multiple organ failure (MOF). Serum tumor necrosis factor (TNF-alpha) is known to be elevated in patients with sepsis and/or MOF. In order to study the role of TNF-alpha in the aggravation of acute pancreatitis, we investigated TNF-alpha production by peritoneal macrophages in rats using the cerulein-induced pancreatitis model. TNF-alpha production by isolated peritoneal macrophages following lipopolysaccharide (LPS) stimulation was significantly increased in pancreatitis rats as compared with nonpancreatitis control rats (p < 0.001). Serum TNF-alpha activity was elevated following intraperitoneal administration of LPS as the septic challenge both in pancreatitis rats and in control rats, being significantly higher in the former (p < 0.05). Histological findings and liver function tests revealed that LPS induced more severe liver damage in pancreatitis rats than in control rats within 24 h after LPS administration. These results indicate that increased TNF-alpha production by peritoneal macrophages in acute pancreatitis augmented LPS-induced liver injury and suggest the possibility that TNF-alpha may play a role in the development of MOF during acute pancreatitis complicated by intraabdominal sepsis. abstract_id: PUBMED:20632444 Pancreatic and pulmonary mast cells activation during experimental acute pancreatitis. Aim: To study the activation of pancreatic and pulmonary mast cells and the effect of mast cell inhibition on the activation of peritoneal and alveolar macrophages during acute pancreatitis. Methods: Pancreatitis was induced by intraductal infusion of 5% sodium taurodeoxycholate in rats. The mast cell inhibitor cromolyn was administered intraperitoneally (i.p.)
30 min before pancreatitis induction. The pancreatic and pulmonary tissue damage was evaluated histologically, and mast cells and their state of activation were assessed. Peritoneal and alveolar macrophages were obtained and the expression of tumor necrosis factor alpha was determined. Myeloperoxidase activity was measured to evaluate the effect of mast cell inhibition on the progression of the inflammatory process. Finally, the effect of plasma on cultured mast cells or macrophages was evaluated in vitro. Results: The mast cell stabilizer significantly reduced inflammation in the pancreas and lung and the activation of alveolar macrophages but had no effect on peritoneal macrophages. Mast cell degranulation was observed in the pancreas during pancreatitis but no changes were observed in the lung. Plasma from rats with pancreatitis could activate alveolar macrophages but did not induce degranulation of mast cells in vitro. Conclusion: Pancreatic mast cells play an important role in triggering the local and systemic inflammatory response in the early stages of acute pancreatitis. In contrast, lung mast cells are not directly involved in the inflammatory response related to pancreatic damage. abstract_id: PUBMED:9558045 Intracellular calcium affects neutrophil chemoattractant expression by macrophages in rats with cerulein-induced pancreatitis. Pancreatitis complicated by infection often results in the development of multiple organ failure. We investigated the role of altered intracellular calcium as a priming signal for cytokine-induced neutrophil chemoattractant expression in this process. Agents modulating cytosolic Ca2+ were utilized to study the in vivo and in vitro cytokine-induced neutrophil chemoattractant expression by macrophages in rats with cerulein-induced pancreatitis after intraperitoneal administration of lipopolysaccharide as a septic challenge. Pretreatment with the calcium channel blocker verapamil significantly reduced serum cytokine-induced neutrophil chemoattractant concentrations in rats with cerulein-induced pancreatitis after septic challenge. Lipopolysaccharide-stimulated in vitro cytokine-induced neutrophil chemoattractant (CINC) production by peritoneal macrophages was significantly enhanced by pretreatment with thapsigargin (an inhibitor of the endoplasmic reticulum-resident Ca2+-ATPase), but not by A23187 (a calcium-specific ionophore driving extracellular Ca2+ influx). Pretreatment with U73122 (a phospholipase C inhibitor) inhibited lipopolysaccharide-stimulated but not basal cytokine-induced neutrophil chemoattractant production, while verapamil (a calcium channel blocker), TMB-8 (an inhibitor of calcium release from the endoplasmic reticulum), and W7 (a calmodulin antagonist) completely abrogated the chemoattractant production. Altered intracellular calcium, due to Ca2+ efflux from intracellular stores, may be involved in the "priming" of macrophages to release cytokine-induced neutrophil chemoattractant following triggering with lipopolysaccharide during acute cerulein pancreatitis. abstract_id: PUBMED:22870264 Differences in the inflammatory response induced by acute pancreatitis in different white adipose tissue sites in the rat. Background: There is increasing evidence of the role of adipose tissue in the systemic effects of acute pancreatitis.
Patients with higher body mass index have an increased risk of local and systemic complications, and patients with android fat distribution and higher waist circumference are at greater risk of developing the severe form of the disease. Here we evaluated the changes in different areas of adipose tissue and their involvement in the inflammatory response in an experimental model of acute pancreatitis. Methods: Pancreatitis was induced in male Wistar rats by intraductal administration of sodium taurocholate. Orlistat was administered to inhibit lipase activity. Activation of peritoneal macrophages was evaluated by measuring IL1β and TNFα expression. Inflammation was evaluated by measuring myeloperoxidase activity in mesenteric, epididymal and retroperitoneal areas of adipose tissue. Changes in the expression of inflammatory mediators in these areas of adipose tissue were also evaluated by RT-PCR. Results: Pancreatitis induces the activation of peritoneal macrophages and a strong inflammatory response in mesenteric and epididymal sites of adipose tissue. By contrast, no changes were found in retroperitoneal adipose tissue. Inhibition of lipase prevented the activation of macrophages and the local inflammation in adipose tissue. Conclusions: Our results confirm the involvement of adipose tissue in the progression of the systemic inflammatory response during acute pancreatitis. However, there is considerable diversity between different adipose tissue sites. These differences need to be taken into account in order to understand the progression from local pancreatic damage to systemic inflammation during acute pancreatitis. abstract_id: PUBMED:30568390 Abdominal paracentesis drainage ameliorates severe acute pancreatitis in rats by regulating the polarization of peritoneal macrophages. Aim: To investigate the role of peritoneal macrophage (PM) polarization in the therapeutic effect of abdominal paracentesis drainage (APD) on severe acute pancreatitis (SAP). Methods: SAP was induced by 5% Na-taurocholate retrograde injection in Sprague-Dawley rats. APD was performed by inserting a drainage tube with a vacuum ball into the lower right abdomen of the rats immediately after the induction of SAP. To verify the effect of APD on macrophages, PMs were isolated and cultured in an environment in which the peritoneal inflammatory milieu was simulated by adding peritoneal lavage fluid to complete RPMI 1640 medium. Hematoxylin and eosin staining was performed. The levels of the pancreatitis biomarkers amylase and lipase as well as the levels of inflammatory mediators in the blood and peritoneal lavage were determined. The polarization phenotypes of the PMs were identified by detecting the marker expression of M1/M2 macrophages via flow cytometry, qPCR and immunohistochemical staining. The protein expression in macrophages that had infiltrated the pancreas was determined by Western blot. Results: APD treatment significantly reduced the histopathological scores and levels of amylase, lipase, tumor necrosis factor-α and interleukin (IL)-1β, indicating that APD ameliorates the severity of SAP. Importantly, we found that APD treatment polarized PMs towards the M2 phenotype, as evidenced by the reduced number of M1 macrophages and the reduced levels of pro-inflammatory mediators, such as IL-1β and L-selectin, as well as the increased number of M2 macrophages and increased levels of anti-inflammatory mediators, such as IL-4 and IL-10.
Furthermore, in an in vitro study wherein peritoneal lavage from the APD group was added to the cultured PMs to simulate the peritoneal inflammatory environment, PMs also exhibited a dominant M2 phenotype, resulting in a significantly lower level of inflammation. Finally, APD treatment increased the proportion of M2 macrophages and upregulated the expression of the anti-inflammatory protein Arg-1 in the pancreas of SAP model rats. Conclusion: These findings suggest that APD treatment exerts anti-inflammatory effects by regulating the M2 polarization of PMs, providing novel insights into the mechanism underlying its therapeutic effect. abstract_id: PUBMED:21577304 Role of macrophages in the progression of acute pancreatitis. In addition to pancreatic cells, other inflammatory cell populations contribute to the generation of inflammatory mediators during acute pancreatitis. In particular, macrophages can be activated by mediators released by the damaged pancreas during pancreatitis. It has been reported that peritoneal macrophages, alveolar macrophages and Kupffer cells become activated in different stages of severe acute pancreatitis. However, macrophages display remarkable plasticity and can change their physiology in response to environmental cues. Depending on their microenvironmental stimulation, macrophages can follow different activation pathways resulting in marked phenotypic heterogeneity. This ability has made these cells interesting therapeutic targets, and several approaches have been tested to modulate the progression of the inflammatory response secondary to acute pancreatitis. However, despite the recent advances in the modulation of macrophage function in vivo, the therapeutic application of these strategies requires a better understanding of the regulation of gene expression in these cells. abstract_id: PUBMED:9873955 Beneficial effect of octreotide treatment in acute pancreatitis in rats. Conclusions: Octreotide treatment contributes to the regulation of tumor necrosis factor (TNF) production in sodium taurocholate-induced acute necrotizing pancreatitis in rats. Owing to its complex effect, octreotide can partially ameliorate the deleterious consequences of acute necrotizing pancreatitis. Elevated TNF and interleukin-6 (IL-6) levels in the peritoneal fluid may be considered a consequence of the activation of peritoneal macrophages. Background: The effects of octreotide on exocrine pancreatic function have been investigated in numerous studies, but little attention has been paid to its influence on cytokine production in acute pancreatitis. Methods: Acute pancreatitis was induced by the retrograde injection of taurocholic acid into the pancreatic duct in male Wistar rats. Serum amylase activity, wet pancreatic weight/body weight (pw/bw) ratio, and TNF and IL-6 levels were measured. Four micrograms/kg of octreotide was administered subcutaneously at the time of induction of pancreatitis and 24 or 48 h later. Rats were sacrificed 6, 24, 48, or 72 h after the operation. Results: The serum amylase level and pancreatic weight to body weight ratio were decreased significantly in the octreotide-treated group. The serum TNF level was decreased significantly in the octreotide-treated group as compared with the control group at 6, 24, and 48 h (0.6 +/- 1.5, 2.0 +/- 3.3, and 0 vs 50 +/- 15.5, 37.5 +/- 18.4, and 13.1 +/- 12.5 U/mL, respectively). The ascites TNF level was decreased to 0 in the octreotide-treated group and was elevated in the control group at 72 h (28.0 +/- 49.0 U/mL).
IL-6 production in ascites was extremely high in both groups at 6 h (80,000 +/- 43,817 pg/mL and 58,500 +/- 33,335 pg/mL), but the difference was not significant. abstract_id: PUBMED:22964727 Fat necrosis generates proinflammatory halogenated lipids during acute pancreatitis. Objective: To evaluate the generation of halogenated fatty acids in the areas of fat necrosis during acute pancreatitis and to evaluate the effects of these molecules on the ensuing inflammatory process. Background: Lipid mediators derived from adipose tissue have been implicated in the progression of acute pancreatitis, although their precise role remains unknown. Methods: Acute pancreatitis was induced in rats by intraductal infusion of 3.5% sodium taurocholate. Fatty acid chlorohydrins (FA-Cl) were measured in adipose tissue, ascitic fluid, and plasma by mass spectrometry. Chlorohydrins were also instilled in the rats' peritoneal cavity, and their effects on peritoneal macrophage activation and on systemic inflammation were evaluated. They were also measured in plasma from human patients with acute pancreatitis. Results: Induced acute pancreatitis results in a substantial release not only of free fatty acids but also of the chlorohydrins of both oleic and linoleic acids from adipose tissue. In plasma, only the chlorohydrin of oleic acid was detected. Administration of 250 μM lipid chlorohydrins, which is the concentration found in ascitic fluid, induces the expression of TNFα and interleukin-1β in peritoneal macrophages and increases the systemic inflammatory response in pancreatitis. Finally, increased concentrations of oleic acid chlorohydrin have been found in the plasma of human patients with pancreatitis. Conclusions: During acute pancreatitis, adipose tissue releases FA-Cl, which exacerbate the systemic inflammatory response. abstract_id: PUBMED:19787284 Role of triggering receptor expressed on myeloid cells-1 in experimental severe acute pancreatitis. Background/purpose: Triggering receptor expressed on myeloid cells-1 (TREM-1) is a regulator of immunity and an amplifier of inflammatory signaling. The aim was to clarify the role of TREM-1 in the pathophysiology of experimental severe acute pancreatitis (SAP). Methods: SAP was induced by retrograde injection of 3 and 20% sodium deoxycholate (DCA) into the biliopancreatic ducts in rats (DCA pancreatitis). Soluble TREM-1 levels in serum, ascitic fluid, pancreas, liver and kidney were determined with an established enzyme-linked immunosorbent assay (ELISA) kit. To clarify the source of soluble TREM-1 in serum and ascitic fluid, peritoneal macrophage depletion was performed. Moreover, the effect of blockade of the TREM-1 pathway was examined using LP17 (a synthetic TREM-1 inhibitor). Results: Soluble TREM-1 levels in serum and ascitic fluid were higher in SAP. Membrane-bound TREM-1 protein was increased in the pancreas, liver and kidney in SAP. Peritoneal macrophage depletion resulted in a reduction of soluble TREM-1 levels in serum and ascitic fluid. Pretreatment with LP17 improved the hepatic and renal dysfunction (serum aspartate aminotransferase and blood urea nitrogen levels) in SAP. Conclusions: TREM-1 may act as an important mediator of inflammation and organ injury in SAP. TREM-1 may be a potential therapeutic target in SAP and its associated organ dysfunction. Answer: Yes, peritoneal macrophages play an essential role in the progression of acute pancreatitis in rats.
Studies have shown that depletion of peritoneal macrophages (PMs) leads to an improvement in intraperitoneal inflammation and a reduction in serum cytokine levels and lung injury, indicating that PMs extend inflammation from the pancreas to the peritoneal cavity and subsequently induce lung injury in acute pancreatitis (PUBMED:14508132). Additionally, increased tumor necrosis factor-alpha (TNF-alpha) production by PMs in acute pancreatitis has been linked to augmented lipopolysaccharide-induced liver injury, suggesting that TNF-alpha may play a role in the development of multiple organ failure during acute pancreatitis complicated by intraabdominal sepsis (PUBMED:8283075). Furthermore, abdominal paracentesis drainage (APD) has been shown to ameliorate the severity of severe acute pancreatitis (SAP) by regulating the polarization of PMs towards an anti-inflammatory M2 phenotype, which reduces the levels of pro-inflammatory mediators and increases the levels of anti-inflammatory mediators (PUBMED:30568390). These findings collectively suggest that PMs are crucial in the systemic inflammatory response and the progression of acute pancreatitis in rats.
Instruction: Women with epilepsy and post partum bleeding--Is there a role for vitamin K supplementation? Abstracts: abstract_id: PUBMED:25771158 Women with epilepsy and post partum bleeding--Is there a role for vitamin K supplementation? Purpose: Guidelines for women with epilepsy (WWE) advise those on enzyme-inducing antiepileptic drugs (EIAEDs) to take vitamin K during the last month of pregnancy. The primary aim of this study was to investigate whether WWE have a higher frequency of large postpartum hemorrhage. Secondarily, we wanted to see whether this was more severe in women taking EIAEDs, and also to evaluate whether those receiving prenatal vitamin K supplementation have a less pronounced risk. Methods: All patients (n=109) with a diagnosis of epilepsy giving birth at OUS Rikshospitalet from 2006 to 2011 were selected for the epilepsy group. They were compared to controls with regard to the amount of postpartum hemorrhage, gestational age for the mother, and birth weight and APGAR score in the newborns. Results: No significant difference between the groups regarding postpartum hemorrhage, gestational age, birthweight or APGAR score in the newborn was found. Also, comparing the WWE using EIAEDs who received prenatal vitamin K with those who did not receive vitamin K, no significant difference in postpartum hemorrhage could be demonstrated. Conclusion: In this study, WWE were not found to have an increased risk of postpartum hemorrhage, including those using EIAEDs with or without vitamin K supplementation. abstract_id: PUBMED:30061633 Vitamin K supplementation during pregnancy for improving outcomes: a systematic review and meta-analysis. To study the effect of vitamin K (VK) supplementation, alone or combined with other nutrients, administered to pregnant women, we searched the Cochrane Pregnancy and Childbirth Group's Trials Register (till 22 January 2016, updated on 28 February 2018) and other resources. Two review authors independently assessed randomised or quasi-randomised controlled trials for inclusion, data extraction, accuracy, and risk of bias. We included older trials from high-income countries (six; 21,493 women-newborns), judged mostly as at high or unclear risk of bias. We could not assess high-risk women (e.g., those with epilepsy); the healthy women included (at different gestational ages) received varying VK dosages and durations. We meta-analysed neonatal bleeding (RR 1.16, 95% CI 0.59 to 2.29; P = 0.67) and maternal plasma VK1 (MD 2.46, 95% CI 0.98 to 3.93; P = 0.001). We found that many outcomes were unassessed, e.g., perinatal death, maternal bleeding, and healthcare utilization. Mostly newborn outcomes were included, and VK was found significantly effective for, e.g., serum VK (mother and newborn) and maternal breast-milk VK. Few trials reported neonatal adverse side effects. The GRADE evidence quality was very low for key outcomes, i.e., neonatal bleeding, neonatal jaundice, and maternal plasma VK1. The intervention was favourable for maternal serum VK1 but remained uncertain for neonatal bleeding and other outcomes. The existing literature gaps warrant future investigations of unassessed or inadequately reported outcomes. abstract_id: PUBMED:32919204 Clinical characteristics and outcomes in pregnant women with epilepsy. Background: Epilepsy in pregnancy can lead to substantial maternal and neonatal morbidity and mortality. Early intervention in pregnant women with epilepsy (WWE), accurate assessment of the severity of their condition, and effective treatment are required to improve maternal and neonatal prognosis.
Many obstetricians lack experience in monitoring and treating pregnant WWE. Aims: The aim of this study was to describe the demographic and clinical characteristics of pregnant WWE and examine maternal and neonatal outcomes. Methods: Medical records of 75 pregnant women with a history of epilepsy who delivered at Beijing Tiantan Hospital, China between January 2006 and December 2019 were retrospectively reviewed. Pregnant women with a history of epilepsy were matched 1:2 with a control group of 150 pregnant women without epilepsy who delivered at Beijing Tiantan Hospital during the same time period. Information including type and frequency of epilepsy and seizures, maternal complications, medication, delivery mode, newborn weight, and newborn Apgar score was recorded. In subgroup analyses, pregnant WWE were stratified according to the presence or absence of seizures during pregnancy and generalized vs. nongeneralized seizures. Results: The incidence of anemia, hypertensive disorder of pregnancy, premature rupture of membranes (PROM), cesarean section, and postpartum hemorrhage was significantly higher (p < 0.05), and mean newborn weight and newborn Apgar score were significantly lower (p < 0.05), in pregnant WWE compared with pregnant women without epilepsy. The incidence of premature delivery was significantly higher (p < 0.05), and mean newborn weight was significantly lower (p < 0.05), in pregnant WWE with seizures vs. without seizures. Mean newborn weight was significantly lower (p = 0.01) in pregnant WWE with nongeneralized seizures vs. generalized seizures. Conclusion: Pregnant WWE are at high risk of anemia, gestational hypertension, PROM, cesarean section, postpartum hemorrhage, and low newborn weight and Apgar score. Women with epilepsy who experience seizures during pregnancy are at high risk of preterm birth and of having low birth weight infants. Pregnant WWE who experience nongeneralized seizures are at high risk of having low birth weight infants. These data emphasize the need to routinely monitor fetal weight on ultrasound and offer appropriate intervention, and they highlight the need for healthcare providers to take a multidisciplinary approach to the management of pregnant WWE. Synopsis: Pregnant WWE are at high risk of obstetric complications; those who experience seizures during pregnancy are at particular risk of preterm birth and low birth weight infants, underscoring the need for a multidisciplinary approach to their management. abstract_id: PUBMED:33524872 Peripartum hemorrhage and other obstetric and neonatal outcomes in pregnant women with epilepsy: A single-center study. Objective: There are inconsistent results in different studies on maternal and fetal complications of pregnant women with epilepsy (PWWE). The differences may be in part due to racial characteristics. The aim of this study was to investigate whether there is an increase in obstetric and neonatal complications, especially postpartum hemorrhage, in pregnant women with epilepsy in the Turkish population. Methods: In a tertiary center, PWWE and a control group were compared in terms of obstetric and neonatal complications as well as postpartum hemorrhage. Among 46,789 deliveries, 154 eligible PWWE were matched 1:3 with the control group.
Results: There was no increase in the likelihood of peripartum hemorrhage or blood transfusion in PWWE. However, the probability of cesarean delivery was higher in PWWE (adjusted OR: 3.24, CI: 1.95-5.42, p < 0.001). Moreover, an increase in the likelihood of fetal growth restriction, fetal death, admission to the neonatal intensive care unit and lower APGAR scores was found in PWWE. Conclusion: Although the risk of peripartum hemorrhage does not appear to be increased in PWWE, cesarean rates and fetal complications do increase. There is also a need for prospective studies examining long-term neonatal outcomes in pregnancies complicated by epilepsy. abstract_id: PUBMED:31765408 Maternal complications in pregnancy and childbirth for women with epilepsy: Time trends in a nationwide cohort. Objective: Obstetric trends show changes in complication rates and maternal characteristics such as caesarean section, induced labour, and maternal age. To what degree such general time trends and changing patterns of antiepileptic drug use influence pregnancies of women with epilepsy (WWE) is unknown. Our aim was to describe changes in maternal characteristics and obstetric complications in WWE over time, and to assess changes in complication risks in WWE relative to women without epilepsy. Methods: This was a nationwide cohort study of all first births in the Medical Birth Registry of Norway, 1999-2016. We estimated maternal characteristics, complication rates, and risks for WWE compared to women without epilepsy. Main maternal outcome measures were hypertensive disorders, bleeding in pregnancy, induction of labour, caesarean section, postpartum hemorrhage, preterm birth, small for gestational age, and epidural analgesia. Time trends were analyzed by logistic regression and comparisons made with interaction analyses. Results: 426 347 first births were analyzed, and 3077 (0.7%) women had epilepsy. In WWE there was an increase in the proportions of induced labour (p<0.005) and use of epidural analgesia (p<0.005), and a reduction in mild preeclampsia (p = 0.006). However, the risk of these outcomes did not change over time. Only the risk of severe preeclampsia increased significantly over time relative to women without epilepsy (p = 0.006). In WWE, folic acid supplementation increased significantly over time (p<0.005), and there was a decrease in smoking during pregnancy (p<0.005), but these changes were less pronounced than for women without epilepsy (p<0.005). Conclusions: During 1999-2016 there were important changes in maternal characteristics and complication rates among WWE. However, outcome risks for WWE relative to women without epilepsy did not change despite changes in antiepileptic drug use patterns. The relative risk of severe preeclampsia increased in women with epilepsy. abstract_id: PUBMED:12661386 The impact of national clinical guidelines on obstetricians in Scotland. Objective: To audit reported clinical practice in relation to four national obstetric guidelines on The Preparation of the Foetus for Preterm Delivery, The Management of Mild, Non-proteinuric Hypertension in Pregnancy, The Management of Pregnancy in Women with Epilepsy and The Management of Postpartum Haemorrhage. Design: Questionnaire surveys before and after dissemination of the guidelines. Subjects: One hundred and sixty-one consultants and senior specialist registrars in Scotland. Results: The response rates to the baseline and follow-up surveys were 85% and 74%, respectively.
Over 90% of the obstetricians kept the guidelines for reference and 85% had been prompted to change or reconsider their practice. Reported compliance improved significantly for six out of twenty-nine recommendations covering: the use of tocolysis in women at risk of pre-term labour; the use of prophylactic antibiotics or entry to a clinical trial for pre-term, pre-labour rupture of the membranes; the initiation of steroid therapy in women with insulin-dependent diabetes mellitus; and the prescribing of periconceptual folic acid and vitamin K to women with epilepsy. There were no significant improvements in relation to mild, non-proteinuric hypertension or post-partum haemorrhage. Conclusions: There were significant improvements in the reported management of women at risk of preterm labour and those with epilepsy. However, reported practice in relation to mild, non-proteinuric hypertension and post-partum haemorrhage has improved little. This is possibly because the guidelines for these topics were relatively complicated to understand and apply, and established patterns of practice were more resistant to change. abstract_id: PUBMED:20716254 Delivery outcome of women with epilepsy: a population-based cohort study. Objective: To investigate whether women with epilepsy have increased risks of complications during labour, and to explore the impact of antiepileptic drug use. Design: Population-based cohort study. Setting: Data from the Medical Birth Registry of Norway 1999-2005. Population: All births (n=372,128) delivered in Norway, ensured through linkage with the National Population Registry run by Statistics Norway. All singleton births and the first child in multiple pregnancies were included, leaving 365,107 pregnancies for analysis. Methods: Data from the Medical Birth Registry of Norway 1999-2005 were analysed. Main Outcome Measures: Induction, caesarean section, use of forceps and vacuum, abnormal presentation, placental abruption, mechanical disproportion, postpartum haemorrhage, atony and Apgar score <7 after 5 minutes. Results: We compared 2805 pregnancies in women with a current or past history of epilepsy (0.8%) and 362,302 pregnancies in women without a history of epilepsy. Antiepileptic drugs were used in 33.6% (n=942) of pregnant women with epilepsy. Women with epilepsy had an increased risk of induction [odds ratio (OR), 1.3; 95% confidence interval (CI), 1.1-1.4], caesarean section (OR, 1.4; 95% CI, 1.3-1.6) and postpartum haemorrhage (OR, 1.2; 95% CI, 1.1-1.4) compared with women without epilepsy. These rates were even higher for women with epilepsy and antiepileptic drug use, with ORs (95% CIs) of 1.6 (1.4-1.9), 1.6 (1.4-1.9) and 1.5 (1.3-1.9), respectively. In addition, the risk of an Apgar score <7 was higher (OR, 1.6; 95% CI, 1.1-2.4). For women with epilepsy without antiepileptic drug use, only a slightly increased risk of caesarean delivery was observed and no increased risk for any other complications studied. Conclusions: Pregnant women with epilepsy have a low complication rate; however, they have a slightly increased risk of induction, caesarean section and postpartum haemorrhage. It is not possible to ascertain on the basis of this study whether this is a result of more severe epilepsy or antiepileptic drug use. abstract_id: PUBMED:21557799 Obstetric outcome in women with epilepsy: a hospital-based, retrospective study.
Objective: To report the complications during pregnancy and delivery in women with epilepsy, compared with a control group without epilepsy, with special focus on potential risk factors, such as epilepsy severity and dosage of antiepileptic drugs. Design: Hospital-based retrospective study. Setting: Data from pregnancy notification forms and hospital case records. Population: Women with a past or present history of epilepsy (n = 205) delivered in Bergen, Norway, in the period 1999-2006, and a matched control group of women (n = 205) without epilepsy. Methods: Data were compared and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated by multiple logistic regression models. Main Outcome Measures: Pre-eclampsia (mild and severe), gestational hypertension, vaginal bleeding (early and late), caesarean section, vaginal operative delivery, postpartum haemorrhage and major malformations. Results: Women with epilepsy using antiepileptic drugs had an increased risk of severe pre-eclampsia (OR, 5.0; 95% CI, 1.3-19.9), bleeding in early pregnancy (OR, 6.4; 95% CI, 2.7-15.2), induction (OR, 2.3; 95% CI, 1.2-4.3) and caesarean section (OR, 2.5; 95% CI, 1.4-4.7) adjusted for maternal age, parity, education, smoking, medical conditions and body mass index ≥30 kg/m^2. There was also an increased risk of malformations in the offspring (OR, 7.1; 95% CI, 1.4-36.6). Women without antiepileptic drug use had increased risks of forceps delivery and preterm birth. Active epilepsy (seizures during the last 5 years) versus nonactive epilepsy did not discriminate for any of these complications; 84.5% of women with epilepsy and antiepileptic drug use were using folate. Conclusion: Women with epilepsy using antiepileptic drugs had an increased risk of pregnancy and delivery complications, whereas women not using antiepileptic drugs had few complications. Seizures, high doses of antiepileptic drugs, obesity and lack of folate could not explain these increased risks. abstract_id: PUBMED:29719929 The effect of parity on risk of complications in pregnant women with epilepsy: a population-based cohort study. Introduction: Women with epilepsy have increased risk of complications in pregnancy with consequences for the mother and child. There are no studies on the influence of parity on complications in women with epilepsy. Material and Methods: This was a population-based cohort study of all first and second births in the Medical Birth Registry of Norway 1999-2013. Risks were estimated and complication rates were compared across distinct treatment categories of women with epilepsy. Outcomes were any hypertensive disorder, bleeding in pregnancy, induction of labor, cesarean section, postpartum hemorrhage and preterm birth. Results: We examined 361 588 women, of whom 211 248 had a second birth and 1074 (0.5%) of these had a diagnosis of epilepsy in both births. Of these, 406 used antiepileptic drugs in both pregnancies, with lamotrigine (n = 118), carbamazepine (n = 83), valproate (n = 44) and levetiracetam (n = 27) being the four most common monotherapies. In the second birth, only the risks of elective cesarean section (adjusted odds ratio 1.7, 95% confidence interval 1.4-2.0) and induction of labor (adjusted odds ratio 1.5, 95% confidence interval 1.2-1.7) were increased in women with epilepsy compared with women without epilepsy.
There was a significant reduction in any hypertensive disorder, mild preeclampsia, emergency cesarean section, postpartum hemorrhage (>500 mL) and preterm birth from first to second birth in women with epilepsy, and also a significant increase in elective cesarean section. Conclusions: Second births in women with epilepsy do not represent an increased risk of non-iatrogenic complications, independent of antiepileptic drug use. There is a significant reduction in complications from first to second births in women with epilepsy. abstract_id: PUBMED:28591672 Global variation in pregnancy complications in women with epilepsy: A meta-analysis. Objective: Women with epilepsy are at risk of pregnancy complications. Whether these vary globally is unknown. We undertook a systematic review to assess the overall rates of obstetric outcomes in women with epilepsy, and variations in estimates across geographical region, economic status of country, and over time. Study Design: We searched Medline, Embase, Cochrane, AMED and CINAHL (January 1, 1990 to November 29, 2016), without any language restrictions, for studies reporting core maternal and fetal outcomes in women with epilepsy. We pooled the results using Freeman-Tukey transformation random-effects analysis, and reported our findings as rates of complications per 100 pregnancies with 95% confidence intervals (CI). We assessed for differences in risk across World Health Organisation (WHO) regions, income status, and year of publication. Results: From the 7420 articles screened, we included 60 studies (62 articles). In women with epilepsy (116,105 pregnancies), the commonest complications were caesarean section (29.2 per 100 pregnancies; 95% CI 25.4-33.1, I2=98.44%), and admission to the neonatal intensive care unit (12.5 per 100 pregnancies; 95% CI 9.6-15.8, I2=60.63%). There were significant differences across the WHO regions, with the highest rates of caesarean section (37%, 95% CI 32%-42%); spontaneous miscarriage (39%, 95% CI 35%-44%) and preterm birth (10%, 95% CI 8%-12%) in the Americas; postpartum haemorrhage (9%, 95% CI 7%-12%), hypertensive disorders (14%, 95% CI 8%-21%) and perinatal death (2%, 95% CI 1%-7%) in the Western Pacific; induction of labour (30%, 95% CI 19%-41%) in South East Asia and antepartum haemorrhage (10%, 95% CI 5%-17%) in the Eastern Mediterranean. The reported rates of antepartum haemorrhage, caesarean section, gestational diabetes and spontaneous miscarriage were highest in high income and high-middle income countries. Over time, there was a significant increase in caesarean section, and reduction in stillbirths, perinatal deaths and admission to the neonatal intensive care unit in women with epilepsy. Conclusion: There is significant variation in reported maternal and offspring outcomes in pregnant women with epilepsy across geographical regions, economic status of country and over time, which needs to be considered in setting priorities for clinical management and research. Answer: The role of vitamin K supplementation in women with epilepsy (WWE) concerning postpartum bleeding has been explored in several studies, but the evidence is not conclusive. One study specifically investigating WWE and postpartum hemorrhage found no significant difference in the risk of postpartum hemorrhage between WWE and controls, nor between WWE using enzyme-inducing antiepileptic drugs (EIAEDs) who received prenatal vitamin K and those who did not (PUBMED:25771158).
This suggests that vitamin K supplementation may not have a pronounced effect on reducing the risk of postpartum hemorrhage in WWE. A systematic review and meta-analysis looking at vitamin K supplementation during pregnancy for improving outcomes did not specifically assess high-risk groups such as epileptic women. However, it found very low-quality evidence for neonatal bleeding and other outcomes, indicating that the role of vitamin K supplementation remains uncertain for these outcomes (PUBMED:30061633). Another study examining the clinical characteristics and outcomes in pregnant WWE found that WWE are at higher risk for several obstetric complications, including postpartum hemorrhage. However, this study did not specifically address the impact of vitamin K supplementation on these risks (PUBMED:32919204). A single-center study on obstetric and neonatal outcomes in pregnant WWE in the Turkish population also found no increase in the risk of peripartum hemorrhage in WWE, suggesting that vitamin K supplementation may not be necessary for this specific outcome (PUBMED:33524872). Overall, while WWE may have an increased risk of certain obstetric complications, including postpartum hemorrhage, the available evidence does not clearly support a role for vitamin K supplementation in reducing this risk. Further research may be needed to fully understand the potential benefits of vitamin K supplementation in WWE, particularly those on EIAEDs (PUBMED:25771158; PUBMED:30061633; PUBMED:32919204; PUBMED:33524872).
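A note on the pooling method named in the meta-analysis above (PUBMED:28591672): rates per 100 pregnancies were combined after a Freeman-Tukey (double-arcsine) transformation under a random-effects model. The sketch below illustrates only that arithmetic; the study counts are hypothetical and the back-transformation uses the simplified sin^2 inverse rather than the exact harmonic-mean form.

```python
import numpy as np

# Hypothetical per-study counts (e.g. caesarean sections / pregnancies).
events = np.array([30, 120, 45])
n = np.array([100, 400, 160])

# Freeman-Tukey double-arcsine transform stabilises the variance of a proportion.
t = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
v = 1.0 / (n + 0.5)  # approximate variance of the transformed value

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1.0 / v
t_fixed = np.sum(w * t) / np.sum(w)
q = np.sum(w * (t - t_fixed) ** 2)
tau2 = max(0.0, (q - (len(t) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooling and 95% CI on the transformed scale.
w_star = 1.0 / (v + tau2)
t_pooled = np.sum(w_star * t) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
ci_low, ci_high = t_pooled - 1.96 * se, t_pooled + 1.96 * se

# Simplified back-transformation to a rate per 100 pregnancies.
def back(x):
    return np.sin(x / 2) ** 2

print([round(100 * back(x), 1) for x in (ci_low, t_pooled, ci_high)])
```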
Instruction: Do nurses wish to continue working for the UK National Health Service? Abstracts: abstract_id: PUBMED:26568595 Development of a Provisional Core Domain Set for Polymyalgia Rheumatica: Report from the OMERACT 12 Polymyalgia Rheumatica Working Group. Objective: The Outcome Measures in Rheumatology (OMERACT) polymyalgia rheumatica (PMR) working group aims to develop a core set of outcome measures to be used in clinical trials for PMR. Previous reports from OMERACT 11 included a qualitative study of the patient experience and a preliminary literature review. Methods: A 3-round Delphi survey of clinicians and patients with PMR was undertaken to identify a candidate core domain set for PMR research. Additionally, a literature review of outcome measures and their respective measurement instruments was undertaken. Meetings of patient research partners and clinicians were convened to review face validity of the provisional core domain set, which was subsequently presented and discussed at the OMERACT 12 congress. Results: Of the 60 clinicians taking part in round 1, 55 took part in round 2 and 51 in round 3. Of the 55 patients who took part in round 1, 46 and 35 took part in subsequent rounds. In total, 91% of participants in round 3 deemed the resulting draft core domain set reasonable. The literature review identified 28 studies for full review. Measurement instruments for each proposed domain were identified. Clinicians are highly aware of glucocorticoid-related adverse effects, but there is relatively little evidence about their true prevalence and severity, especially in PMR. Conclusion: A provisional core domain set, presented for clinical trials in PMR, comprises acute phase markers, physical function, death, glucocorticoid-related adverse events, and development of giant cell arteritis. Measurement instruments are suggested that may cover each domain, but these require formal validation for clinical trials in PMR. abstract_id: PUBMED:29858345 Prologue: 2017 Annual Meeting of the Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA). The 2017 Annual Meeting of the Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) was held in Amsterdam, the Netherlands, and was attended by rheumatologists, dermatologists, representatives of biopharmaceutical companies, and patients. As in previous years, GRAPPA members held a symposium for trainees to discuss their research in psoriatic disease with experts in the field. Other subjects featured during the annual meeting included a discussion of the history, clinical features, controversies, and immunogenetics of juvenile psoriatic arthritis; updates from working groups in Outcome Measures in Rheumatology and International Dermatology Outcome Measures; a discussion of the benefits and challenges of setting up a longitudinal psoriatic arthritis (PsA) database; 3 separate discussions of the effects of the microbiome on skin and joints in psoriasis and PsA; a discussion of options for assessing joints and entheses in PsA by ultrasonography and magnetic resonance imaging; an update on GRAPPA's research and educational projects; a discussion of patient centricity, including the incorporation of patient research partners (PRP) into psoriasis and PsA research and educational efforts, from GRAPPA's PRP; and a discussion of the GRAPPA-Collaborative Research Network's inaugural meeting. In this prologue, we introduce the papers that summarize that meeting. 
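The polymyalgia rheumatica study above reached its provisional core domain set through a three-round Delphi survey. As a toy illustration of how a pre-specified consensus rule can be applied to such voting data (the 70% threshold and the vote counts below are hypothetical, not taken from the study):

```python
from dataclasses import dataclass

@dataclass
class DomainVotes:
    name: str
    agree: int
    total: int

    def agreement(self) -> float:
        return self.agree / self.total

def reaches_consensus(votes: DomainVotes, threshold: float = 0.70) -> bool:
    """Retain a domain when the agreement proportion meets the threshold."""
    return votes.agreement() >= threshold

# Hypothetical round-3 votes for three candidate domains.
round3 = [
    DomainVotes("acute phase markers", 47, 51),
    DomainVotes("physical function", 44, 51),
    DomainVotes("hair loss", 12, 51),  # invented non-retained item
]
for v in round3:
    verdict = "retain" if reaches_consensus(v) else "drop"
    print(f"{v.name}: {v.agreement():.0%} -> {verdict}")
```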
abstract_id: PUBMED:28864650 OMERACT Endorsement of Patient-reported Outcome Instruments in Antineutrophil Cytoplasmic Antibody-associated Vasculitis. Objective: The antineutrophil cytoplasmic antibody-associated vasculitides (AAV) are multiorgan diseases. Patients with AAV report impairment in their health-related quality of life (HRQOL) and have different priorities regarding disease assessment compared with physicians. The Outcome Measures in Rheumatology (OMERACT) Vasculitis Working Group previously received endorsement for a core set of domains in AAV. Two approaches to measure patient-reported outcomes (PRO) were presented at OMERACT 2016. Methods: A novel 5-step tool was used to facilitate assessment of the instruments by delegates: the OMERACT Filter 2.0 Instrument Selection Algorithm, with a red-amber-green checklist of questions, including (1) good match with domain (face and content validity), (2) feasibility, (3) do numeric scores make sense (construct validity)?, (4) overall ratings of discrimination, and (5) can individual thresholds of meaning be defined? Delegates gave an overall endorsement. Three generic Patient-Reported Outcomes Measurement Information System (PROMIS) instruments (fatigue, physical functioning, and pain interference) and a disease-specific PRO, the AAV-PRO (6 domains related to symptoms and HRQOL), were presented. Results: OMERACT delegates endorsed the use of the PROMIS instruments for fatigue, physical functioning, and pain interference (87.6% overall endorsement) and the disease-specific AAV-PRO instrument (89.4% overall endorsement). Conclusion: The OMERACT Vasculitis Working Group gained endorsement by OMERACT for use of the PROMIS and the AAV-PRO in clinical trials of vasculitis. These instruments are complementary to each other. The PROMIS and the AAV-PRO need further work to assess their utility in longitudinal settings, including their ability to discriminate between treatments of varying efficacy in the setting of a randomized controlled trial. abstract_id: PUBMED:28461520 Prologue: 2016 Annual Meeting of the Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA). The 2016 annual meeting of the Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) was held in Miami, Florida, USA, and attended by rheumatologists, dermatologists, and representatives of biopharmaceutical companies and patient groups. As in previous years, GRAPPA members held a symposium for trainees to discuss their research in psoriatic disease with experts in the field. A strategic planning session was convened by the Steering Committee this year to review the work of GRAPPA since its inception in 2003. Other subjects featured during the annual meeting included a partnership with KPMG LLP (UK) to conduct interviews at research centers worldwide to analyze the process of care in psoriasis and psoriatic arthritis (PsA); a discussion of the effects of interleukin 17-related pathways on the skin and joints in psoriasis and PsA; summaries of recently published treatment recommendations and related guides; 4 separate discussions of psoriasis patient examinations; updates from working groups in the Outcome Measures in Rheumatology and the International Dermatology Outcome Measures; a discussion of patient centricity from GRAPPA's patient research partners; and an update of research and educational projects from GRAPPA. In this prologue, we introduce the papers that summarize that meeting. 
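The five-question red-amber-green checklist described above (the OMERACT Filter 2.0 Instrument Selection Algorithm, PUBMED:28864650) is essentially a gating procedure. The toy encoding below shows one plausible reading of how per-question ratings could roll up into an overall recommendation; the roll-up rule is an assumption made for illustration, not an official OMERACT specification.

```python
from enum import Enum

class Rating(Enum):
    GREEN = "green"   # criterion clearly satisfied
    AMBER = "amber"   # uncertain, needs more evidence
    RED = "red"       # criterion failed

QUESTIONS = [
    "good match with domain (face and content validity)",
    "feasibility",
    "do numeric scores make sense (construct validity)?",
    "overall ratings of discrimination",
    "can individual thresholds of meaning be defined?",
]

def overall(ratings: list[Rating]) -> str:
    # Assumed roll-up: any red blocks endorsement, any amber defers it.
    if any(r is Rating.RED for r in ratings):
        return "do not endorse"
    if any(r is Rating.AMBER for r in ratings):
        return "provisional endorsement pending further evidence"
    return "endorse"

# Example: an instrument passing four criteria but lacking threshold data.
ratings = [Rating.GREEN] * 4 + [Rating.AMBER]
for q, r in zip(QUESTIONS, ratings):
    print(f"{r.value:>5}: {q}")
print("overall:", overall(ratings))
```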
abstract_id: PUBMED:28148697 Updating the Psoriatic Arthritis (PsA) Core Domain Set: A Report from the PsA Workshop at OMERACT 2016. Objective: To include the patient perspective in accordance with the Outcome Measures in Rheumatology (OMERACT) Filter 2.0 in the updated Psoriatic Arthritis (PsA) Core Domain Set for randomized controlled trials (RCT) and longitudinal observational studies (LOS). Methods: At OMERACT 2016, research conducted to update the PsA Core Domain Set was presented and discussed in breakout groups. The updated PsA Core Domain Set was voted on and endorsed by OMERACT participants. Results: We conducted a systematic literature review of domains measured in PsA RCT and LOS, and identified 24 domains. We conducted 24 focus groups with 130 patients from 7 countries representing 5 continents to identify patient domains. We achieved consensus through 2 rounds of separate surveys with 50 patients and 75 physicians, and a nominal group technique meeting with 12 patients and 12 physicians. We conducted a workshop and breakout groups at OMERACT 2016 in which findings were presented and discussed. The updated PsA Core Domain Set endorsed with 90% agreement by OMERACT 2016 participants included musculoskeletal disease activity, skin disease activity, fatigue, pain, patient's global assessment, physical function, health-related quality of life, and systemic inflammation, which were recommended for all RCT and LOS. These were important, but not required in all RCT and LOS: economic cost, emotional well-being, participation, and structural damage. Independence, sleep, stiffness, and treatment burden were on the research agenda. Conclusion: The updated PsA Core Domain Set was endorsed at OMERACT 2016. Next steps for the PsA working group include evaluation of PsA outcome measures and development of a PsA Core Outcome Measurement Set. abstract_id: PUBMED:38283613 A new clinical challenge: supporting patients coping with the long-term effects of COVID-19. Mental Health Practitioners (MHPs) have a unique opportunity to provide resources and support to those suffering from Long COVID (LC), the post infectious illness that often follows an acute SARS-CoV-2 infection. In working with these individuals, MHPs can learn from the experiences of patients with another post-infectious disease known as myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). ME/CFS was once thought to be a psychologically mediated disorder caused by deconditioning and the fear of exertion following a precipitating event such as a viral infection. Research now shows that LC and ME/CFS are biomedical, multisystem, complex physiologic diseases. This article provides a framework to MHPs for the treatment of LC patients using knowledge derived from three decades of research on ME/CFS. abstract_id: PUBMED:36466119 Modelling the effect of first-wave COVID-19 on mental health services. During the first wave of the COVID-19 pandemic it emerged that the nature and magnitude of demand for mental health services was changing. Considerable increases were expected to follow initial lulls as treatment was sought for new and existing conditions following relaxation of 'lockdown' measures. For this to be managed by the various services that constitute a mental health system, it would be necessary to complement such projections with assessments of capacity, in order to understand the propagation of demand and the value of any consequent mitigations. 
This paper provides an account of exploratory modelling undertaken within a major UK healthcare system during the first wave of the pandemic, when actionable insights were in short supply and decisions were made under much uncertainty. In understanding the impact on post-lockdown operational performance, the objective was to evaluate the efficacy of two considered interventions against a baseline 'do nothing' scenario. In doing so, a versatile and purpose-built discrete time simulation model was developed, calibrated and used by a multi-disciplinary project working group. The solution, representing a multi-node, multi-server queueing network with reneging, is implemented in open-source software and is freely and publicly available. abstract_id: PUBMED:35459950 Modelling the effect of COVID-19 mass vaccination on acute hospital admissions. Background: Managing high levels of acute COVID-19 bed occupancy can affect the quality of care provided to both affected patients and those requiring other hospital services. Mass vaccination has offered a route to reduce societal restrictions while protecting hospitals from being overwhelmed. Yet, early in the mass vaccination effort, the possible impact on future bed pressures remained subject to considerable uncertainty. Objective: The aim of this study was to model the effect of vaccination on projections of acute and intensive care bed demand within a 1 million resident healthcare system located in South West England. Methods: An age-structured epidemiological model of the susceptible-exposed-infectious-recovered type was fitted to local data up to the time of the study, in early March 2021. Model parameters and vaccination scenarios were calibrated through a system-wide multidisciplinary working group, comprising public health intelligence specialists, healthcare planners, epidemiologists and academics. Scenarios assumed incremental relaxations to societal restrictions according to the envisaged UK Government timeline, with all restrictions to be removed by 21 June 2021. Results: Achieving 95% vaccine uptake in adults by 31 July 2021 would not avert the third wave in autumn 2021 but would produce a median peak bed requirement ∼6% (IQR: 1-24%) of that experienced during the second wave (January 2021). A 2-month delay in vaccine rollout would lead to significantly higher peak bed occupancy, at 66% (11-146%) of that of the second wave. If only 75% uptake was achieved (the amount typically associated with vaccination campaigns), then the second wave peak for acute and intensive care beds would be exceeded by 4% and 19%, respectively, an amount which would seriously pressure hospital capacity. Conclusion: Modelling influenced decision-making among senior managers in setting COVID-19 bed capacity levels, as well as highlighting the importance of public health in promoting high vaccine uptake among the population. Forecast accuracy has since been supported by actual data collected following the analysis, with observed peak bed occupancy falling comfortably within the inter-quartile range of modelled projections. abstract_id: PUBMED:15495093 Prophylactic platelet transfusion for haemorrhage after chemotherapy and stem cell transplantation. Background: Platelet transfusions are used in modern clinical practice to prevent and treat bleeding in thrombocytopenic patients with bone marrow failure. 
Although considerable advances have been made in platelet transfusion therapy in the last 30 years, some areas continue to provoke debate, especially the use of prophylactic platelet transfusions for the prevention of thrombocytopenic bleeding. Objectives: To determine the optimal use of platelet transfusion for the prevention of haemorrhage (prophylactic platelet transfusion) in patients with haematological malignancies undergoing chemotherapy or stem cell transplantation. Search Strategy: Randomised controlled trials (RCTs) were searched for in the Cochrane Central Register of Controlled Trials (CENTRAL). Searching was also undertaken on the OVID versions of MEDLINE and EMBASE using an RCT search filter strategy. Selection Criteria: Randomised controlled trials involving transfusions of platelet concentrates, prepared either from individual units of whole blood or by apheresis, and given prophylactically to prevent bleeding in patients with haematological malignancies and receiving treatment with chemotherapy and/or stem cell transplantation. Data Collection And Analysis: All electronically derived citations and abstracts of papers identified by the review search strategy were initially screened for relevancy by one reviewer. Studies clearly irrelevant were excluded at this stage. The full text of all potentially relevant trials was then formally assessed for eligibility by two reviewers independently. Two reviewers completed data extraction independently. Missing data were requested from the original investigators, as appropriate. Disagreements were resolved by discussion with the other reviewers. Main Results: Eight completed published trials, with a total of 390 participants in the intervention groups and 362 participants in the control groups, were included in the review for further analysis. The eight studies were classified as: * three trials relevant to prophylactic platelet transfusions versus therapeutic platelet transfusions; * three trials relevant to prophylactic platelet transfusion with one trigger level versus prophylactic platelet transfusion with another trigger level; * two trials relevant to prophylactic platelet transfusion with one dose schedule versus prophylactic platelet transfusion with another dose schedule. The few reports of controlled trials addressing prophylactic versus therapeutic transfusions contained small numbers of patients and were all undertaken over 25 years ago. None of these three studies explicitly clarified whether the lack of a reported difference was a reflection of insufficient power in the trials. The findings of the meta-analyses for this group of three small studies must be interpreted with caution. In contrast, more contemporary trials addressed the question of what platelet count thresholds should apply for prophylactic transfusion; three identified studies broadly compared platelet transfusion thresholds of 10 versus 20 x 10^9/litre for different clinical groups of patients. There were no statistically significant differences between the groups with regards to mortality, remission rates, number of participants with severe bleeding events or red cell transfusion requirements. However, it was unclear whether the studies had sufficient power to demonstrate in combination non-inferiority in terms of safety of the lower threshold, 10 x 10^9/litre. Insufficient randomised trials have been undertaken to make clinically relevant conclusions about the effect of different platelet doses.
Reviewers' Conclusions: There are no reasons to change current practice, but uncertainty about the practice of prophylactic transfusion therapy should be recognised, particularly in the light of concerns that blood products, including platelets, could become an increasingly scarce resource for which adequate alternatives do not exist. Consideration should be given to developing adequately powered trials comparing strategies of prophylaxis versus therapeutic platelet transfusion. abstract_id: PUBMED:30709947 The OMERACT Core Domain Set for Clinical Trials of Shoulder Disorders. Objective: To reach consensus on the core domains to be included in a core domain set for clinical trials of shoulder disorders using the Outcome Measures in Rheumatology (OMERACT) Filter 2.1 Core Domain Set process. Methods: At OMERACT 2018, the OMERACT Shoulder Working Group conducted a workshop that presented the OMERACT 2016 preliminary core domain set and its rationale based upon a systematic review of domains measured in shoulder trials and international Delphi sessions involving patients, clinicians, and researchers, as well as a new systematic review of qualitative studies on the experiences of people with shoulder disorders. After discussions in breakout groups, the OMERACT core domain set for clinical trials of shoulder disorders was presented for endorsement by OMERACT 2018 participants. Results: The qualitative review (n = 8) identified all domains included in the preliminary core set. An additional domain, cognitive dysfunction, was also identified, but confidence that this represents a core domain was very low. The core domain set that was endorsed by the OMERACT participants, with 71% agreement, includes 4 "mandatory" trial domains: pain, function, patient global - shoulder, and adverse events including death; and 4 "important but optional" domains: participation (recreation/work), sleep, emotional well-being, and condition-specific pathophysiological manifestations. Cognitive dysfunction was voted out of the core domain set. Conclusion: OMERACT 2018 delegates endorsed a core domain set for clinical trials of shoulder disorders. The next step includes identification of a core outcome measurement set that passes the OMERACT 2.1 Filter for measuring each domain. Answer: The provided abstracts do not contain information regarding the desires of nurses to continue working for the UK National Health Service (NHS). The abstracts focus on various topics related to clinical trials, outcome measures, and patient experiences in rheumatology and other medical conditions, as well as the impact of COVID-19 on mental health services and the healthcare system. None of the abstracts specifically address the job satisfaction, retention, or employment preferences of nurses within the NHS or any other healthcare system. Therefore, based on the provided abstracts, it is not possible to answer the question about whether nurses wish to continue working for the UK National Health Service.
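Returning to the two modelling abstracts in this set (PUBMED:36466119 and PUBMED:35459950): the vaccination study rests on an age-structured SEIR model fitted to local data. The minimal sketch below strips that down to a single-group SEIR with a crude bed-demand proxy; every parameter is an illustrative placeholder rather than a fitted value from the study.

```python
import numpy as np

# Single-group SEIR with forward-Euler integration. All values are
# illustrative placeholders, not the study's fitted parameters.
N = 1_000_000                             # population of the healthcare system
beta, sigma, gamma = 0.25, 1 / 3, 1 / 7   # transmission, incubation, recovery rates
p_hosp = 0.03                             # assumed fraction of prevalence needing a bed
S, E, I, R = N - 100, 50, 50, 0
dt, days = 0.25, 300
bed_demand = []

for step in range(int(days / dt)):
    new_exposed = beta * S * I / N
    dS = -new_exposed
    dE = new_exposed - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
    bed_demand.append(p_hosp * I)  # crude proxy: beds scale with prevalence

print(f"peak acute bed proxy: {max(bed_demand):.0f} "
      f"on day {np.argmax(bed_demand) * dt:.0f}")
```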
Instruction: Off-label use of medicines in children: can available evidence avoid useless paediatric trials? Abstracts: abstract_id: PUBMED:24329556 Prescribing for off-label use and unauthorized medicines in three paediatric wards in Finland, the status before and after the European Union Paediatric Regulation. What Is Known And Objective: The European Paediatric Regulation aims to reduce off-label use of medicines in paediatric pharmacotherapy. Prescribing for off-label use and unauthorized medicines was common in the paediatric wards of the Kuopio University Hospital in 2001. To evaluate the possible impact of the Regulation on the prevalence and frequency of such prescribing, we repeated the study in 2011 as it was conducted 10 years earlier. Methods: In this prospective study, the prescriptions for patients below 18 years of age were reviewed during a 2-week period in each of the three wards: neonatal intensive care unit, general paediatric ward and paediatric surgical ward, in April and May 2011. The authorization status of the medicine in each prescription was determined according to the approved summary of product characteristics valid during the study in Finland. Data concerning unauthorized medicines were also recorded and classified. Results: Out of the entire study population of 123 patients, 119 received a total of 1054 prescriptions in 2011. The proportion of patients with at least one prescription for off-label use or for an unauthorized medicine was significantly higher, 79% (n = 97), in 2011 compared to 58% in 2001 (P < 0.001). For newborns, significantly more prescriptions were for off-label use in 2011 than in 2001 (51% vs. 22%; P < 0.001). The proportion of prescriptions for unauthorized medicines was significantly higher in children below 2 years of age than in older children in both years (21% vs. 5% in 2011 and 24% vs. 3% in 2001, P < 0.001). What Is New And Conclusion: The prescribing for off-label use and unauthorized medicines was more prevalent in 2011 than in 2001. This indicates that the recent legislation has had only minor or no impact on the authorizing status of medicines commonly used in paediatric inpatients in specialized care. abstract_id: PUBMED:33276598 Cardiovascular Paediatric Medicines Development: Have Paediatric Investigation Plans Lost Heart? This work aimed to establish whether paediatric needs in cardiovascular diseases have been met by paediatric investigation plans (PIPs) produced since the development of the European Union Paediatric Regulation in 2007. The European Medicines Agency repository was searched for patterns in the development of paediatric medicines in general. Next, positive PIPs related to cardiovascular diseases were scrutinized for outcomes and compared to specific paediatric cardiovascular needs. In total, 1866 PIPs were identified, with 12% corresponding to decisions taken for cardiovascular medicines. However, despite this therapeutic area having the greatest number of overall PIPs, only 14% of established needs in paediatric cardiovascular diseases were addressed by PIPs with positive decisions. Further, 71.9% of PIPs with decisions in cardiovascular disease corresponded to full waivers, so the product would not be studied in paediatrics. Despite the progress found in overall numbers of PIPs published, cardiovascular products are still commonly used off-label in paediatrics.
Particularly, there is a need to develop products to treat heart failure and hypertension, two areas with clear unmet clinical needs in paediatrics. A case study on valsartan showed that industry, regulators, health technology assessment bodies, and prescribers should work together to reduce off-label use in paediatric cardiovascular disease (CVD). abstract_id: PUBMED:27317085 Insufficient access to oral paediatric medicines in Ghana: A descriptive study. Background: Among the most vulnerable people in society are children, and this is especially so in their access to health care. Off-label prescription of paediatric medicines is known to be associated with safety outcomes, some of which may be serious. This study identifies frequently prescribed children's medicines that are not readily available in Ghana and are prepared extemporaneously. Method: All prescriptions for extemporaneous oral preparations for children presented to the local production unit of the Korle-Bu Teaching Hospital from November 2013 were eligible for the study. Information from such prescriptions was recorded in a systematic format. Presence of the prescribed medicine on the World Health Organization Children's Medicine List was ascertained in addition to the anatomical and therapeutic classification code. The registration of the prescribed medicine for paediatric use by the Food and Drugs Authority, Ghana was also checked. Descriptive statistics of the data were presented. Results: In all, 622 prescriptions for 35 different paediatric formulations were served. Prescriptions from several health facilities including government hospitals (6.6 %, N = 622), private hospitals (2.4 %, N = 622) and the University of Ghana hospital (1.1 %, N = 622) were all honoured. Some of the prescribed medicines (Baclofen, Clonazepam, Hydroxyurea and Lamotrigine) were neither on the World Health Organization Children's Medicine list nor registered with the Food and Drugs Authority, Ghana. Most prescribed medicines (88.6 %, N = 35) were for non-communicable diseases. Conclusion: Paediatric prescriptions including off-label medicines are prescribed and formulated extemporaneously in this setting. Steps should be taken to improve access and monitor benefit-risk profiles of paediatric medicines in order to improve treatment outcomes among children. abstract_id: PUBMED:27142871 Maltese prescribers use of off-label and unlicensed medicines in children: perceptions and attitudes. Background Reviews of paediatric prescriptions in the community setting have quantified off-label use to reach 52 % and unlicensed use to reach 17 %. Objective To investigate the attitudes and perceptions of a sample of paediatricians and family doctors practising in primary care on off-label and unlicensed prescribing in children in Malta. Methods A validated de novo 18-item questionnaire was used to conduct face-to-face, semi-structured interviews with 24 family doctors and 6 paediatricians during 2014. Results Although only 10 % of participants correctly defined off-label/unlicensed use, after the definitions were provided, 97 % admitted to knowingly prescribing medicines in this manner. Such use primarily involved prescribing to younger age groups and for indications different from those recommended in the product literature. The main contributing factor for prescribing in an off-label/unlicensed manner was a lack of appropriately licensed paediatric medicines. The most commonly implicated class of medicines was cough/cold medicines.
The principal concerns were medico-legal and safety concerns. Conclusion Participants knowingly prescribed medicines in an off-label/unlicensed manner. The perceived reasons were prescribing for a younger age and for indications outside the Summary of Product Characteristics. Divergent prescribing recommendations in different sources of information, prescribers' personal experience and reliance on medical representatives contribute to inadvertent off-label/unlicensed prescribing. abstract_id: PUBMED:18797857 Off-label use of medicines in children: can available evidence avoid useless paediatric trials? The case of proton pump inhibitors for the treatment of gastroesophageal reflux disease. Purpose: In some cases of drug therapy, the available evidence might be sufficient to extend the indications to children without further clinical studies. Methods: We reviewed the available evidence for one of the categories of drugs most frequently used off-label in children: proton pump inhibitors (PPIs) used for the treatment of gastroesophageal reflux disease (GERD). A classification of the appropriateness of off-label use of PPIs in children with GERD was also performed. Results: Of the five PPIs evaluated, only omeprazole has a paediatric indication in Europe. Overall, 19 clinical trials were retrieved and evaluated on the basis of pharmacokinetics, efficacy and safety data. The off-label use of omeprazole, esomeprazole and lansoprazole in children was evaluated as appropriate given the consistent available evidence retrieved in literature. Conclusion: This study demonstrates the existence of a large body of clinical evidence on the use of PPIs in children. Regulatory agencies and ethical committees should cope with this issue for ethical reasons to avoid unnecessary trial replication. abstract_id: PUBMED:37208845 Optimizing therapeutic decision-making for off-label medicines use: A scoping review and consensus recommendations for improving practice and research. Purpose: Off-label medicines use is a common and sometimes necessary practice in many populations, with important clinical, ethical and financial consequences, including potential unintended harm or lack of effectiveness. No internationally recognized guidelines exist to aid decision-makers in applying research evidence to inform off-label medicines use. We aimed to critically evaluate current evidence informing decision-making for off-label use and to develop consensus recommendations to improve future practice and research. Methods: We conducted a scoping review to summarize the literature on available off-label use guidance, including types, extent and scientific rigor of evidence incorporated. Findings informed the development of consensus recommendations by an international multidisciplinary Expert Panel using a modified Delphi process. Our target audience includes clinicians, patients and caregivers, researchers, regulators, sponsors, health technology assessment bodies, payers and policy makers. Results: We found 31 published guidance documents on therapeutic decision-making for off-label use. Of 20 guidances with general recommendations, only 35% detailed the types and quality of evidence needed and the processes for its evaluation to reach sound, ethical decisions about appropriate use. There was no globally recognized guidance. 
To optimize future therapeutic decision-making, we recommend: (1) seeking rigorous scientific evidence; (2) utilizing diverse expertise in evidence evaluation and synthesis; (3) using rigorous processes to formulate recommendations for appropriate use; (4) linking off-label use with timely conduct of clinically meaningful research (including real-world evidence) to address knowledge gaps quickly; and (5) fostering partnerships between clinical decision-makers, researchers, regulators, policy makers, and sponsors to facilitate cohesive implementation and evaluation of these recommendations. Conclusions: We provide comprehensive consensus recommendations to optimize therapeutic decision-making for off-label medicines use and concurrently drive clinically relevant research. Successful implementation requires appropriate funding and infrastructure support to engage necessary stakeholders and foster relevant partnerships, representing significant challenges that policy makers must urgently address. abstract_id: PUBMED:27334565 Off-label and unlicensed medicines to hospitalised children in Norway. Objectives: The aim of this study was to investigate the use of off-label (OL) and unlicensed (UL) medicines to hospitalised children in Norway, to add to the current knowledge on use of medicines in this vulnerable patient group. Methods: The study was performed as a cross-sectional prospective study. Medication was classified as on- or off-label based on the comparison with the SmPC regarding age, indication, dosage, route of administration and handling of the product. UL products were classified as imported or pharmacy produced. Key Findings: More than 90% of children receiving medicines in our study were given OL or UL medicines. More patients received OL (83%) than UL (59%). Route of administration was the most frequently observed OL category. The vast majority of the OL prescriptions were for 'off-patent' products. One-third of products prescribed were UL. Conclusions: The study confirms that medicines to children in hospital to a significant degree are being used outside or without authorisation, in spite of recent paediatric regulatory initiatives. More data are still needed on efficacy and safety of medicines used in children, data to be incorporated in the SmPC. In addition, suitable formulations are needed to ensure optimal dosing and adherence without risky manipulations. abstract_id: PUBMED:23279160 Off-label use of medicines in paediatric inpatients at an Australian teaching hospital. Aim: The study aims to investigate the prevalence of off-label prescribing in the general paediatric ward at a major teaching hospital in Tasmania, Australia. Method: The drug charts and medical records from two groups of 150 consecutive paediatric patients, admitted 6 months apart in July 2009 and January 2010, were studied retrospectively. Patients were required to spend at least one night in hospital and be aged less than 12 years. Each prescribed drug was compared with the approved product information to determine if the usage was off-label. Data concerning documented informed consent and adverse drug reactions were also recorded. Results: Three hundred patients were prescribed a total of 887 medicines. Of these, 31.8% were off-label and 57.3% of children received an off-label medication. There was no significant seasonal variation in patient characteristics or prescriptions. Drugs were most commonly off-label due to their dosage or frequency of administration. 
Of the 106 different drugs used, the use of 51 was off-label on at least one occasion, and for 30 drugs their use was off-label on more than 75% of occasions. The drugs most commonly used off-label were oxycodone, salbutamol and paracetamol. No informed consent documentation was identified, and two of five recorded adverse drug reactions were associated with off-label drug use. Conclusion: Off-label use of medicines occurred frequently in paediatric inpatients. The available evidence often supported off-label medication use. An improved system for the revision of approved drug information and an Australian guideline for paediatric prescribing are needed. abstract_id: PUBMED:21243345 Healthcare professional experiences and attitudes on unlicensed/off-label paediatric prescribing and paediatric clinical trials. Objectives: To investigate the knowledge and views of a range of healthcare professionals (consultant paediatricians, general practitioners (GPs), community pharmacists and paediatric nurses) regarding the use of unlicensed/off-label medicines in children and the participation of children in clinical trials. Methods: A regional study in which a survey instrument with 39 items was issued to 500 randomly selected GPs, all community pharmacists (n = 512), 50 hospital consultants and 150 paediatric nurses in Northern Ireland. Results: Approximately half (46.5%) of the 1,212 healthcare professionals approached responded to the questionnaire. The majority of respondents indicated their familiarity with the term unlicensed (82.9%) or off-label (58.6%) prescribing with the most frequently quoted reason for such prescribing being younger age (33.6%). Apart from community pharmacists, most respondents reported having gained their knowledge through personal experience. Even though a large percentage of respondents expressed concerns about the safety (77.8%) or efficacy (87.9%) of unlicensed/off-label prescribing in children, only 30.7% reported informing parents/guardians of these concerns on the use of such medicines in children. In addition, only 56% of respondents believed that unlicensed/off-label medicines should undergo clinical trials in children. Overall, 28.4% of respondents (20.1% of GPs, 41.4% of community pharmacists, 27.7% of paediatric nurses and 94% of consultant paediatricians) indicated their willingness to be actively involved in, and recruit their patients for paediatric clinical research. Conclusion: The use of unlicensed and off-label medicines remains a major issue in paediatric medicine. Until such times as more licensed medicines are available for children, clear guidance should be developed to allow consistency in practice across the spectrum of healthcare professionals who are involved with such medicines in their routine practice. abstract_id: PUBMED:19840958 Respiratory medicines for children: current evidence, unlicensed use and research priorities. This European Respiratory Society task force has reviewed the evidence for paediatric medicines in respiratory disease occurring in adults and children. We describe off-licence use, research priorities and ongoing studies. Off-licence and off-label prescribing in children is widespread and potentially harmful. Research areas in asthma include novel formulations and regimens, and individualised prescribing. In cystic fibrosis, future studies will focus on screened infants and robust outcome measures are needed. Other areas include new enzyme and antibiotic formulations and the basic defect. 
Research into pneumonia should include evaluation of new antibacterials and regimens, rapid diagnostic tests and, in pleural infection, antibiotic penetration, fibrinolytics and surveillance. In uncommon conditions, such as primary ciliary dyskinesia, congenital pulmonary abnormalities or neuromuscular disorders, drugs indicated for other conditions (e.g. dornase alfa) are commonly used and trials are needed. In neuromuscular disorders, the beta-agonists may enhance muscle strength and are in need of evaluation. Studies of antibiotic prophylaxis, immunoglobulin and antifungal drugs are needed in immune deficiency. We hope that this summary of the evidence for respiratory medicines in children, highlighting gaps and research priorities, will be useful for the pharmaceutical industry, the paediatric committee of the European Medicines Agency, academic investigators and the lay public. Answer: The off-label use of medicines in children is a significant issue, as it often occurs due to the lack of appropriately licensed paediatric medicines. In some cases, the available evidence might be sufficient to extend the indications to children without further clinical studies, potentially avoiding unnecessary paediatric trials. For instance, a review of the use of proton pump inhibitors (PPIs) for the treatment of gastroesophageal reflux disease (GERD) in children found that the off-label use of omeprazole, esomeprazole, and lansoprazole could be considered appropriate given the consistent available evidence in the literature (PUBMED:18797857). This suggests that regulatory agencies and ethical committees should consider existing clinical evidence to avoid replicating trials that may not be needed. However, despite the potential to use existing evidence, the prescribing of off-label and unauthorized medicines remains prevalent. Studies have shown that off-label prescribing was more common in 2011 than in 2001, indicating that recent legislation, such as the European Paediatric Regulation, has had only minor or no impact on the authorizing status of medicines commonly used in paediatric inpatients in specialized care (PUBMED:24329556). Moreover, the development of paediatric medicines, particularly in the area of cardiovascular diseases, has not sufficiently met paediatric needs, with many cardiovascular products still commonly used off-label in paediatrics (PUBMED:33276598). The use of off-label and unlicensed medicines is not limited to specific regions but is a global concern. For example, in Norway, more than 90% of hospitalized children were given off-label or unlicensed medicines, with the route of administration being the most frequently observed off-label category (PUBMED:27334565). Similarly, in Australia, off-label use of medicines occurred frequently in paediatric inpatients, with no significant seasonal variation in patient characteristics or prescriptions (PUBMED:23279160). In conclusion, while available evidence can sometimes avoid unnecessary paediatric trials, the widespread and persistent off-label use of medicines in children highlights the need for more licensed paediatric medicines, better access to appropriate formulations, and clear guidance for healthcare professionals. Regulatory agencies, healthcare providers, and the pharmaceutical industry must work together to improve the situation and ensure the safe and effective use of medicines in children.
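One quantitative aside on the prevalence figures cited in this answer: the Australian audit (PUBMED:23279160) reported that 31.8% of 887 prescribed medicines were off-label. A Wilson score interval is a standard way to express the sampling uncertainty of such a proportion; the count of 282 used below is inferred from the reported percentage, since the abstract gives only the proportion.

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# 282/887 reproduces the reported 31.8%; the exact count is an inference.
lo, hi = wilson_ci(282, 887)
print(f"off-label prevalence: 31.8% (95% CI {lo:.1%}-{hi:.1%})")
```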
Instruction: Is oxidative stress causally linked to unstable angina pectoris? Abstracts: abstract_id: PUBMED:9534853 Is oxidative stress causally linked to unstable angina pectoris? A study in 100 CAD patients and matched controls. Objective: Unstable angina pectoris often leads to acute myocardial infarction. Since lipid peroxidation is thought to be causally related to chronic and acute events in atherosclerosis and coronary artery disease, we measured lipid peroxidation products and vitamin E in 100 patients with coronary artery disease and compared them to a matched control group. Methods: 50 consecutive patients with stable angina pectoris (SAP) and 50 consecutive patients with unstable angina pectoris (UAP) were studied and compared to 100 clinically healthy individuals. In addition to conventional lipid and lipoprotein analysis, malondialdehydes were measured as thiobarbituric acid reactive substances (TBARS). Lipid hydroperoxides were assayed with the colorimetric methylene blue method. alpha-Tocopherol was quantitated by HPLC after extraction of serum with hexane-ethanol. In the patient group conjugated dienes were also measured. Results: As expected, patients had significantly higher cholesterol, triglyceride, LDL-C and Lp(a) values and lower HDL-C values than controls. When patients were divided into groups with SAP and UAP respectively, peroxides and TBARS were significantly higher in the latter group as compared to patients with SAP and to controls. Conjugated dienes were also significantly higher in patients with UAP as compared to patients with SAP. Total plasma alpha-tocopherol was comparable in all three groups, whereas the alpha-tocopherol content per LDL particle was lowest in patients with UAP, followed by patients with SAP and then controls. Conclusion: It is concluded that lipid peroxidation parameters are increased in patients with UAP and discriminate SAP from UAP patients. abstract_id: PUBMED:21780037 Association between oxidative stress parameters and inflammation markers according to the gravity of the acute coronary syndrome. Background: Cardiovascular disease is the consequence of the appearance and development of atherosclerotic lesions associated with an inflammatory complication. Aim: To elucidate a possible association between several inflammation and oxidative stress markers according to the severity of coronary artery disease. Methods: This study was carried out on 93 coronary subjects: unstable angina (UA; n=42), stable angina (SA; n=15) and acute myocardial infarction (AMI; n=36), and 140 control subjects, in whom lipid, oxidative and inflammatory parameters were determined. Results: In addition to a moderate hyperhomocysteinemia observed in coronary artery disease, significantly higher levels of oxidized LDL (ox-LDL) were found among these patients (p < 0.001). A positive correlation was found between inflammation markers and the severity of the acute coronary syndrome. A significant increase in ox-LDL and high-sensitivity CRP was noted in AMI compared with UA and SA (p=0.00 and p=0.001, respectively), linked to an elevated plasma concentration of total homocysteine. Conclusion: This study suggests an association between inflammation markers and oxidative parameters in the acute coronary syndrome. abstract_id: PUBMED:33511935 Pro-inflammatory Mediators and Oxidative Stress: Therapeutic Markers for Recurrent Angina Pectoris after Coronary Artery Stenting in Elderly Patients.
Background: Pro-inflammatory mediators and oxidative stress are related to the severity of angina pectoris in patients with coronary heart disease. Objective: We evaluated the effects of pro-inflammatory mediators and oxidative stress on recurrent angina pectoris after coronary artery stenting in elderly patients. Methods: We determined the expression levels of malondialdehyde (MDA), acrolein (ACR), tumour necrosis factor-α (TNF-α), toll-like receptor 4 (TLR4), superoxide dismutase 3 (SOD3), paraoxonase-1 (PON-1), stromal cell-derived factor-1α (SDF-1α) and endothelial progenitor cells (EPCs) in elderly patients with recurrent angina pectoris after coronary artery stenting. Results: Levels of MDA, ACR, TNF-α and TLR4 were significantly increased (p<0.001), and levels of SOD3, PON-1, SDF-1α and EPCs were significantly decreased (p<0.001) in the elderly patients with recurrent angina pectoris after coronary artery stenting. MDA, ACR, TNF-α and TLR4 as markers of oxidative stress and pro-inflammatory mediators may have suppressed SOD3, PON-1, SDF-1α and EPCs as markers of anti-oxidative stress/anti-inflammatory responses. Oxidative stress and pro-inflammatory mediators were important factors involved in recurrent angina pectoris of elderly patients after coronary artery stenting. Conclusion: Oxidative stress and pro-inflammatory mediators could be considered as potential noninvasive prognostic, predictive, and therapeutic biomarkers for stable recurrent angina and recurrent unstable angina in elderly patients after coronary artery stenting. abstract_id: PUBMED:11216967 Interaction between chemokines and oxidative stress: possible pathogenic role in acute coronary syndromes. Objectives: We sought to study the relationships between chemokines and oxidative stress in acute coronary syndrome. Background: In view of existing knowledge on the participation of leukocytes and oxidative stress in the pathogenesis of acute coronary syndrome, we hypothesized that chemokines may play a role in recruiting and activating leukocytes in this disorder. Methods: The levels of chemokines and oxidative stress were studied in 38 patients with stable and 38 with unstable angina and in 20 controls. In separate in vitro experiments the effect of chemokines on reactive oxygen species in monocytes and the effect of antioxidants on chemokine levels in these cells were also studied. Results: 1) Angina patients had raised serum levels of chemokines in both cross-sectional and longitudinal testing, with particularly high levels of interleukin (IL)-8, monocyte chemoattractant protein (MCP)-1 and macrophage inflammatory protein (MIP)-1-alpha in unstable disease. 2) T cells, and particularly monocytes, seem to contribute to the raised IL-8, MCP-1 and MIP-1-alpha levels in unstable angina. 3) Concomitantly, and significantly correlated with MCP-1 and IL-8 levels, stable and particularly unstable angina patients had decreased plasma levels of antioxidants and increased lipid peroxidation, suggesting enhanced oxidative stress. 4) Monocyte chemoattractant protein-1 enhanced the generation of O2- in monocytes from unstable angina patients, and the antioxidant glutathione-monoethyl ester suppressed the production of IL-8 and MCP-1 in these cells. Conclusions: Our findings suggest an interaction between chemokines and oxidative stress in unstable angina. This interaction may represent a vicious circle involved in the pathogenesis of acute coronary syndromes.
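The chemokine study above (PUBMED:11216967) reports that markers of oxidative stress were significantly correlated with MCP-1 and IL-8 levels. The sketch below shows that kind of analysis on simulated data; the values are invented, and Spearman's rank correlation is chosen here only because biomarker concentrations are typically skewed, not because it was the authors' procedure.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients = 38  # matches the unstable angina group size above

# Simulated biomarker values: a skewed chemokine level and a lipid
# peroxidation marker with an induced association.
mcp1 = rng.lognormal(mean=4.0, sigma=0.4, size=n_patients)     # pg/mL, invented
tbars = 0.002 * mcp1 + rng.normal(0.0, 0.05, size=n_patients)  # invented

rho, p_value = spearmanr(mcp1, tbars)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```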
abstract_id: PUBMED:28592749 Dynamics of Oxidative Stress Evoked by Myocardial Ischemia Reperfusion After Off-Pump Coronary Artery Bypass Grafting Elucidated by Bilirubin Oxidation. Background: Revascularization therapy relieves myocardial ischemia, but can also result in ischemia-reperfusion injury caused by oxidative stress. However, the biokinetics of oxidative stress after myocardial ischemia-reperfusion are uncertain. This study aimed to evaluate the dynamics of oxidative stress after off-pump coronary artery bypass grafting (OPCAB) by measuring urinary biopyrrin levels. Biopyrrin is an oxidative metabolite of bilirubin thought to reflect oxidative stress, along with reactive nitrogen species (RNS). Methods and Results: The study included 18 patients who underwent OPCAB; patients were divided into effort angina pectoris (EAP; n=11) and unstable angina pectoris (UAP; n=7). Urinary biopyrrin and RNS levels were measured during the perioperative period (≤48 h after surgery). Biopyrrin levels transiently increased 4-12 h post-surgery (early phase), followed by a prolonged increase approximately 24-32 h post-surgery (late phase). The delayed increase in biopyrrin tended to be higher in patients with UAP, with a simultaneous increase in RNS. The patients in the UAP group had generally high pulmonary capillary wedge pressure (PCWP), although the cardiac index was within a normal range during the delay phase. Conclusions: The dynamics of biopyrrin levels revealed a biphasic pattern of oxidative stress after OPCAB. Delayed production of oxidative stress may be influenced by preoperative severity of myocardial ischemia and delayed RNS production. abstract_id: PUBMED:11325893 Oxidative stress and homocysteine in coronary artery disease. Background: Oxidative stress is present in cardiovascular diseases (CVDs), and hyperhomocysteinemia, an independent risk factor for these diseases, may play a role by inducing production of oxygen free radicals. Methods: To evaluate the possible role of homocysteine (Hcy) in inducing oxidative stress in coronary artery disease (CAD), plasma Hcy was measured in 68 consecutive cardiovascular patients, and plasma malondialdehyde (MDA), both free and total (free + bound), was measured in 40 patients with CAD (18 with chronic stable angina and 22 with unstable angina). As controls, we tested 70 healthy volunteers. Hcy was measured by an immunoenzymatic method and MDA, an index of lipid peroxidation, by gas chromatography-mass spectrometry. Results: Plasma Hcy concentrations were significantly higher in cardiovascular patients than in controls (10.2 vs 8.9 micromol/L; P < 0.0002), with no significant difference between values in the stable and unstable angina subgroups. Similarly, total MDA was significantly higher in the CAD group than in the controls (2.6 vs 1.3 micromol/L; P < 0.00001), again with no significant difference between stable and unstable angina patients. By contrast, free MDA, which was significantly higher in the CAD patients than the controls (0.4 vs 0.2 micromol/L; P < 0.00001), was also significantly higher in the unstable than in the stable angina group (0.5 vs 0.3 micromol/L; P < 0.03). However, no correlation was observed between Hcy and either free or total MDA. Conclusions: Our findings show that a moderate increase of Hcy is associated with CVD but that Hcy at the detected values cannot be considered completely responsible for oxidative damage.
That lipid peroxidation is involved in CAD is shown by our observation of significantly increased plasma free and total MDA concentrations compared with controls. Moreover, free MDA values discriminated between unstable and chronic stable angina, and could thus represent a new diagnostic tool. abstract_id: PUBMED:29914299 Elevated oxidative DNA damage in patients with coronary artery disease and its association with oxidative stress biomarkers. Objective: The objective of the present study was to evaluate oxidative DNA damage in peripheral blood leukocytes (PBLs) of patients with coronary artery disease (CAD) and to explore the relationship of oxidised purine and pyrimidine with oxidative stress. Methods: The study participants (n = 100) included 50 patients and 50 unrelated age-, sex- and population-subgroup (Jat Sikhs)-matched healthy controls. Oxidative DNA damage was determined using the modified enzymatic comet assay in PBLs, and malondialdehyde (MDA) levels, total oxidant status (TOS) and total antioxidant status (TAS) were determined in blood serum samples using spectrophotometric methods. Results: The basal DNA damage of percent tail DNA (T-DNA%) was increased, as were tail moment (TM) and olive tail moment (OTM). Oxidative DNA damage in terms of oxidised purines and oxidised pyrimidines was also significantly (p < .001) elevated in patients. Moreover, the advanced stages of CAD, unstable angina and acute myocardial infarction, had significantly more basal and oxidative DNA damage (p < .05) compared to stable angina. MDA levels (p < .01) and TOS (p < .001) were increased significantly in patients, with a significant (p < .001) decrease in TAS. There was a positive correlation of oxidised purines (T-DNA% r = 0.399, p = .004; TM r = 0.623, p = .001; OTM r = 0.456, p = .001) and of total oxidative damage (TM r = 0.515, p = .001; OTM r = 0.463, p = .001) with disease severity and with TOS (r = 0.279, p = .050), and a negative correlation with TAS (r = -0.341, p = .015). Multiple linear regression analysis revealed TOS and disease severity as independent predictors of oxidative DNA damage. Conclusions: There was a significant increase in oxidative DNA damage and oxidative stress in CAD patients compared to levels in healthy controls. abstract_id: PUBMED:15636443 Association between serum paraoxonase activity and oxidative stress in acute coronary syndromes. Objective: The oxidation of low-density lipoprotein (LDL) is believed to have a central role in atherogenesis. Under oxidative stress not only LDL, but all other serum lipids are exposed to oxidation. High-density lipoprotein (HDL)-associated paraoxonase (PON1) was shown to inhibit LDL and HDL oxidation. We investigated the relationship between PON1 and oxidative stress in acute myocardial infarction and unstable angina in a comparative fashion. Methods And Results: Activities of PON1 and concentrations of malondialdehyde (MDA), lipids and lipoproteins were measured in patients (38 subjects with acute myocardial infarction and 33 subjects with unstable angina pectoris) and in age- and sex-matched controls (32 subjects). Serum PON1 activity was significantly lower in patients than in controls (p < 0.001). Patients had significantly increased serum MDA concentrations (p < 0.001), and there were strong negative correlations (p < 0.001) between serum PON1 and MDA levels in the acute myocardial infarction group (r = -0.673), in the unstable angina pectoris group (r = -0.868) and in healthy controls (r = -0.778).
Serum HDL-cholesterol (HDL-C) concentrations were lower in patients than controls (p < 0.05). No correlation was observed between PON1 and HDL-C levels in patients or controls. Apo A-I concentrations were significantly lower in the patient groups (p < 0.01), but did not differ significantly between patients with AMI and UAP. Apo A-I and PON1 levels did not show any correlation. Apo B concentrations were lowest in the healthy controls, higher in the UAP group and highest in the AMI group (p < 0.001). In the acute myocardial infarction group, the LDL/apo B ratio was lower than in healthy controls and in the UAP group, suggesting smaller LDL particle size. Conclusions: Results of this study indicate that lower serum PON1 activity is associated with oxidative stress and that the activity of PON1 is not related to HDL-cholesterol. abstract_id: PUBMED:12713676 Breath markers of oxidative stress in patients with unstable angina. Cardiac chest pain is accompanied by oxidative stress, which generates alkanes and other volatile organic compounds (VOCs). These VOCs are excreted in the breath and could potentially provide a rational diagnostic marker of disease. The breath methylated alkane contour (BMAC), a 3-dimensional surface plot of C4-C20 alkanes and monomethylated alkanes, provides a comprehensive set of markers of oxidative stress. In this pilot study, we compared BMACs in patients with unstable angina pectoris and in healthy volunteers. Breath VOCs were analyzed in 30 patients with unstable angina confirmed by coronary angiography and in 38 age-matched healthy volunteers with no known history of heart disease (mean age ± SD, 62.7 ± 12.3 years and 62.5 ± 10.0 years, not significant). BMACs in both groups were compared to identify the combination of VOCs that provided the best discrimination between the 2 groups. Forward stepwise entry discriminant analysis selected 8 VOCs to construct a predictive model that correctly classified unstable angina patients with sensitivity of 90% (27 of 30) and specificity of 73.7% (28 of 38). On cross-validation, sensitivity was 83.3% (25 of 30) and specificity was 71.1% (27 of 38). We conclude that the breath test distinguished between patients with unstable angina and healthy control subjects. abstract_id: PUBMED:14654064 Endothelial dysfunction in acute and chronic coronary syndromes: evidence for a pathogenetic role of oxidative stress. The past two decades have highlighted the pivotal role of the endothelium in preserving vascular homeostasis. Among others, nitric oxide (NO) is currently believed to be the main component responsible for endothelium-dependent vasorelaxation and therefore for endothelial function integrity. Reduced NO bioavailability causes the so-called "endothelial dysfunction," which seems to be the common molecular disorder comprising stable atherosclerotic narrowing lesions or acute plaque rupture causing unstable angina or myocardial infarction. Compelling evidence is accumulating, stressing the role of oxidative stress in causing reduced NO bioavailability and subsequently endothelial dysfunction (ED). More recently, the role of endothelial cell (EC) apoptosis as a possible final stage of ED and plaque activation has been suggested. In vitro and in vivo evidence suggests a role of oxidative stress also as a putative mechanism finally leading to plaque denudation and activation through increased EC apoptosis.
Thus, oxidative stress, irrespective of atherosclerotic disease stages, seems to represent a key phenomenon in vascular disease progression and possible prevention. Answer: The available evidence suggests that oxidative stress is indeed causally linked to unstable angina pectoris. Studies have shown that patients with unstable angina pectoris (UAP) exhibit significantly higher levels of lipid peroxidation parameters, such as peroxides, thiobarbituric acid reactive substances (TBARS), and conjugated dienes, compared to patients with stable angina pectoris (SAP) and to controls, indicating increased oxidative stress in UAP (PUBMED:9534853). Additionally, oxidative stress markers have been found to correlate with the severity of acute coronary syndrome, with higher levels of oxidized LDL (ox-LDL) and high-sensitivity CRP observed in patients with more severe forms of coronary artery disease (PUBMED:21780037). Furthermore, pro-inflammatory mediators and oxidative stress have been linked to recurrent angina pectoris after coronary artery stenting in elderly patients, with increased levels of malondialdehyde (MDA), acrolein (ACR), and tumor necrosis factor-α (TNF-α) observed in these patients (PUBMED:33511935). The interaction between chemokines and oxidative stress has also been suggested to play a pathogenic role in acute coronary syndromes, with unstable angina patients showing decreased plasma levels of antioxidants and increased lipid peroxidation (PUBMED:11216967). Moreover, oxidative DNA damage has been found to be elevated in patients with coronary artery disease, with a significant increase in oxidative stress markers such as malondialdehyde (MDA) and total oxidant status (TOS), and a decrease in total antioxidant status (TAS) (PUBMED:29914299). The association between serum paraoxonase activity, which inhibits LDL and HDL oxidation, and oxidative stress has also been observed in acute coronary syndromes, with lower serum paraoxonase activity associated with oxidative stress (PUBMED:15636443). In summary, multiple studies support the causal link between oxidative stress and unstable angina pectoris, with oxidative stress parameters being elevated in patients with UAP and associated with the severity of coronary artery disease.
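Several of the abstracts above report Pearson correlations between paired markers, for example the strong negative correlation between serum PON1 activity and MDA (r = -0.868 in unstable angina). A minimal sketch of how such a coefficient is obtained, on simulated paired values rather than study data:

```python
# Minimal sketch: Pearson correlation between two serum markers, computed on
# simulated paired values (not study data) that mirror the negative PON1-MDA pattern.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pon1 = rng.normal(150.0, 30.0, size=33)                  # hypothetical PON1 activities
mda = 5.0 - 0.02 * pon1 + rng.normal(0.0, 0.3, size=33)  # MDA falls as PON1 rises

r, p = stats.pearsonr(pon1, mda)
print(f"r = {r:.3f}, p = {p:.3g}")  # expect a strong negative r, around -0.9
```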
Instruction: Are Ultrasonographic Findings Suggestive of Ovarian Stromal Edema Associated with Ischemic Adnexal Torsion? Abstracts: abstract_id: PUBMED:26336916 Are Ultrasonographic Findings Suggestive of Ovarian Stromal Edema Associated with Ischemic Adnexal Torsion? Objective: To study whether sonographic findings suggestive of ovarian stromal edema are associated with tissue ischemia in patients with adnexal torsion. Methods: A study of 79 patients with adnexal torsion was performed. Patients were divided into an ischemic group, in which the twisted adnexa were seen as blue or black, and a non-ischemic group, in which the adnexa retained normal color and appeared viable. Clinical and ultrasonographic findings, specifically the presence of ultrasonographic signs suggestive of ovarian stromal edema, were compared between the two groups. Results: Of the 79 patients with torsion, in 44 (55.7%) the adnexa appeared ischemic at surgery. The presence of ischemia was not associated with age, pregnancy, duration of pain, vomiting or findings at physical examination. There was no significant difference between the ischemic and the non-ischemic group in the proportion of patients with signs of ovarian stromal edema (59 vs. 40%, p = 0.11), in the proportion of patients with absent/diminished stromal Doppler flow (36 vs. 28%, p = 0.12) or in the proportion of patients with both signs of stromal edema and absent/diminished stromal Doppler flow (20 vs. 12%, p = 0.36). Conclusion: Ultrasonographic signs of ovarian stromal edema do not assist in differentiating between ischemic and non-ischemic adnexal torsion. abstract_id: PUBMED:34013996 Accurate Diagnosis of Adnexal Torsion-Not Only for Expert Sonographers: A Retrospective Cohort Analysis. Objectives: The aim of the study was to evaluate the clinical and sonographic parameters associated with a correct or incorrect diagnosis of adnexal torsion made by resident/attending physicians who are not expert sonographers. Methods: A retrospective study design included women who presented to the Emergency Department (ED) of a tertiary medical center between 2010 and 2019 with acute lower abdominal pain, who were subsequently diagnosed laparoscopically with adnexal torsion. Women who were correctly diagnosed in the ED by an obstetrician-gynecologist who was not an expert sonographer were compared for clinical and sonographic parameters with women who were initially incorrectly diagnosed and underwent a second ultrasound examination by an expert sonographer following admission. Results: Of 118 women with surgically proven adnexal torsion, 76 were correctly diagnosed in the ED and 42 were initially incorrectly diagnosed. The correctly diagnosed group was characterized by a significantly shorter mean time from admission to surgery, higher rate of vomiting, shorter duration of abdominal pain, and higher rate of prior torsion (P < .05). Physicians who made the correct diagnosis utilized more sonographic parameters than those who did not, namely ovarian size, ovarian edema, and Doppler flow. The correctly diagnosed group had a higher rate of bluish-black (ischemic) adnexa at laparoscopy, but a higher rate of recovery following detorsion. Conclusions: Physicians in the ED should be alerted to the need to address some basic sonographic features when adnexal torsion is suspected. The diagnostic accuracy of adnexal torsion can be improved by utilizing simple sonographic markers, even in the ED setting.
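The diagnostic-accuracy figures quoted throughout these abstracts reduce to a few ratios over a 2x2 table of test results against the surgical reference standard. A minimal sketch follows; the helper function and the fp/tn counts are illustrative assumptions, not values from any cited study:

```python
# Minimal sketch of per-patient diagnostic metrics; the fp/tn counts below
# are illustrative placeholders, not values reported in any cited study.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),            # proportion of true torsions detected
        "specificity": tn / (tn + fp),            # proportion of non-torsions correctly ruled out
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Grounded in the cohort above: 76 of 118 surgically proven torsions were
# recognized at the first ED assessment.
print(f"initial ED detection rate = {76 / 118:.1%}")   # ~64.4%
print(diagnostic_metrics(tp=76, fp=15, tn=85, fn=42))  # fp/tn are placeholders
```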
abstract_id: PUBMED:23980225 Medial or lateral location of the whirlpool sign in adnexal torsion: clinical importance. Objectives: The whirlpool sign is used as a sonographic marker for adnexal torsion. The aims of this study were to describe the location of the whirlpool sign (lateral or medial to the ovary) and to evaluate the clinical importance of the location. Methods: Thirty patients with a confirmed surgical diagnosis of torsion and a positive whirlpool sign on sonography before surgery were included. We examined the sonographic clips of these patients. Classic signs of adnexal torsion were determined: ovarian edema, the presence of a mass, pelvic free fluid, and ovarian ischemia on Doppler imaging. The whirlpool sign was detected on grayscale and color Doppler sonography by moving the transducer to and fro along the axis of suspected torsion. Results: Sixteen of 30 patients had right-sided torsion. Of these, 7 had a lateral whirlpool sign. All 7 of these patients had an ovarian or paraovarian mass. Nine of these 16 patients had a medial whirlpool sign. Of these, 7 had an ovarian or paraovarian mass, and 2 had no mass. Of the 14 patients with left-sided torsion, all had a medial whirlpool sign. Nine of these 14 patients had an ovarian or paraovarian mass, and 5 had no mass. The mean volume of the masses among cases with the lateral whirlpool sign was significantly greater compared to those with the medial whirlpool sign (304 versus 108 cm³; P = .035). In 25 of 30 cases, the torsed components included the ovary. Conclusions: The lateral whirlpool sign is associated with enlarged masses in comparison to the medial whirlpool sign. This finding indicates the need to search meticulously for the lateral whirlpool sign in cases with enlarged masses to decide whether to operate on these patients emergently. abstract_id: PUBMED:9605461 Laparoscopic adnexal preservation in a patient with complete torsion--a case report. Objective: Laparoscopic adnexal preservation in a patient with complete torsion. Study Design: Laparoscopy was performed in a 20-year-old nulliparous patient with a 24-h history of lower abdominal pain. Results: Torsion of the left adnexa was diagnosed and detorsion was performed. After detorsion the patient reported complete resolution of pain. At second-look laparoscopy the blood supply of the left adnexa was completely normalized and a cystadenofibroma was excised with preservation of the ovary. Conclusions: Complete torsion of the adnexa associated with edema and ischemia can be treated by laparoscopic detorsion. abstract_id: PUBMED:8729041 Massive ovarian oedema. Eighteen cases of massive ovarian oedema are presented. The age of patients averaged 26 years and 16 presented with an acute abdomen. Hormonal symptoms included virilism in three cases and one with precocious pseudopuberty. Ultrasonographic findings were variable and not diagnostically accurate. When performed, CA 125 levels were not raised. Seventy-two percent of cases occurred in the right ovary and none were bilateral. Torsion occurred in 14 cases. Salpingo-oophorectomy was performed in all cases. To elucidate its pathogenesis, be this either due to intermittent chronic torsion or to a proliferative phenomenon, immunohistochemistry for Ki-67 and PCNA proliferation antigens, alpha-actin and oestrogen and progesterone receptors was performed. The Ki-67 proliferation index ranged between 0% and 3%, demonstrating the low proliferative status of stromal cells.
The PCNA indices, however, were unusually high (60% and above). The divergence between these findings is explained by the fact that PCNA positivity may be related to nuclear reparation subsequent to ischaemia. Alpha-actin was consistently positive in stromal cells, reflecting a myofibroblastic transformation of these cells. These findings, together with the clinical evidence of torsion in the majority of cases, lead us to consider that ovarian oedema is a reactive, non-proliferative state of specific stromal cells, occurring as a response to torsion and subsequent ischaemia. The stromal cells have positive oestrogen and progesterone receptors and may undergo stimulatory changes responsible for the hormonally related symptoms often found associated with massive ovarian oedema. abstract_id: PUBMED:37574000 Fertility preserving management of ovarian torsion. Objective: To analyze characteristics of acute and chronic ovarian torsion, review treatment recommendations, and present possible surgical techniques for fertility preservation in young women. Design: Literature review and demonstration of perioperative management of ovarian torsion using radiologic images and intraoperative video footage. Ovarian torsion is mostly mentioned in the context of gynecologic emergencies, where acute ovarian torsion with arterial obstruction leads to ovarian ischemia and necrosis. However, ovarian torsion can also occur as a partial or intermittent torsion with venous and lymphatic obstruction, followed by ovarian swelling. In both cases, surgical management of ovarian torsion commonly includes oophorectomy, although leading guidelines recommend preservation of the ovary. We here aimed to raise awareness for the clinical features of ovarian torsion and demonstrate adequate perioperative management, thereby avoiding surgical overtreatment in young women. Setting: Medical University of Vienna, Department of Obstetrics and Gynecology. Patient(s): We present a case of acute ovarian torsion with a consequently ischemic ovary as well as a case of chronic ovarian torsion with related massive ovarian edema. The patients included in this video gave consent for publication of the video and posting of the video online, including social media, the journal website, scientific literature websites (such as PubMed, ScienceDirect, Scopus, etc.), and other applicable sites. Intervention(s): Laparoscopic management with detorsion of the torquated ovaries, cystectomy on an ischemic ovary and oophoropexy to the pelvic side wall and utero-ovarian ligament to prevent recurrence. Main Outcome Measures: Postoperative relief of pain and normalization of ovarian size and morphology on ultrasound imaging. Results: The current cases show successful conservative surgical management of ovarian torsion, hence preserving hormonal function and fertility in young women. Conclusion: Although it is recommended to preserve fertility in young women affected by ovarian torsion, surgical overtreatment by means of oophorectomy is still common in clinical routine. Increasing awareness for the clinical characteristics of acute and chronic ovarian torsion, as well as for the importance of preservation of the ovary, is crucial. We therefore believe that ovarian torsion and its surgical management deserve increased attention in the future. abstract_id: PUBMED:36164847 The protective effect of erythropoietin on ischemia-reperfusion injury caused by ovarian torsion-detorsion in the experimental rat model.
Ovarian torsion is one of the most dangerous gynecological emergencies requiring surgery. A total of 50%-90% of ovarian torsion cases are caused by physiological cysts, endometriosis, and other benign or malignant ovarian neoplasms. The aim of the study was to investigate the effects of erythropoietin (EPO) treatment on ischemia/reperfusion (IR) injury caused by ovarian torsion/detorsion (T/D). Thirty female Wistar albino rats were divided into five groups as follows: Group I: Control; Group II: Torsion (T); Group III: Torsion/Detorsion (T/D); Group IV: Torsion/Detorsion (T/D) + EPO; Group V: EPO. Sections of the ovaries were evaluated for histopathological changes with hematoxylin and eosin stain, an immunohistochemical assay for caspase 3 expression, and the TUNEL assay for apoptosis. Ovarian sections from the torsion/detorsion and torsion groups showed more hemorrhage, vascular congestion, edema, and degenerative granulosa and stromal cells. Fewer histopathological changes were found in the EPO and T/D + EPO groups. Caspase 3 and TUNEL positive cells were significantly increased in the torsion/detorsion group as compared with the other groups (p < 0.05). Treatment with erythropoietin decreased the number of caspase 3 and TUNEL positive cells. The results of the study showed that erythropoietin administration is effective for recovery from degenerative changes in the ovary induced by the torsion-detorsion injury. abstract_id: PUBMED:26034738 Protective role of Proanthocyanidin in experimental ovarian torsion. Background: Proanthocyanidin is a potent bioactive antioxidant naturally occurring in grape seed and acts as a reactive oxygen species (ROS) scavenger. The aim of this study was to investigate the effects of proanthocyanidin in experimental ovarian torsion injury. Methods: Twenty-four rats were randomly divided into three groups (n=8). Group 1: the laparotomy group; group 2: the ovarian torsion group; and group 3: the intervention group, administered proanthocyanidin at 50 mg/kg before bilateral ovarian ischemia and reperfusion. Histologic examination and scoring were done at the end of the experiment. Statistical analyses were performed using SPSS v. 19. Results: Ovarian histopathologic findings of all three groups were significantly different in terms of hemorrhage (p<0.001), edema (p=0.001) and vascular dilatation (p<0.001). Pathologic changes induced by I/R were reduced in ovaries of rats administered proanthocyanidin, in particular hemorrhage, edema and vascular dilatation. Conclusion: Proanthocyanidin, known as a free radical scavenger and antioxidant, is protective against tissue damage induced by ischemia and/or ischemia/reperfusion in rat ovaries. abstract_id: PUBMED:17503304 The efficacy of conservative treatment for late term ovarian torsion. Background/purpose: Recent reports have focused on detorsion after ovarian torsion in the literature. The aim of the study was to investigate late term changes in both ovaries after delayed detorsion following ovarian torsion in rats. Materials: Female, prepubertal, Wistar albino rats were divided into four groups (n = 6/group). The left ovaries were used for the study and the right ovaries were kept as the control.
The groups were constituted as follows: Group 1: left ovarian fixation, bilateral oophorectomy 48 hours later; Group 2: left ovarian torsion and fixation, bilateral oophorectomy 48 hours later; Group 3: detorsion 48 hours after torsion and bilateral oophorectomy after another 48 hours; Group 4: detorsion 48 hours after torsion and bilateral oophorectomy after 21 days. The total injury score (TIS) was compiled histologically in a double-blind fashion. Congestion, edema, bleeding and polymorphonuclear lymphocyte infiltration were assessed for TIS. Results: The TIS was found to be 8 points in Group 1, 38 in Group 2, 28 in Group 3 and 12 in Group 4, respectively. The TIS was based on results from the left ovaries in Group 1, whereas 31 points were attributable to the left ovaries and 7 to the right ovaries in Group 2. In Groups 3 and 4, TIS points were the same in both study and control ovaries. The difference between the left ovaries of Groups 1 and 2 and the left ovaries of Groups 2 and 4 was statistically significant (p < 0.05). Conclusion: Viable ovarian tissue can be detected even after 48 hours of torsion, which is a relatively long period of ischemia. Tissue injury decreases significantly after detorsion during late recovery. In view of previous case reports in the literature and the present findings, detorsion is recommended in children with ovarian torsion regardless of the ischemic period and/or macroscopic appearance. abstract_id: PUBMED:25861537 Conservative management of twisted ischemic adnexa in early pregnancy. Diagnosis of adnexal torsion can be difficult, especially during pregnancy. Delay in diagnosis can lead to necrosis and loss of the affected ovary, compromising the reproductive capacity, especially in young women. A 22-year-old primigravida presented to the emergency room at 10 weeks of gestation with acute onset left iliac fossa pain. Ultrasound examination showed a live intra-uterine gestation of 10 weeks and an enlarged, edematous left ovary with a clear cyst of 76 mm × 63 mm with flow to the ovary. She underwent emergency surgery with a provisional diagnosis of torsion of the left adnexa. Mini-laparotomy, detorsion of the twisted adnexa and ovarian cystectomy with reconstruction was performed, thus conserving the tube and ovary. The pregnancy progressed normally; follow-up ultrasound scans showed normal functioning ovaries. Clinical appearance of torted adnexa does not correlate well with the residual function. Ovarian function can be preserved by untwisting, even if the ovary appears necrotic. Answer: Ultrasonographic findings suggestive of ovarian stromal edema are not associated with tissue ischemia in patients with adnexal torsion. A study involving 79 patients with adnexal torsion found no significant difference between the ischemic and non-ischemic groups in the proportion of patients with signs of ovarian stromal edema, absent/diminished stromal Doppler flow, or both signs combined (PUBMED:26336916). Therefore, ultrasonographic signs of ovarian stromal edema do not assist in differentiating between ischemic and non-ischemic adnexal torsion.
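The pivotal non-significant comparison behind this answer (stromal-edema signs in about 59% of 44 ischemic versus 40% of 35 non-ischemic adnexa, p = 0.11 in PUBMED:26336916) can be sanity-checked with a chi-square test. The counts below are reconstructed from the reported percentages, so the result is only approximate:

```python
# Back-of-envelope check of the reported non-significant comparison; the
# counts are reconstructed from percentages, not taken from raw study data.
from scipy.stats import chi2_contingency

table = [[26, 18],   # ischemic (n=44): ~59% with stromal-edema signs
         [14, 21]]   # non-ischemic (n=35): 40% with stromal-edema signs
chi2, p, dof, expected = chi2_contingency(table)
print(f"p = {p:.2f}")  # non-significant, in the ballpark of the reported p = 0.11
```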
Instruction: Do therapeutic indications of antidepressants change from one year to another? Abstracts: abstract_id: PUBMED:15133783 Do therapeutic indications of antidepressants change from one year to another? Purpose: To analyse and describe the pattern of prescriptions for antidepressants by all psychiatrists working for the Canary Islands Health Service (CIHS) during the 4-year period from 1999 to 2002. Methods: All prescriptions for antidepressants by the ten psychiatrists who worked continuously for the CIHS during the period 1999-2002 on the island of Tenerife (700,000 inhabitants) were collected from the CIHS central database. Global prescription of antidepressant medication for the entire region (1.8 million inhabitants) by any physician working for the public sector was counted and converted into defined daily doses (DDDs). Results: The intensity of prescribing antidepressants increased from 22.1 DDD/1000 inhabitants/day in 1999 to 29.1 DDD/1000 inhabitants/day in 2002, with the five top selective serotonin reuptake inhibitors (SSRIs) being responsible for 83.6% of all the antidepressant prescriptions in the year 2002. A wide variation in individual prescription pattern was evident both between and within each psychiatrist during these years. Working in the same conditions, and with a similar morbidity pattern, one psychiatrist prescribed up to 32,000 DDDs in one year, whereas another colleague only prescribed 600 DDDs in the same period and to the same covered population. The amount of individual variation in prescription pattern highly correlated with the intensity of drug prescribing. Conclusions: The high inter- and intra-individual variation in antidepressant prescribing could not be linked with personal, structural or morbidity patterns, and the heavy influence of the pharmaceutical industry could not be ruled out. abstract_id: PUBMED:31695303 A two-year analysis of therapeutic apheresis practices in a tertiary center: are we chasing the new indications? Background: Therapeutic apheresis (TA) as primary or adjunctive therapy has proved itself in a broad spectrum of diseases. This study aims to present TA practices in a tertiary center with an emphasis on the rate of the utility of TA on the new American Society for Apheresis (ASFA) indications. Methods: We conducted a retrospective analysis of data regarding TA applications through our electronic medical database from June 2016 to July 2018. The data included demographics, clinical indications, and procedural characteristics. We also searched for the rate of the utility of TA procedures on new ASFA indications by entering both the diagnostic and TA modality codes for these indications on the electronic database during the study interval. Results: A total of 720 TA procedures were performed on 96 patients (54 males, 42 females, with a mean age of 48.15 ± 26.71 years). The procedures were 68.8% therapeutic plasma exchange (TPE), 16.4% leukocytapheresis, 11.5% immunoadsorption (IA), 3.1% double filtration plasmapheresis (DFPP), and 0.13% erythrocyte exchange. The categorical indications included 60.41% category I and category II, 28.12% category III, and 1.04% category IV. The most common indication was thrombotic thrombocytopenic purpura (TTP) (26.04%). The procedure failure rate was 2.08%. Patient-related adverse events were reported in 7.5% of procedures. The case mortality rate was 16.66%. The TA utility rate was 0.98% for the new indications in the ASFA 2016 guideline.
Conclusion: Therapeutic apheresis is a progressively developing, safe, and effective treatment modality with add-on indications. Physicians should keep track of new developments on this modality to implement the appropriate indications into clinical practice. HIPPOKRATIA 2018, 22(4): 167-172. abstract_id: PUBMED:27593707 What is the therapeutic value of antidepressants in dementia? A narrative review. Objectives: Antidepressants are commonly used in dementia. Depression is a frequent and important co-morbidity in dementia, and antidepressants are often used to treat depression and more widely. However, there are questions about their utility in depression in dementia and other behavioural and psychological symptoms of dementia. The aim of this narrative review is to summarize the evidence on whether there is therapeutic value in prescribing antidepressants to people with dementia. Methods: A PubMed search was performed to identify randomized controlled trials that prescribed antidepressants to people with dementia, either in the treatment of behavioural and psychological symptoms of dementia (depression, anxiety, agitation/aggression, psychosis and apathy) or for secondary outcomes (quality of life, carer burden, activities of daily living, cognition, clinical severity and adverse events). Results: Thirty-six randomized controlled trials were identified (participant n = 3386). A consistent finding in well-designed blinded placebo-controlled trials in dementia is the lack of positive effect of antidepressants on outcomes of interest, including depression. One large well-designed study has reported a significant reduction in agitation in people with dementia, but at the expense of clinically significant adverse events. Otherwise, change observed in open trials is also seen in the placebo group, suggesting that any effect is not attributable to the prescription of antidepressants. Conclusions: It is striking how few data there are on indications other than depression. We should question the use of antidepressants in dementia. Definitive trials of clinical effectiveness of specific indications such as anxiety and agitation in dementia and discontinuation of antidepressants in dementia are needed. Copyright © 2016 John Wiley & Sons, Ltd. abstract_id: PUBMED:33735040 For What Indications are Antidepressants Being Used in Adults in Colombia? Introduction: Antidepressants are useful in the pharmacological treatment of different depressive and anxiety disorders, as well as being potentially useful in other indications. The aim of the study was to determine the indications for which antidepressants are being prescribed in patients over 18 years of age registered with the Colombian Health System. Methods: Retrospective study with data from patients over 18 years old, of either sex, registered with the Colombian Health System, who are prescribed antidepressants. Medical records were reviewed, looking for indications approved and not approved by regulatory agencies. Patients were identified randomly, as well as the sociodemographic, clinical and pharmacological variables that could be associated with prescriptions for unapproved indications. Results: 351 patients were evaluated, with a mean age of 60.4±15.3 years, of whom 72.6% were women. They were attended in 34 cities, with prescriptions mainly by general practitioners (n=276; 78.6%) and psychiatrists (n=42; 12.0%).
Selective serotonin reuptake inhibitors were the most widely prescribed antidepressants (n=204; 58.1%), followed by atypicals (n=76; 21.7%). The most frequent indications were for depression (n=169; 48.1%), anxiety (n=48; 13.7%), pain (n=22; 6.3%) and sleep disorders (n=17; 4.8%). A total of 188 prescriptions (53.6%) were made for approved indications, and the remaining 163 (46.4%) were classified as unapproved. Conclusions: Antidepressants are being prescribed for the treatment of depression, anxiety, pain and sleep disorders, especially in older adult women, but almost half of the formulations were for unapproved indications. abstract_id: PUBMED:29713202 Derivation and validation of a multivariable model to predict when primary care physicians prescribe antidepressants for indications other than depression. Objective: Physicians commonly prescribe antidepressants for indications other than depression that are not evidence-based and need further evaluation. However, lack of routinely documented treatment indications for medications in administrative and medical databases creates a major barrier to evaluating antidepressant use for indications besides depression. Thus, the aim of this study was to derive a model to predict when primary care physicians prescribe antidepressants for indications other than depression and to identify important determinants of this prescribing practice. Methods: Prediction study using antidepressant prescriptions from January 2003-December 2012 in an indication-based electronic prescribing system in Quebec, Canada. Patients were linked to demographic files, medical billings data, and hospital discharge summary data to create over 370 candidate predictors. The final prediction model was derived on a random 75% sample of the data using 3-fold cross-validation integrated within a score-based forward stepwise selection procedure. The performance of the final model was assessed in the remaining 25% of the data. Results: Among 73,576 antidepressant prescriptions, 32,405 (44.0%) were written for indications other than depression. Among 40 predictors in the final model, the most important covariates included the molecule name, the patient's education level, the physician's workload, the prescribed dose, and diagnostic codes for plausible indications recorded in the past year. The final model had good discrimination (concordance (c) statistic 0.815; 95% CI, 0.787-0.847) and good calibration (ratio of observed to expected events 0.986; 95% CI, 0.842-1.136). Conclusion: In the absence of documented treatment indications, researchers may be able to use health services data to accurately predict when primary care physicians prescribe antidepressants for indications other than depression. Our prediction model represents a valuable tool for enabling researchers to differentiate between antidepressant use for depression versus other indications, thus addressing a major barrier to performing pharmacovigilance research on antidepressants. abstract_id: PUBMED:29016968 Formalizing drug indications on the road to therapeutic intent. Therapeutic intent, the reason behind the choice of a therapy and the context in which a given approach should be used, is an important aspect of medical practice. There are unmet needs with respect to current electronic mapping of drug indications. For example, the active ingredient sildenafil has 2 distinct indications, which differ solely on dosage strength. In progressing toward a practice of precision medicine, there is a need to capture and structure therapeutic intent for computational reuse, thus enabling more sophisticated decision-support tools and a possible mechanism for computer-aided drug repurposing. The indications for drugs, such as those expressed in the Structured Product Labels approved by the US Food and Drug Administration, appear to be a tractable area for developing an application ontology of therapeutic intent. abstract_id: PUBMED:30194543 Therapeutic Drug Monitoring of Antidepressants. For a number of antidepressants in current clinical use, concentrations in serum or plasma are a more reliable index of target drug concentrations than is dosage. For such drugs, therapeutic drug monitoring (TDM) may be a useful clinical guide for the purpose of maximizing the likelihood of favorable therapeutic outcome while minimizing the probability of clinical ineffectiveness or adverse side effects. TDM is of greatest benefit when a therapeutic range of serum concentrations has been well established. Even if such a range is not definitively determined, TDM can be of help in situations in which patients are refractory to therapy despite adequate or high dosages, when adverse events supervene even with low doses, or when noncompliance with the intended dosage plan is suspected. Serum antidepressant concentrations from TDM should be interpreted in the full context of the patient's demographic characteristics and clinical status, along with an understanding of the pharmacokinetics of the medication being taken, the timing of the sample in relation to the dosage regimen, and the specific laboratory assay procedure.
TDM measurements may be costly, and the potential benefits of the information need to be weighed against the cost to the patient or to the health care system. abstract_id: PUBMED:35126743 Phytotherapy in obstetrics - therapeutic indications, limits, and dangers. The wide access to varied, attractive, and aggressively promoted information can induce pregnant women to think that any form of complementary therapy can be a saving solution for a medical problem because these therapies are natural and therefore presumed harmless. Updated information from the literature about indications, benefits, limits, and risks of phytotherapy in pregnancy was presented. Valuable therapeutic resources with proven clinical efficacy (evidence-based medicine) were presented for each trimester of pregnancy, during labor, postpartum, but also during breastfeeding. For some phytotherapeutics, there are scientific studies. There is also a detailed presentation about some possibilities for therapeutic errors, which should be avoided during pregnancy. Positive results of phytotherapy deserve to be known and applied by the obstetrician for the certain benefit of future mothers. abstract_id: PUBMED:36498064 Income Change One Year after Confirmed Cancer Diagnosis and Its Associated Factors in Japanese Patients. The number of patients who survive for a long time after cancer diagnosis is rapidly increasing; however, such patients experience major problems such as returning to work and changes in their income. This study aimed to determine the extent of income changes of cancer patients during the first year after cancer diagnosis and identify the influencing factors. From November 2019 through January 2020, we conducted a multicenter, self-administered anonymous survey of cancer patients in Kagawa Prefecture, Japan. The number of questionnaires collected was 483 (recovery rate 60.4%), and the number of participants who met the inclusion criteria was 72. Mean year-on-year income level one year since cancer diagnosis was 66% (SD: 32%; median: 70%). Cancer stage (p = 0.016), employment status at diagnosis (p = 0.006), and continued employment at the same workplace (p = 0.001) were associated with income change. Findings from this study showed that cancer patients lost one-third of their income one year after their diagnosis. This loss was related to the stage of their illness, employment status, and continued employment at their workplace just before the diagnosis. Employers should provide cancer patients with the support they need to keep them employed. Answer: Yes, therapeutic indications of antidepressants can change from one year to another. A study analyzing the pattern of prescriptions for antidepressants by psychiatrists working for the Canary Islands Health Service (CIHS) from 1999 to 2002 found a wide variation in individual prescription patterns both between and within each psychiatrist during these years (PUBMED:15133783). This suggests that the indications for which antidepressants were prescribed could have varied over time. Additionally, a study in Colombia found that almost half of the antidepressant prescriptions were for unapproved indications, indicating that the use of antidepressants may extend beyond their initially approved therapeutic indications (PUBMED:33735040). This could imply that over time, as clinicians observe the effects of antidepressants in various conditions, they may begin to prescribe them for new indications based on their clinical judgment and emerging evidence.
Furthermore, a narrative review of the therapeutic value of antidepressants in dementia highlighted that well-designed blinded placebo-controlled trials show a lack of positive effect of antidepressants on outcomes of interest, including depression (PUBMED:27593707). This suggests that the therapeutic indications for antidepressants in the context of dementia may be changing as new evidence emerges regarding their effectiveness. In summary, the therapeutic indications for antidepressants can change over time due to variations in prescribing patterns, emerging evidence, and the clinical judgment of healthcare providers.
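The prescription volumes behind the Canary Islands findings are expressed in DDD/1,000 inhabitants/day, which is a single ratio. A minimal sketch follows; the DDD total below is an assumed value chosen to reproduce the reported 29.1, not actual CIHS data:

```python
# Minimal sketch of the DDD/1,000 inhabitants/day metric; the DDD total is an
# assumed value chosen to land on the reported 29.1, not actual CIHS data.
def ddd_per_1000_per_day(total_ddds: float, population: int, days: int = 365) -> float:
    return total_ddds / (population * days) * 1000

# ~1.8 million inhabitants over one year:
print(ddd_per_1000_per_day(total_ddds=19_118_700, population=1_800_000))  # -> 29.1
```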
Instruction: Does antipsychotic polypharmacy increase the risk for metabolic syndrome? Abstracts: abstract_id: PUBMED:17070017 Does antipsychotic polypharmacy increase the risk for metabolic syndrome? Objective: To determine whether the coprescribing of two or more antipsychotics, a relatively frequent practice with little data to support its safety and efficacy, is associated with an increased prevalence of metabolic syndrome. Methods: 364 newly admitted adults treated with second-generation antipsychotics underwent assessments evaluating antipsychotic polytherapy, the presence of metabolic syndrome, and a triglycerides/high-density lipoprotein cholesterol ratio >3.5 (TG/HDL), a sensitive marker of insulin resistance. The correlates of antipsychotic polytherapy and associations with metabolic syndrome and TG/HDL were determined by univariate comparisons and multiple logistic regression analyses. Results: Antipsychotic polytherapy was present in 70 patients (19.2%) and was significantly more likely in patients with schizophrenia and those treated with clozapine, quetiapine or ziprasidone (p<0.0001). Compared with antipsychotic monotherapy, polytherapy was associated with elevated rates of metabolic syndrome (50.0% vs. 34.3%, p=0.015) and TG/HDL (50.7% vs. 35.0%, p=0.016). However, in logistic regression analyses, metabolic syndrome was significantly associated with higher body mass index (BMI), older age, a diagnosis of bipolar disorder or schizophrenia, and cotreatment with a first-generation antipsychotic (r² = 0.25, p<0.0001). The TG/HDL marker of insulin resistance was associated with higher BMI, male sex, Caucasian race and absence of aripiprazole treatment (r² = 0.14, p<0.0001). Antipsychotic polypharmacy dropped out of both multivariate models. Conclusions: Compared with patients receiving antipsychotic monotherapy, patients on antipsychotic polytherapy have higher rates of metabolic syndrome and lipid markers of insulin resistance. However, antipsychotic polytherapy is not independently associated with the prevalence of these abnormalities, which are related to known demographic, clinical and anthropometric risk factors. abstract_id: PUBMED:29210867 Antipsychotic Polypharmacy and Its Relation to Metabolic Syndrome in Patients With Schizophrenia: An Egyptian Study. Purpose/background: Few studies have examined the relationship between antipsychotic polypharmacy and metabolic syndrome in schizophrenia. Some studies suggest that antipsychotic polypharmacy may be associated with greater metabolic risk, whereas other studies suggest that this is uncertain. To date, there have been no studies in Egypt or the Arab world that have investigated this relationship. We sought to compare subjects with schizophrenia receiving antipsychotic polypharmacy and monotherapy as regards metabolic outcomes and to investigate medication-related factors associated with metabolic syndrome. Methods/procedures: We recruited 118 subjects with schizophrenia and compared those receiving antipsychotic polypharmacy (86 subjects) with those receiving monotherapy (32 subjects) as regards demographic, clinical, metabolic, and antipsychotic medication characteristics. We examined the effect of antipsychotic-related factors on the outcome of metabolic syndrome. Findings/results: The prevalence of metabolic syndrome in our sample was 38.1%.
Except for gender, there was no statistically significant difference in demographic and clinical characteristics, rates of metabolic syndrome, or individual metabolic parameters. We found a statistically significant difference (P < 0.05) between the 2 groups in the number, dose, and duration of antipsychotic intake, in the number of subjects receiving typical antipsychotics (oral and depot), and for a number of individual antipsychotic medications. In logistic regression, receiving depot haloperidol was the only antipsychotic-related factor predictive of metabolic syndrome. Implications/conclusions: The prevalence of metabolic syndrome does not differ in schizophrenia whether patients are receiving polypharmacy or monotherapy, nor do individual metabolic parameters differ. Most antipsychotic-related characteristics did not predict metabolic syndrome. abstract_id: PUBMED:22563628 Safety and tolerability of antipsychotic polypharmacy. Introduction: Antipsychotic polypharmacy (APP), the concomitant use of ≥ 2 antipsychotics, is common in clinical practice. Prior reviews have focused on the efficacy of APP, but no systematic review exists regarding the safety and tolerability of this practice. Areas Covered: A systematic review of adverse effects associated with APP was conducted to prepare this review; case series with ≥ 2 patients, chart reviews, naturalistic, database, cohort and randomized studies that reported on the association between APP in general or specific APP combinations and global or specific adverse effects were included. Methodological limitations of available studies are discussed and recommendations for clinicians and future research are provided. Expert Opinion: Across mostly small and uncontrolled studies, APP has been associated with increased global side effect burden, rates of Parkinsonian side effects, anticholinergic use, hyperprolactinemia, sexual dysfunction, hypersalivation, sedation/somnolence, cognitive impairment and diabetes. Effects on akathisia and mortality were inconclusive. Although some combinations, particularly aripiprazole augmentation of an agent with greater side effect burden, may reduce weight gain, dyslipidemia, hyperprolactinemia and sexual dysfunction, APP should remain a last-resort treatment option after monotherapy, switching and non-antipsychotic combinations have failed. More data are needed to further inform the individualized risk-benefit evaluation of APP. abstract_id: PUBMED:30176844 Antipsychotic polypharmacy and metabolic syndrome in schizophrenia: a review of systematic reviews. Background: There is conflicting evidence on the association between antipsychotic polypharmacy and metabolic syndrome in schizophrenia. We conducted a review of published systematic reviews to evaluate evidence on the association between metabolic syndrome (diabetes, hypertension, and hyperlipidaemia) and exposure to antipsychotic polypharmacy in schizophrenia. Methods: We searched five electronic databases, complemented by reference screening, to find systematic reviews that investigated the association of antipsychotic polypharmacy in schizophrenia with hypertension, diabetes, or hyperlipidaemia. Selection of reviews, data extraction and review quality were conducted independently by two people and disagreements resolved by discussion. Results were synthesised narratively. Results: We included 12 systematic reviews, which reported heterogeneous results, mostly with narrative syntheses and without pooled data.
The evidence was rated as low quality. There was some indication of a possible protective effect of drug combinations including aripiprazole for diabetes and hyperlipidaemias, compared to other combinations and/or monotherapy. Only one review reported the association between APP and hypertension. The most frequently reported combinations of medication included clozapine, possibly representing a sample of patients with treatment-resistant illness. No included review reported results separately by setting (primary or secondary care). Conclusions: Further robust studies are needed to elucidate the possible protective effect of aripiprazole. Long-term prospective studies are required for accurate appraisal of diabetes risk, hypertension and hyperlipidaemia in patients exposed to antipsychotic polypharmacy. abstract_id: PUBMED:30396766 Using an Integrated Care Pathway for Late-Life Schizophrenia Improves Monitoring of Adverse Effects of Antipsychotics and Reduces Antipsychotic Polypharmacy. Objective: Antipsychotic use in older patients is associated with many adverse effects, including tardive dyskinesia and extrapyramidal symptoms, which, in turn, increase the risk of falling. Antipsychotics are also associated with metabolic syndrome and cognitive impairment in older patients. Integrated care pathways (ICPs) are designed to manage specific conditions using standardized assessments and measurement-based interventions. This study aims to compare the use of recommended tools to monitor for adverse effects associated with antipsychotics in older patients managed within an ICP and those managed under usual care conditions, i.e., treatment as usual (TAU). Methods: We reviewed and compared the health records of 100 older patients enrolled in an ICP for late-life schizophrenia with those of 100 older patients treated with antipsychotics under TAU conditions. Results: Monitoring rates were significantly higher in the ICP group than in the TAU group for all assessments: extrapyramidal symptoms (94% versus 5%), metabolic disturbances (91% versus 25%), fall risk (82% versus 35%), and cognitive impairment (72% versus 28%). Rates of antipsychotic polypharmacy were also six times higher in the TAU group. Conclusion: Older patients with schizophrenia treated with antipsychotics within an ICP experience higher rates of monitoring and less psychotropic polypharmacy than older patients treated with antipsychotics under TAU conditions. These findings suggest that an ICP can improve the quality of antipsychotic pharmacotherapy in older patients and thus possibly its effectiveness. This needs to be confirmed by a randomized controlled trial. abstract_id: PUBMED:21791046 Is antipsychotic polypharmacy associated with metabolic syndrome even after adjustment for lifestyle effects?: a cross-sectional study. Background: Although the validity and safety of antipsychotic polypharmacy remains unclear, it is commonplace in the treatment of schizophrenia. This study aimed to investigate the degree to which antipsychotic polypharmacy contributed to metabolic syndrome in outpatients with schizophrenia, after adjustment for the effects of lifestyle. Methods: A cross-sectional survey was carried out between April 2007 and October 2007 at Yamanashi Prefectural KITA hospital in Japan. 334 patients consented to this cross-sectional study. We measured the components of metabolic syndrome and interviewed the participants about their lifestyle.
We classified metabolic syndrome into four groups according to the severity of metabolic disturbance: the metabolic syndrome group; the pre-metabolic syndrome group; the visceral fat obesity group; and the normal group. We used multinomial logistic regression models to assess the association of metabolic syndrome with antipsychotic polypharmacy, adjusting for lifestyle. Results: Seventy-four (22.2%) patients were in the metabolic syndrome group, 61 (18.3%) patients were in the pre-metabolic syndrome group, and 41 (12.3%) patients were in the visceral fat obesity group. Antipsychotic polypharmacy was present in 167 (50.0%) patients. In multinomial logistic regression analyses, antipsychotic polypharmacy was significantly associated with the pre-metabolic syndrome group (adjusted odds ratio [AOR], 2.348; 95% confidence interval [CI], 1.181-4.668), but not with the metabolic syndrome group (AOR, 1.269; 95% CI, 0.679-2.371). Conclusions: These results suggest that antipsychotic polypharmacy, compared with monotherapy, may be independently associated with an increased risk of having pre-metabolic syndrome, even after adjusting for patients' lifestyle characteristics. As metabolic syndrome is associated with an increased risk of cardiovascular mortality, further studies are needed to clarify the validity and safety of antipsychotic polypharmacy. abstract_id: PUBMED:26609650 Metabolic syndrome in schizophrenia - who is more to blame: FGA polypharmacy or clozapine monotherapy? Background: To establish the prevalence of metabolic syndrome and its parameters in a group of patients with schizophrenia receiving first-generation antipsychotic polypharmacy versus a group treated with clozapine alone. Subjects And Methods: 48 outpatients with schizophrenia were divided into two groups: the first group of 21 patients on polypharmacy with first-generation antipsychotics, and the second group of 27 patients treated with clozapine alone. Both groups were assessed for the presence of metabolic syndrome. We used logistic regression models to assess the relationship between metabolic syndrome and antipsychotic therapy, gender and age. Results: Metabolic syndrome was found in 52.1% of all subjects. Compared to first-generation antipsychotic polypharmacy, monotherapy with clozapine was associated with elevated rates of metabolic syndrome (28.6% vs. 70.4%, p=0.004). With regard to particular parameters of metabolic syndrome, elevated plasma triglycerides were significantly more frequent in subjects in the clozapine group (p=0.03). Logistic regression analysis showed that female gender (p=0.004) and clozapine treatment (p=0.005) were significantly associated with metabolic syndrome. Conclusion: Compared to polypharmacy with first-generation antipsychotics, a higher prevalence of metabolic syndrome was found in patients treated with clozapine alone. The most prevalent metabolic disorder was dyslipidemia. abstract_id: PUBMED:23413275 Antipsychotic polypharmacy in psychotic disorders: a critical review of neurobiology, efficacy, tolerability and cost effectiveness. Objective: The purpose of this study was to review the scientific evidence for the neurobiological rationale, efficacy, tolerability and cost effectiveness of antipsychotic polypharmacy (APP). Data Sources: A systematic literature search of Medline, Embase, Ovid and the Cochrane Database of Systematic Reviews until April 2012 was carried out. Results: Theories behind APP have only modest pre-clinical and clinical evidence.
We found limited statistical evidence supporting modest efficacy of APP in patients with psychotic symptoms refractory to clozapine. APP is associated with increased mortality, metabolic syndrome, decreased cognitive functioning, high dose prescription and non-adherence. It incurs extra costs and lacks evidence of cost-effectiveness. Conclusions: Pre-clinical studies underpinning neurobiological hypotheses in APP are lacking. Evidence supporting efficacy of APP is limited, with modest beneficial clinical relevance. APP is associated with several serious adverse effects and increased health costs. In the absence of more convincing pre-clinical support and clinical evidence, we advise adherence to existing guidelines, limiting combinations of antipsychotics (weighed against other pharmacotherapeutic, somatic and psychotherapeutic options) to patients with clozapine-refractory psychosis in well-evaluated individual trials that may need 10 weeks or more. abstract_id: PUBMED:29057817 Unresolved Issues for Utilization of Atypical Antipsychotics in Schizophrenia: Antipsychotic Polypharmacy and Metabolic Syndrome. Atypical antipsychotics (AAP) are the prevailing form of schizophrenia treatment today due to their milder side effects and superior efficacy. Nevertheless, some issues still need to be addressed. First, there are still a large number of patients with treatment-resistant schizophrenia (TRS), which has led to a growing trend to resort to AAP polypharmacy with relatively few side effects. Most clinical treatment guidelines recommend clozapine monotherapy in TRS, but around one third of schizophrenic patients fail to respond to clozapine. For these patients with clozapine-resistant schizophrenia, AAP polypharmacy is a common strategy with a continually growing evidence base. Second, AAP generally carry a high risk of metabolic syndrome, with effects such as weight gain and abnormalities in glucose and lipid metabolism. These metabolic side effects have become major stumbling blocks in today's schizophrenia treatment, which aims to improve patients' quality of life as well as symptoms. The exact reasons why this particular syndrome occurs in patients treated with AAP are as yet unclear, though factors such as the interaction of AAP with neurotransmitter receptors, genetic polymorphisms, type of AAP, length of AAP use, and lifestyle of schizophrenic patients may contribute to its development. The present article aimed to review the evidence underlying these key issues and provide the most reasonable interpretations to expand the overall scope of antipsychotic usage. abstract_id: PUBMED:15823759 Metabolic risk during antipsychotic treatment. Background: Compared with the general population, individuals with schizophrenia demonstrate an increased prevalence of obesity, type 2 diabetes mellitus (T2DM), and cardiovascular disease (CVD). Increased adiposity is associated with decreases in insulin sensitivity, leading to an increased risk of hyperglycemia and hyperlipidemia. Antipsychotic medications can increase adiposity, and a range of evidence from case reports, observational studies, retrospective database analyses, and controlled experimental studies (including randomized clinical trials) suggests that treatment with antipsychotic medications may be associated with an increased risk for insulin resistance, hyperglycemia, dyslipidemia, and T2DM.
Objective: This article reviews current evidence for the hypothesis that treatment with antipsychotic medications may be associated with increased risks for weight gain, insulin resistance, hyperglycemia, dyslipidemia, and T2DM, and examines the relationship of adiposity to medical risk. Methods: Relevant publications were identified through a search of MEDLINE from 1975 to the present using the primary search parameters "diabetes or hyperglycemia or glucose or insulin or lipids" and "antipsychotic." Meeting abstracts and earlier nonindexed articles concerning antipsychotic-associated weight gain and metabolic disturbance were also reviewed. Key studies in this emerging literature were summarized, including case reports, observational studies, retrospective database analyses, and controlled experimental studies. Results: Individual antipsychotic medications are associated with different degrees of treatment-induced increases in body weight and adiposity, ranging from modest effects (<2 kg) with amisulpride, ziprasidone, and aripiprazole to clinically significant increases with olanzapine (4-10 kg). In addition to strong evidence concerning the effect of adiposity on insulin sensitivity in nonpsychiatric populations, increased adiposity in patients with schizophrenia has been associated with decreases in insulin sensitivity; this and other effects may contribute to increases in plasma glucose concentrations and lipid levels. Conclusion: Metabolic changes in psychiatric patients who receive antipsychotic agents can contribute to the development of the metabolic syndrome and increase the risk for T2DM and CVD. Answer: The relationship between antipsychotic polypharmacy and the risk for metabolic syndrome is complex and has been the subject of various studies with differing conclusions. Some studies suggest that antipsychotic polypharmacy is associated with an increased prevalence of metabolic syndrome. For instance, a study found that patients on antipsychotic polytherapy had higher rates of metabolic syndrome and lipid markers of insulin resistance compared to those on monotherapy, although antipsychotic polypharmacy was not independently associated with the prevalence of these abnormalities when demographic, clinical, and anthropometric risk factors were considered (PUBMED:17070017). Another study conducted in Egypt found no significant difference in the prevalence of metabolic syndrome between patients receiving antipsychotic polypharmacy and monotherapy, suggesting that most antipsychotic-related characteristics did not predict metabolic syndrome (PUBMED:29210867). Similarly, a review of systematic reviews indicated conflicting evidence on the association between antipsychotic polypharmacy and metabolic syndrome, with some indication of a possible protective effect of drug combinations including aripiprazole for diabetes and hyperlipidemias compared to other combinations and/or monotherapy (PUBMED:30176844). However, other studies have found associations between antipsychotic polypharmacy and metabolic syndrome. For example, one study suggested that antipsychotic polypharmacy, compared with monotherapy, may be independently associated with an increased risk of having pre-metabolic syndrome, even after adjusting for patients' lifestyle characteristics (PUBMED:21791046). Another study found that clozapine monotherapy was associated with elevated rates of metabolic syndrome compared to first-generation antipsychotic polypharmacy (PUBMED:26609650).
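The lifestyle-adjusted estimates cited above (e.g., AOR 2.348 for the pre-metabolic syndrome group) come from multinomial logistic regression with the normal group as reference. As a rough illustration of how such adjusted odds ratios are derived, here is a minimal Python sketch using statsmodels; the file name and every column name (met_group, polypharmacy, age, sex, smoking, exercise) are hypothetical stand-ins, not the variables of the original study.

    # Sketch of a lifestyle-adjusted multinomial logistic regression in the
    # spirit of PUBMED:21791046. All names below are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("patients.csv")  # hypothetical: one row per patient

    # met_group coding: 0 = normal (reference), 1 = visceral fat obesity,
    # 2 = pre-metabolic syndrome, 3 = metabolic syndrome
    model = smf.mnlogit(
        "met_group ~ polypharmacy + age + sex + smoking + exercise", data=df
    ).fit(disp=False)

    # Exponentiated coefficients are adjusted odds ratios (AORs) versus the
    # reference category; exponentiated conf_int() gives the 95% CIs.
    print(np.exp(model.params))
    print(np.exp(model.conf_int()))

With this setup, the coefficient on polypharmacy in the pre-metabolic syndrome equation plays the role of the reported AOR of 2.348.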
Overall, while there is evidence to suggest that antipsychotic polypharmacy may be associated with an increased risk for metabolic syndrome, the relationship is not straightforward and may be influenced by various factors, including the specific medications used, patient demographics, clinical characteristics, and lifestyle factors. Further robust studies are needed to clarify the relationship between antipsychotic polypharmacy and metabolic syndrome (PUBMED:30396766, PUBMED:23413275, PUBMED:29057817, PUBMED:15823759).
Instruction: Is Prophylactic Intervention More Cost-effective Than the Treatment of Pathologic Fractures in Metastatic Bone Disease? Abstracts: abstract_id: PUBMED:27020430 Is Prophylactic Intervention More Cost-effective Than the Treatment of Pathologic Fractures in Metastatic Bone Disease? Background: Metastatic bone disease is a substantial burden to patients and the healthcare system as a whole. Metastatic disease can be painful, is associated with decreased survival, and is emotionally traumatic to patients when they discover their disease has progressed. In the United States, more than 250,000 patients have metastatic bone disease, with an estimated annual cost of USD 12 billion. Prior studies suggest that patients who receive prophylactic fixation for impending pathologic fractures, compared with those treated for realized pathologic fractures, have decreased pain levels, faster postoperative rehabilitation, and less in-hospital morbidity. However, to our knowledge, the relative economic utility of these treatment options has not been examined. Questions/purposes: We asked: (1) Is there a cost difference between a cohort of patients treated surgically for pathologic fractures compared with a cohort of patients treated prophylactically for impending pathologic lesions? (2) Do these cohorts differ in other ways regarding their utilization of healthcare resources? Methods: We performed a retrospective study of 40 patients treated at our institution. Between 2011 and 2014, we treated 46 patients surgically for metastatic lesions of long bones. Of those, 19 (48%) presented with pathologic fractures; the other 21 patients (53%) underwent surgery for impending fractures. Risk of impending fracture was determined by one surgeon based on appearance of the lesion, subjective symptoms of the patient, cortical involvement, and location of the lesion. At 1 year postoperatively, four patients in each group had died. Six patients (13%) were treated for metastatic disease but were excluded from the retrospective data because of a change in the medical record system and inability to obtain financial records. Variables of interest included total and direct costs per episode of care, days of hospitalization, discharge disposition, 1-year postoperative mortality, and descriptive demographic data. All costs were expressed as a cost ratio between the two cohorts, and total differences between the groups, as required per medical center regulations. All data were collected by one author and the medical center's financial office. Results: Mean total cost was higher in patients with pathologic fractures (cost unit [CU], 642 ± 519) than in those treated prophylactically without fractures (CU, 370 ± 171; mean difference, 272; 95% CI, 19-525; p = 0.036). In USD, this translates to a mean of nearly USD 21,000 less for prophylactic surgery. Mean direct cost was 41% higher (nearly USD 12,000) in patients with a pathologic fracture (CU, 382 ± 300 versus 227 ± 93; mean difference, 155; 95% CI, 9-300; p = 0.038). Mean length of stay was longer in patients with pathologic fractures compared with the group treated prophylactically (8 ± 6 versus 4 ± 3 days; mean difference, 4; 95% CI, 1-7; p = 0.01). Conclusions: These findings show the economic and clinical value of prophylactic stabilization of metastatic lesions when performed for patients with painful lesions compromising the structural integrity of long bones.
Patients sustaining a pathologic fracture may represent a more severe, sicker demographic than patients treated for impending pathologic lesions. Level Of Evidence: Level IV, economic and decision analysis. abstract_id: PUBMED:17168210 The role of external beam radiotherapy in the management of bone metastases. External beam radiotherapy is effective in the management of bone metastases for both local and more widespread pain. It is effective in spinal canal compression and pathological fracture, where it also may have a prophylactic role. Single-dose radiotherapy for bone metastases is a highly cost-effective palliative treatment. abstract_id: PUBMED:30523763 Early Clinical and Economic Outcomes of Prophylactic and Acute Pathologic Fracture Treatment. Introduction: Pathologic fractures often contribute to adverse events in metastatic bone disease, and prophylactic fixation offers a means to mitigate their effects. This study aims to analyze patient selection, complications, and in-hospital costs that are associated with prophylactic fixation compared with traditional acute fixation after completed fracture. Materials And Methods: The Nationwide Inpatient Sample database was queried from 2002 to 2014 for patients with major extremity pathologic fractures. Patients were divided by fixation technique (prophylactic or acute) and fracture location (upper or lower extremity). Patient demographics, comorbidities, complications, hospitalization length, and hospital charges were compared between cohorts. Preoperative variables were analyzed for potential confounding, and chi-squared tests and Student's t tests were used to compare fixation techniques. Results: Cumulatively, 43,920 patients were identified, with 14,318 and 28,602 undergoing prophylactic and acute fixation, respectively. Lower extremity fractures occurred in 33,582 patients, and 10,333 patients had upper extremity fractures. A higher proportion of prophylactic fixation patients were white (P = .043), male (P = .046), age 74 years or younger (P < .001), and privately insured (P < .001), with decreased prevalence of obesity (P = .003) and/or preoperative renal disease (P = .008). Prophylactic fixation was also associated with decreased peri- and postoperative blood transfusions (P < .001), anemia (P < .001), acute renal failure (P = .010), and in-hospital mortality (P = .031). Finally, prophylactic fixation had decreased total charges (-$3,405; P = .001), hospitalization length (P = .004), and extended length of stay (greater than 75th percentile; P = .012). Conclusion: Prophylactic fixation of impending pathologic fractures is associated with decreased complications, hospitalization length, and total charges, and should be considered in appropriate patients.
Questions/purposes: Is prophylactic femur stabilization in patients with metastatic disease associated with different overall survival than fixation of a complete pathologic fracture? Methods: We performed a retrospective, comparative study using the national Veterans Administration database. All patient records from September 30, 2010 to October 1, 2015 were queried. Only nonarthroplasty procedures were included. The final study sample included 950 patients (94% males); 362 (38%) received prophylactic stabilization of a femoral lesion, and 588 patients (62%) underwent fixation of a pathologic femur fracture. Mean followup duration was 2 years (range, 0-7 years). We created prophylactic stabilization and pathologic fracture fixation groups for comparison using Current Procedural Terminology and ICD-9 codes. The primary endpoint of the analysis was overall survival. Univariate survival was estimated using the Kaplan-Meier method; between-group differences were compared using the log-rank test. Covariate data were used to create a multivariate Cox proportional hazards model for survival to adjust for confounders in the two groups, including Gagne comorbidity score and cancer type. Results: After adjusting for comorbidities and cancer type, we found that patients treated with prophylactic stabilization had a lower risk of death than did patients treated for pathologic femur fracture (hazard ratio = 0.75, 95% CI, 0.62-0.89; p = 0.002). Conclusions: In the national Veterans Administration database, we found greater overall survival in patients undergoing prophylactic stabilization of metastatic femoral lesions than in those undergoing fixation of complete pathologic fractures. We could not determine the cause of this association, and it is possible, if not likely, that patients treated for fracture had more aggressive disease causing the fracture than did those undergoing prophylactic stabilization. Currently, most orthopaedic surgeons who treat pathological fractures stabilize the fracture prophylactically when reasonable to do so. We may be improving survival in addition to preventing a pathological fracture; further study is needed to determine whether the association is cause-and-effect and whether additional efforts to identify and treat at-risk lesions improve patient outcomes. Level Of Evidence: Level III, therapeutic study. abstract_id: PUBMED:32551337 Finite element models for fracture prevention in patients with metastatic bone disease. A literature review. Patients with bone metastases have an increased risk of sustaining a pathological fracture as lytic metastatic lesions damage and weaken the bone. In order to prevent fractures, prophylactic treatment is advised for patients with a high fracture risk. Mechanical stabilization of the femur can be provided through femoroplasty, a minimally invasive procedure where bone cement is injected into the lesion, or through internal fixation with intra- or extramedullary implants. Clinicians face the task of determining whether or not prophylactic treatment is required and which treatment would be optimal. Finite element (FE) models are promising tools that could support this decision process. The aim of this paper is to provide an overview of the state-of-the-art in FE modeling for the treatment decision of metastatic bone lesions in the femur. First, we will summarize the clinical and mechanical results of femoroplasty as a prophylactic treatment method.
Secondly, current FE models for fracture risk assessment of metastatic femurs will be reviewed and the remaining challenges for clinical implementation will be discussed. Thirdly, we will elaborate on the simulation of femoroplasty in FE models and discuss future opportunities. Femoroplasty has already proven to effectively relieve pain and improve functionality, but there remains uncertainty whether it provides sufficient mechanical strengthening to prevent pathological fractures. FE models could help to select appropriate candidates for whom femoroplasty provides sufficient increase in strength and to further improve the mechanical benefit by optimizing the locations for cement augmentation. abstract_id: PUBMED:31344005 Prophylactic Versus Postfracture Stabilization for Metastatic Lesions of the Long Bones: A Comparison of 30-day Postoperative Outcomes. Introduction: The goals of orthopaedic treatment for most patients with osseous metastases are to control pain, maintain function, and maximize quality of life and time at home. The aim of this study was to determine differences in 30-day postoperative morbidity and mortality between patients who underwent prophylactic versus postfracture stabilization for metastatic lesions of long bones. Methods: The American College of Surgeons National Surgical Quality Improvement Program database was queried for patients who underwent prophylactic fixation (n = 461) or postfracture stabilization (n = 856) for pathologic fractures because of metastatic lesions of long bones from 2006 to 2016. The groups were compared with respect to several potential confounders using Student t, Kruskal-Wallis, and χ2 tests. Logistic and Poisson regression models (inclusion threshold of P < 0.1) were used to assess the associations of functional status with outcomes. The alpha level was set at 0.05. Results: Prophylactic fixation was associated with a lower risk of major medical complications (odds ratio = 0.64; 95% confidence interval [CI], 0.45 to 0.93; P = 0.02), discharge to a care facility rather than home (odds ratio = 0.48; 95% CI, 0.36 to 0.63; P < 0.01), and lower risk of a longer hospital stay (incidence risk ratio = 0.86; 95% CI, 0.74 to 0.96; P = 0.01) compared with postfracture stabilization. No significant difference was found in the risk of unplanned revision surgery or 30-day postoperative mortality between the two groups. Conclusion: Although prevention of pathologic fractures caused by metastatic disease may not always be possible, patients who underwent prophylactic stabilization had a lower risk of major complications within 30 days postoperatively and shorter hospital stays compared with patients who underwent postfracture stabilization. Level Of Evidence: Level IV, retrospective cohort. abstract_id: PUBMED:27113593 Erratum to: Is Prophylactic Intervention More Cost-effective Than the Treatment of Pathologic Fractures in Metastatic Bone Disease? N/A abstract_id: PUBMED:28105069 The current status of prophylactic femoral intramedullary nailing for metastatic cancer. The most common site for cancer to spread is bone. At post-mortem, bony metastases have been found in 70% of patients dying from breast and prostate cancer. Due to the prevalence of cancer, bone metastasis and the associated management represents a huge burden on NHS resources. In patients with metastasis, around 56% of these involve the lower limb long bones.
Due to the huge forces placed upon long bones during weight bearing, there is a high risk of fracture through areas of metastasis. It is reported that 23% of pathological fractures occur in the femoral subtrochanteric region. This area is subjected to forces up to four times the body weight, resulting in a poor union rate for these fractures, and significant morbidity associated with difficulty in mobilising and in patient nursing. As cancer treatments improve, the life expectancy in this subgroup of patients is likely to increase. Therefore, medium-to-long-term management of these fractures, beyond the palliative, will become essential. We aim to evaluate the current management for metastatic malignant femoral disease, with particular focus on the prophylactic augmentation of diseased femora using intramedullary nails. abstract_id: PUBMED:1720013 An evaluation of the potential cost reductions resulting from the use of clodronate in the treatment of metastatic carcinoma of the breast to bone. The reported studies of clodronate in the management of osteolytic bone metastases suggest a significant palliative role for this drug. In this paper we report an analysis of the hospital costs associated with the management of osteolytic metastatic disease, and an estimate of the potential cost/benefit impact of clodronate therapy. Two separate patient populations were assessed retrospectively. The first, a sample of 120 patients with symptomatic bone metastases who had died from metastatic breast cancer over the period 1980-1990, was used to define the natural history of the disease. A second non-concurrent patient group of 337 patients was evaluated to determine the mean cost of all hospital admissions for patients with bone metastases from breast carcinoma. The length of stay and costs for hospital admissions related to the bone metastases were also assessed, in addition to the cost of out-patient radiation therapy. Our cost/benefit value analysis suggests that there are significant savings to be gained from the use of clodronate if a 20% or greater reduction occurs in the incidence of fractures, hypercalcaemia, and hospital-based treatment for pain control (via radiotherapy). We also speculate that the quality of life of patients with osteolytic bone metastases may be improved with this agent. Answer: Yes, prophylactic intervention appears to be more cost-effective than the treatment of pathologic fractures in metastatic bone disease. A retrospective study of 40 patients treated for metastatic lesions of long bones showed that the mean total cost was significantly higher for patients with pathologic fractures compared to those treated prophylactically for impending fractures. Mean total cost was nearly USD 21,000 lower with prophylactic surgery, and mean direct cost was 41% higher (nearly USD 12,000 more) for patients with a pathologic fracture. Additionally, the mean length of stay was longer for patients with pathologic fractures (PUBMED:27020430). Another study using the Nationwide Inpatient Sample database found that prophylactic fixation of impending pathologic fractures was associated with decreased complications, hospitalization length, and total charges, suggesting that prophylactic fixation should be considered in appropriate patients (PUBMED:30523763).
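The cost and complication contrasts summarized here are simple two-group comparisons; the first study explicitly used the Fisher exact test. A minimal sketch of that style of analysis, with invented placeholder numbers rather than the studies' data:

    # Illustrative two-group comparison of per-episode costs and complication
    # counts, loosely following PUBMED:27020430. All numbers are made up.
    import numpy as np
    from scipy import stats

    prophylactic_costs = np.array([310.0, 420.0, 365.0, 295.0, 450.0])
    fracture_costs = np.array([540.0, 700.0, 615.0, 880.0, 505.0])

    # Welch's t-test avoids assuming equal variances between cohorts.
    t_stat, p_cost = stats.ttest_ind(prophylactic_costs, fracture_costs,
                                     equal_var=False)

    # 2x2 table: rows = cohort, columns = (complication, no complication).
    table = [[7, 9],    # prophylactic group (hypothetical counts)
             [13, 3]]   # fracture group
    odds_ratio, p_comp = stats.fisher_exact(table)

    print(f"cost difference: p = {p_cost:.3f}")
    print(f"complications: OR = {odds_ratio:.2f}, p = {p_comp:.3f}")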
Furthermore, a study using the national Veterans Administration database indicated that patients treated with prophylactic stabilization had a lower risk of death than those treated for pathologic femur fractures, suggesting a potential survival benefit in addition to cost savings (PUBMED:32168065). The American College of Surgeons National Surgical Quality Improvement Program database also showed that patients who underwent prophylactic stabilization had a lower risk of major complications within 30 days postoperatively and shorter hospital stays compared with patients who underwent postfracture stabilization (PUBMED:31344005). In summary, the evidence from these studies suggests that prophylactic intervention for metastatic bone disease is more cost-effective than treating pathologic fractures after they occur, with additional benefits including reduced hospital stays, complications, and potentially improved survival rates.
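The survival finding above (adjusted hazard ratio 0.75) reflects a Kaplan-Meier comparison followed by a multivariable Cox model. A minimal sketch of that workflow with the lifelines library; the file and column names (time_years, died, prophylactic, comorbidity_score, cancer_type) are assumptions for illustration, not the registry's actual variables.

    # Kaplan-Meier plus adjusted Cox model, mirroring the approach described
    # in PUBMED:32168065. All data and names are hypothetical.
    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter
    from lifelines.statistics import logrank_test

    df = pd.read_csv("femur_cohort.csv")  # assumed columns listed above

    prophy = df[df["prophylactic"] == 1]
    fracture = df[df["prophylactic"] == 0]

    # Unadjusted comparison: Kaplan-Meier estimate and a log-rank test.
    km = KaplanMeierFitter().fit(prophy["time_years"], prophy["died"])
    lr = logrank_test(prophy["time_years"], fracture["time_years"],
                      event_observed_A=prophy["died"],
                      event_observed_B=fracture["died"])
    print(f"median survival, prophylactic: {km.median_survival_time_:.2f} y")
    print(f"log-rank p = {lr.p_value:.3f}")

    # Adjusted comparison: Cox model with comorbidity score and cancer type;
    # cancer_type is one-hot encoded so the frame is fully numeric.
    cph = CoxPHFitter()
    cph.fit(pd.get_dummies(df, columns=["cancer_type"], drop_first=True),
            duration_col="time_years", event_col="died")
    cph.print_summary()  # exp(coef) for 'prophylactic' is the hazard ratio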
Instruction: Do children with cystic fibrosis receiving outreach care have poorer clinical outcomes than those treated at a specialist cystic fibrosis centre? Abstracts: abstract_id: PUBMED:27859825 Do children with cystic fibrosis receiving outreach care have poorer clinical outcomes than those treated at a specialist cystic fibrosis centre? Introduction: Although cystic fibrosis (CF) centre care is generally considered ideal, children living in regional Australia receive outreach care supported by the academic CF centres. Methods: This is a retrospective database review of children with CF treated at the Royal Children's Hospital in Melbourne and its outreach clinics in Albury (Victoria), and Tasmania. The aim was to compare the outcomes of children with CF managed at an academic centre with those of outreach care, using lung function, nutritional status and Pseudomonas aeruginosa colonisation. Three models of care, namely CF centre care, Shared care and predominantly Local care, were compared, based on the level of involvement of the CF centre multidisciplinary team. In our analyses, we controlled for potential confounders, such as socio-economic status and the degree of remoteness, to determine their effect on the outcome measures. Results: There was no difference in lung function, i.e. forced expiratory volume in 1 s (FEV1), the prevalence of Pseudomonas aeruginosa colonisation or nutritional status (body mass index (BMI)) between those receiving CF centre care and various modes of outreach care. Neither socio-economic status, measured by the Socio-Economic Index for Area (SEIFA) for disadvantage, nor distance from an urban centre (Australian Standard for Geographical Classification (ASGC)) was associated with lung function and nutritional outcome measures. There was, however, an association between increased Pseudomonas aeruginosa colonisation and poorer socio-economic status. Conclusion: Outcomes in children with CF in regional and remote areas receiving outreach care supported by an academic CF centre were no different from those of children receiving CF centre care. abstract_id: PUBMED:18241167 Clinical outcomes of Queensland children with cystic fibrosis: a comparison between tertiary centre and outreach services. Objective: To evaluate and compare the clinical outcomes of children with cystic fibrosis (CF) managed primarily at a tertiary cystic fibrosis centre (CFC) with those treated at regional centres by local health care professionals and the cystic fibrosis outreach service (CFOS). Design, Setting And Patients: Retrospective study of 273 children with CF born between 19 October 1982 and 19 February 2002 and with clinical data available between 1 January 2000 and 31 December 2002. Patients were grouped into CFC (n = 131) or CFOS (n = 142), with CFOS then further categorised into three groups depending on the level of care they received. Main Outcome Measures: Pulmonary function, Pseudomonas aeruginosa status, height and weight z scores, and hospital admission rates. Results: There were no significant differences in pulmonary function, P. aeruginosa status, or height and weight z scores between children managed by CFC or by CFOS. Children receiving more care at the CFC (level of care [LOC] 1 and 2) were more likely to have multiple hospital admissions than children receiving more care in regional areas (LOC 3 and 4) (P < 0.001). Conclusion: The CFOS model provides effective delivery of specialised multidisciplinary care to children and adolescents living in rural and regional Queensland.
abstract_id: PUBMED:23473788 Adults with cystic fibrosis should be treated at a specialist centre. Cystic fibrosis is a complex multisystem disease. It is recommended that clinical care for adults with cystic fibrosis should be delivered through specialist centres staffed by multidisciplinary teams and assisted by appropriate support services. The centre should be able to meet the national service specifications for adult CF care including the provision of emergency cover and advice for patients, 24 hours a day, 7 days a week. Specialist centre care has been shown to be associated with increased patient satisfaction and improved clinical outcomes for adults with CF. abstract_id: PUBMED:24856776 European Cystic Fibrosis Society Standards of Care: Framework for the Cystic Fibrosis Centre. A significant increase in life expectancy in successive birth cohorts of people with cystic fibrosis (CF) is a result of more effective treatment for the disease. It is also now widely recognized that outcomes for patients cared for in specialist CF Centres are better than for those who are not. Key to the effectiveness of the specialist CF Centre is the multidisciplinary team (MDT), which should include consultants, clinical nurse specialist, microbiologist, physiotherapist, dietitian, pharmacist, clinical psychologist, social worker, clinical geneticist and allied healthcare professionals, all of whom should be experienced in CF care. Members of the MDT are also expected to keep up to date with developments in CF through continued professional development, attendance at conferences, auditing and involvement in research. Specialist CF Centres should also network with other Centres both nationally and internationally, and feed Centre data to registries in order to further the understanding of the disease. This paper provides a framework for the specialist CF Centre, including the organisation of the Centre and the individual roles of MDT members, as well as highlighting the value of CF organisations and disease registries. abstract_id: PUBMED:18838309 Early referral to cystic fibrosis specialist centre impacts on respiratory outcome. Background: Published studies concerning the impact of specialist care on lung disease in cystic fibrosis remain limited and most are either biased due to comparison with historical controls and/or underpowered. Methods: In this retrospective multicentric study, data from all CF children fulfilling the following criteria were collected: 1) Age 6 to <18 at the end of 2003; 2) diagnosis before 8 y; 3) follow-up in an accredited CF Belgian centre; 4) at least 1 spirometry and respiratory culture available for 2003. Group A included children referred ≥2 years after the diagnosis. Patients from Group A were then matched with a single early-referred patient on the basis of 2 criteria: same centre, and the closest age possible (Group B). Results: Data from 217 children were collected (Group A: 67/217). Late-referred patients had a lower FEV1 (77.2% ± 22.4 vs 86.7% pred. ± 19.4, p=0.01) and a higher prevalence of Pseudomonas aeruginosa (38.6 vs 17.5%, p<0.05). Conclusion: In this population of CF children, a delay of 6.1 y (vs 0.1 y) between diagnosis and referral to a specialist clinic resulted in poorer respiratory outcome at age 13. abstract_id: PUBMED:31475469 Children with bronchiectasis have poorer lung function than those with cystic fibrosis and do not receive the same standard of care.
Background: Children with cystic fibrosis (CF) are routinely managed in a multidisciplinary clinic at tertiary pediatric centers. However, children with bronchiectasis may not be managed in the same way. We sought to compare the management model and clinical outcomes of children with bronchiectasis with children diagnosed with CF, in a single pediatric center. Methods: We identified patients with bronchiectasis from hospital medical records at an urban tertiary pediatric hospital and identified a sex- and age-matched CF patient at the same center to compare lung function, nutritional status, frequency of physiotherapy and respiratory physician visits, and number of microbiological samples taken for bacterial culture. Results: Twenty-two children with bronchiectasis were identified; mean (standard deviation [SD]) age was 11 (3) years. The most common known etiology for bronchiectasis was postinfective (6 of 22) but was unknown in 8 of 22. The cohort with bronchiectasis had poorer lung function (FEV1 mean [SD] percent predicted 78.6 [20.5] vs 94.5 [14.7], P = .005) and had fewer outpatient reviews by the respiratory physician (P < .001) and respiratory physiotherapist (P < .001) when compared to those with CF. Nutritional parameters did not differ between the groups. Many children (10 of 22, 45%) with bronchiectasis did not have any microbiological respiratory tract samples taken for evaluation. Conclusion: Children with bronchiectasis at this institution have poorer lung function than children with CF, and are deserving of improved multidisciplinary care. abstract_id: PUBMED:32037704 Centralised versus outreach models of cystic fibrosis care should be tailored to the needs of the individual patient. Cystic fibrosis (CF) is a common life-limiting genetic condition. As the disease progresses, access to specialist tertiary multi-disciplinary care services may become necessary. For patients living in regional/remote Australia, accessing such services may be a challenge. Here, we describe long-term outcomes for CF patients according to their access to specialist CF centre care in childhood. abstract_id: PUBMED:21317431 Full, shared and hybrid paediatric care for cystic fibrosis in South and Mid Wales. Background: Although care for children with cystic fibrosis (CF) is increasingly shared between CF centres and local CF clinics, the optimal model is unclear. Objectives: The authors compared three models of care within a well-established CF network: full centre care; local clinic-based care with annual review by the CF centre; and hybrid care, where the child is usually reviewed at least three times a year by the specialist CF centre. Results: Of 199 children and young people with CF in South and Mid Wales, 77 were receiving full care, 102 shared care and 20 hybrid care. There were no significant differences in baseline characteristics, nutritional outcomes or use of chronic therapies. There was, however, a statistically significant difference between full, shared and hybrid care in mean forced expiratory volume in 1 s (FEV1) per cent predicted (89.2% vs 74.5% vs 88.9%; p=0.001). Conclusions: These differences in pulmonary function are likely to reflect the model of care received, and may affect long term outcomes. abstract_id: PUBMED:23422794 A pilot outreach physiotherapy and dietetic quality improvement initiative reduces IV antibiotic requirements in children with moderate-severe cystic fibrosis.
Background: At our hospital, the current model of care for children with moderate-severe CF is focused on intensive inpatient intervention, regular outpatient clinic review and specialist outreach care as required. An alternative model providing more regular physiotherapy and dietetic outreach support, in addition to these specialist services, may be more effective. Methods: 16 children (4 male; 12 female; mean age 10.9±2.93; range 4-15 years) who required >40 days of IV antibiotics in the 12 months pre-intervention were enrolled. Physiotherapy included weekly supervised exercise sessions, alongside regular review of home physiotherapy regimens. Dietetic management included 1-2-monthly monitoring of growth, appetite, intake and absorption, and nutrition education sessions. Results: There was a 23% reduction in inpatient IV antibiotic requirement and 20% reduction in home IV antibiotic requirement during the intervention year. Cost-benefit analyses showed savings of £113,570. VO2peak increased by 4.9 ml·kg⁻¹·min⁻¹ (95% CI 1.01 to 8.71; p=0.02), and 10 m-MSWT distance and increment achieved increased by 229 m (95% CI 109 to 350; p<0.001) and 2 levels (95% CI 1 to 3; p<0.002), respectively. No significant differences in physiological and patient-reported outcomes were demonstrated, although there was a possible trend towards improvement in outcomes when compared to the pre-intervention year. Conclusion: This pilot programme demonstrated a reduction in IV and admission requirements with a cost benefit in a small group of children with moderate-severe CF. A fully powered clinical trial is now warranted. abstract_id: PUBMED:24856777 European Cystic Fibrosis Society Standards of Care: Quality Management in cystic fibrosis. Since the earliest days of cystic fibrosis (CF) treatment, patient data have been recorded and reviewed in order to identify the factors that lead to more favourable outcomes. Large data repositories, such as the US Cystic Fibrosis Registry, which was established in the 1960s, enabled successful treatments and patient outcomes to be recognized and improvement programmes to be implemented in specialist CF centres. Over the past decades, the greater volumes of data becoming available through Centre databases and patient registries led to the possibility of making comparisons between different therapies, approaches to care and indeed data recording. The quality of care for individuals with CF has become a focus at several levels: patient, centre, regional, national and international. This paper reviews the quality management and improvement issues at each of these levels with particular reference to indicators of health, the role of CF Centres, regional networks, national health policy, and international data registration and comparisons. Answer: Children with cystic fibrosis (CF) receiving outreach care do not necessarily have poorer clinical outcomes compared to those treated at a specialist cystic fibrosis centre. Studies have shown that there is no significant difference in lung function, Pseudomonas aeruginosa status, or nutritional status (height and weight z scores) between children managed by a cystic fibrosis centre (CFC) and those treated by cystic fibrosis outreach services (CFOS) (PUBMED:18241167). Similarly, another study found no difference in lung function, prevalence of Pseudomonas aeruginosa colonisation, or nutritional status (body mass index) between children receiving CF centre care and various modes of outreach care (PUBMED:27859825).
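Comparisons like these typically reduce to two-group tests: a t-test (or nonparametric analogue) for continuous outcomes such as FEV1 percent predicted, and a chi-squared test for categorical outcomes such as Pseudomonas aeruginosa colonisation. A minimal sketch with invented placeholder values, not the studies' data:

    # Illustrative centre-care vs outreach-care comparison in the style of
    # PUBMED:18241167 / PUBMED:27859825. All numbers are hypothetical.
    import numpy as np
    from scipy import stats

    fev1_centre = np.array([92.0, 85.0, 101.0, 78.0, 95.0])    # % predicted
    fev1_outreach = np.array([89.0, 94.0, 83.0, 99.0, 90.0])

    t_stat, p_fev1 = stats.ttest_ind(fev1_centre, fev1_outreach,
                                     equal_var=False)

    # Colonisation: 2x2 table, rows = care model,
    # columns = (colonised, not colonised).
    table = np.array([[30, 101],
                      [35, 107]])
    chi2, p_pa, dof, expected = stats.chi2_contingency(table)

    print(f"FEV1: p = {p_fev1:.2f}")
    print(f"Pseudomonas colonisation: chi2 = {chi2:.2f}, p = {p_pa:.2f}")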
Moreover, the CF outreach service model has been found to provide effective delivery of specialised multidisciplinary care to children and adolescents living in rural and regional areas (PUBMED:18241167). It is also noted that the outcomes in children with CF in regional and remote areas receiving outreach care supported by an academic CF centre were no different from those of children receiving CF centre care (PUBMED:27859825). However, it is important to note that the quality of care for individuals with CF should be a focus at several levels, including patient, centre, regional, national, and international, and that quality management and improvement issues are relevant at each of these levels (PUBMED:24856777). The European Cystic Fibrosis Society Standards of Care also emphasize the importance of a multidisciplinary team and the organization of specialist CF Centres, highlighting the value of CF organizations and disease registries (PUBMED:24856776). In conclusion, the evidence suggests that children with CF can receive care through outreach services comparable to that provided at specialist CF centres, and the choice between centralized and outreach models of care should be tailored to the needs of the individual patient (PUBMED:32037704).
Instruction: Do subject characteristics modify the effects of particulate air pollution on daily mortality among the elderly? Abstracts: abstract_id: PUBMED:15534498 Do subject characteristics modify the effects of particulate air pollution on daily mortality among the elderly? Objective: Studies have reported associations between mortality and air pollution, but questions remain about the identification of susceptible subgroups in the population. We studied individual characteristics that modify the relationship between particulate air pollution and mortality among the elderly. Method: We examined 527 nonaccidental deaths (197 cardiorespiratory deaths) among the 1469 subjects from the Personnes Agees QUID cohort in Bordeaux between 1988 and 1997. Air pollution was measured as black smoke by urban monitoring background stations. We used a case-crossover approach and calculated odds ratios with conditional logistic regression models. Results: We observed associations between the third lag day and cardiorespiratory mortality for an increase of 10 microg/m3 of black smoke (odds ratio = 1.30, 95% confidence interval: 1.01-1.68). Conclusions: Our results provide insight into factors possibly conferring susceptibility to the acute effect of urban air pollution. abstract_id: PUBMED:30999903 Particulate air pollution on cardiovascular mortality in the tropics: impact on the elderly. Background: Air pollution has a significant health impact. Most data originate from temperate regions. We aim to study the health impact of air pollution, particularly among the elderly, in a tropical region. Methods: A daily time-series analysis was performed to estimate excess risk (ER) of various air pollutants on daily death counts amongst the general population in Singapore from 2001 to 2013. Air pollutants included particulate matter smaller than 10 μm and 2.5 μm (PM10, PM2.5), carbon monoxide (CO), nitrogen dioxide (NO2), ozone (O3) and sulphur dioxide (SO2). The studied outcomes were non-accidental and cardiovascular mortality. Single-day lag and distributed lag models were studied and adjusted for confounders. Results: In single-day lag models, a 10 μg/m3 increase in particulate matter was associated with significant increases in non-accidental (PM10 ER: 0.627%; 95% confidence interval (CI): 0.260-0.995% and PM2.5 ER: 0.660%; 95% CI: 0.204-1.118%) and cardiovascular mortality (PM10 ER: 0.897%; 95% CI: 0.283-1.516 and PM2.5 ER: 0.883%; 95% CI: 0.121-1.621%). This was significant in the elderly ≥ 65 years but not in those < 65 years, and was seen in the acute phase of lag 0-5 days. Effects of other pollutants were minimal. For cardiovascular mortality, the effects turned protective at a cumulative lag of 30 days in the elderly and could be due to "harvesting". Conclusions: These first contemporary population-based data from an equatorial country with a tropical climate show that exposure to particulate air pollution was significantly associated with non-accidental mortality and cardiovascular mortality, especially in the elderly. abstract_id: PUBMED:15016591 Difference in the relation between daily mortality and air pollution among elderly and all-ages populations in southwestern France. Numerous time series studies around the world have reported an association between mortality and particulate air pollution. We investigated the distribution over time of the effect of air pollution on short-term mortality among subjects aged 65 years and older and of all ages in Bordeaux, France.
Statistical analysis was based on generalized additive models using either loess or penalized spline smoothing. Our study found a significant positive association between air pollution and all nonaccidental mortality and specific mortality in both population groups (all ages and elderly), with a greater effect among the elderly, particularly for respiratory mortality. For this outcome, we observed a greater effect according to distributed lag models (0-5 days) among the elderly, with an estimated increase of 9.2% in the daily number of deaths for 10 microg/m3 of daily black smoke [95% CI, 3.4-15.3]. These results contribute to the efforts made to understand how air pollution promotes adverse health effects and to identify susceptible subgroups. abstract_id: PUBMED:27713111 Air Pollution and Deaths among Elderly Residents of São Paulo, Brazil: An Analysis of Mortality Displacement. Background: Evaluation of short-term mortality displacement is essential to accurately estimate the impact of short-term air pollution exposure on public health. Objectives: We quantified mortality displacement by estimating single-day lag effects and cumulative effects of air pollutants on mortality using distributed lag models. Methods: We performed a daily time-series analysis of nonaccidental and cause-specific mortality among elderly residents of São Paulo, Brazil, between 2000 and 2011. Effects of particulate matter smaller than 10 μm (PM10), nitrogen dioxide (NO2) and carbon monoxide (CO) were estimated in Poisson generalized additive models. Single-day lag effects of air pollutant exposure were estimated for 0-, 1- and 2-day lags. Distributed lag models with lags of 0-10, 0-20 and 0-30 days were used to assess mortality displacement and potential cumulative exposure effects. Results: PM10, NO2 and CO were significantly associated with nonaccidental and cause-specific deaths in both single-day lag and cumulative lag models. Cumulative effect estimates for 0-10 days were larger than estimates for single-day lags. Cumulative effect estimates for 0-30 days were essentially zero for nonaccidental and circulatory deaths but remained elevated for respiratory and cancer deaths. Conclusions: We found evidence of mortality displacement within 30 days for nonaccidental and circulatory deaths in elderly residents of São Paulo. We did not find evidence of mortality displacement within 30 days for respiratory or cancer deaths. Citation: Costa AF, Hoek G, Brunekreef B, Ponce de Leon AC. 2017. Air pollution and deaths among elderly residents of São Paulo, Brazil: an analysis of mortality displacement. Environ Health Perspect 125:349-354; http://dx.doi.org/10.1289/EHP98. abstract_id: PUBMED:31972932 Houseplant, indoor air pollution, and cardiovascular effects among elderly subjects in Taipei, Taiwan. Previous studies have documented the linkage between houseplant usage and indoor air quality (IAQ) improvement. However, the effect of houseplants on the association between indoor air quality and cardiovascular health is still unknown. We recruited 100 elderly subjects from 100 houses with (50) or without houseplants (50) in Taipei and conducted six 24-hour house visits for each elderly subject between 2015 and 2016. The particulate matter less than or equal to 2.5 μm in diameter (PM2.5), total volatile organic compounds (TVOCs), heart rate (HR) and blood pressure (BP) were continuously measured in each house visit. The association between indoor air pollution, BP and HR was investigated by mixed-effects models.
We found that the elderly subjects in houses without houseplants were exposed to higher indoor PM2.5 and TVOC levels compared to the elderly subjects' exposures in houses with houseplants. Also, the elderly subjects' HR, systolic BP (SBP) and diastolic BP (DBP) were higher in houses without houseplants than in houses with houseplants. Moreover, high indoor air pollution levels were associated with elevated BP and HR, especially among subjects in houses without houseplants. Accordingly, we concluded that indoor air pollution was associated with cardiovascular effects. Houseplants could improve IAQ and cardiovascular health among elderly subjects in Taipei, Taiwan. abstract_id: PUBMED:10160208 Particulate air pollution and daily mortality: who is at risk? Major episodes of air pollution, such as the London fog incident in 1952, have shown conclusively that relatively high levels of particulate air pollution can lead to serious adverse health effects, including death. Recent data suggest that exposure to respirable particulates [with an aerodynamic diameter ≤ 10 microns (PM10)] under the current American standard (150 micrograms/m3 for 24 h) is associated with increased daily mortality. These results have sparked an intense debate regarding the veracity of the reported associations and whether they should be interpreted causally. Most investigators have relied on a time-series approach to investigate the short-term effects of air pollution on daily mortality. Concerns have been expressed that the results may vary with the type of statistical methods used and the potential for uncontrolled and residual confounding effects. Of perhaps greater importance is that cause-specific mortality, temporal aspects of the exposure-disease relationship, and subgroups of the population have not been studied adequately. Under the hypothesis that only persons with impaired physiologic systems should be at risk, the identification of susceptible subgroups is of great importance in discerning causal mechanisms and in setting public health policies. In this paper, the main methodologic issues are discussed and a novel design to identify at-risk subgroups is presented. abstract_id: PUBMED:18813023 Ambient air pollution and daily mortality among survivors of myocardial infarction. Background: Certain subgroups in the general population, such as persons with existing cardiovascular or respiratory disease, may be more likely to experience adverse health effects from air pollution. Methods: In this European multicenter study, 25,006 myocardial infarction (MI) survivors in 5 cities were recruited from 1992 to 2002 via registers, and daily mortality was followed for 6 to 12 years in relation to ambient particulate and gaseous air pollution exposure. Daily air pollution levels were obtained from central monitor sites, and particle number concentrations were measured in 2001 and estimated retrospectively based on measured pollutants and meteorology. City-specific effect estimates from time-series analyses with Poisson regression were pooled over all 5 cities. Results: Particle number concentrations and PM10 averaged over 2 days (lag 0-1) were associated with increased total nontrauma mortality for patients of age 35 to 74 (5.6% [95% confidence interval, 2.8%-8.5%] per 10,000/cm3 and 5.1% [1.6%-9.3%] per 10 microg/m3, respectively). For longer averaging times (5 and 15 days), carbon monoxide and nitrogen dioxide were also associated with mortality. There were no clear associations with ozone or sulfur dioxide.
Conclusion: Exposure to traffic-related air pollution was associated with daily mortality in MI survivors. Point estimates suggest a stronger effect of air pollution in MI survivors than among the general population. abstract_id: PUBMED:16847936 Socioeconomic status, particulate air pollution, and daily mortality: differential exposure or differential susceptibility. Background: Short-term increases in particulate air pollution are linked with increased daily mortality and morbidity. Socioeconomic status (SES) is a determinant of overall health. We investigated whether social class is an effect modifier of the PM10 (particulate matter with diameter <10 μm)-daily mortality association, and possible mechanisms for this effect modification. Methods: Area-based traffic emissions, income, and SES were available for each resident in Rome. All natural deaths (83,253 subjects) occurring in Rome among city residents (aged 35+ years) during the period 1998-2001 were identified. For each deceased individual, all the previous hospitalizations within 2 years before death were available via a record linkage procedure. PM10 daily data were available from two urban monitoring sites. A case-crossover analysis was utilized in which control days were selected according to the time-stratified approach (same day of the week during the same month). Conditional logistic regression was used. Results: Due to the social class distribution in the city, exposure to traffic emissions was higher among those with higher area-based income and SES. Meanwhile, people of lower social class had suffered to a larger extent from chronic diseases before death than more affluent residents, especially diabetes mellitus, hypertension, heart failure, and chronic obstructive pulmonary diseases. Overall, PM10 (lag 0-1) was strongly associated with mortality (1.1% increase, 95% CI = 0.7-1.6%, per 10 microg/m3). The effect was more pronounced among persons with lower income and SES (1.9% and 1.4% per 10 microg/m3, respectively) compared to those in the upper income and SES levels (0.0% and 0.1%, respectively). Conclusions: The results confirm previous suggestions of a stronger effect of particulate air pollution among people in low social class. Given the uneven geographical distributions of social deprivation and traffic emissions in Rome, the most likely explanation is a differential burden of chronic health conditions conferring a greater susceptibility to less advantaged people. abstract_id: PUBMED:18795161 Season, sex, age, and education as modifiers of the effects of outdoor air pollution on daily mortality in Shanghai, China: The Public Health and Air Pollution in Asia (PAPA) Study. Background: Various factors can modify the health effects of outdoor air pollution. Prior findings about modifiers are inconsistent, and most of these studies were conducted in developed countries. Objectives: We conducted a time-series analysis to examine the modifying effect of season, sex, age, and education on the association between outdoor air pollutants [particulate matter < 10 μm in aerodynamic diameter (PM10), sulfur dioxide, nitrogen dioxide, and ozone] and daily mortality in Shanghai, China, using 4 years of daily data (2001-2004). Methods: Using a natural spline model to analyze the data, we examined effects of air pollution for the warm season (April-September) and cool season (October-March) separately. For total mortality, we examined the association stratified by sex and age.
Stratified analysis by educational attainment was conducted for total, cardiovascular, and respiratory mortality. Results: Outdoor air pollution was associated with mortality from all causes and from cardiorespiratory diseases in Shanghai. An increase of 10 μg/m3 in a 2-day average concentration of PM10, SO2, NO2, and O3 corresponds to increases in all-cause mortality of 0.25% (95% confidence interval (CI), 0.14-0.37), 0.95% (95% CI, 0.62-1.28), 0.97% (95% CI, 0.66-1.27), and 0.31% (95% CI, 0.04-0.58), respectively. The effects of air pollutants were more evident in the cool season than in the warm season, and females and the elderly were more vulnerable to outdoor air pollution. Effects of air pollution were generally greater in residents with low educational attainment (illiterate or primary school) compared with those with high educational attainment (middle school or above). Conclusions: Season, sex, age, and education may modify the health effects of outdoor air pollution in Shanghai. These findings provide new information about the effects of modifiers on the relationship between daily mortality and air pollution in developing countries and may have implications for local environmental and social policies. abstract_id: PUBMED:31433918 Ambient Particulate Air Pollution and Daily Mortality in 652 Cities. Background: The systematic evaluation of the results of time-series studies of air pollution is challenged by differences in model specification and publication bias. Methods: We evaluated the associations of inhalable particulate matter (PM) with an aerodynamic diameter of 10 μm or less (PM10) and fine PM with an aerodynamic diameter of 2.5 μm or less (PM2.5) with daily all-cause, cardiovascular, and respiratory mortality across multiple countries or regions. Daily data on mortality and air pollution were collected from 652 cities in 24 countries or regions. We used overdispersed generalized additive models with random-effects meta-analysis to investigate the associations. Two-pollutant models were fitted to test the robustness of the associations. Concentration-response curves from each city were pooled to allow global estimates to be derived. Results: On average, an increase of 10 μg per cubic meter in the 2-day moving average of PM10 concentration, which represents the average over the current and previous day, was associated with increases of 0.44% (95% confidence interval [CI], 0.39 to 0.50) in daily all-cause mortality, 0.36% (95% CI, 0.30 to 0.43) in daily cardiovascular mortality, and 0.47% (95% CI, 0.35 to 0.58) in daily respiratory mortality. The corresponding increases in daily mortality for the same change in PM2.5 concentration were 0.68% (95% CI, 0.59 to 0.77), 0.55% (95% CI, 0.45 to 0.66), and 0.74% (95% CI, 0.53 to 0.95). These associations remained significant after adjustment for gaseous pollutants. Associations were stronger in locations with lower annual mean PM concentrations and higher annual mean temperatures. The pooled concentration-response curves showed a consistent increase in daily mortality with increasing PM concentration, with steeper slopes at lower PM concentrations. Conclusions: Our data show independent associations between short-term exposure to PM10 and PM2.5 and daily all-cause, cardiovascular, and respiratory mortality in more than 600 cities across the globe. These data reinforce the evidence of a link between mortality and PM concentration established in regional and local studies.
(Funded by the National Natural Science Foundation of China and others.) Answer: Yes, subject characteristics do modify the effects of particulate air pollution on daily mortality among the elderly. Studies have identified various factors that can influence the susceptibility of elderly individuals to the harmful effects of air pollution. For instance, a study in Bordeaux found associations between particulate air pollution, measured as black smoke, and cardiorespiratory mortality among the elderly, suggesting that certain individual characteristics may confer susceptibility to the acute effects of urban air pollution (PUBMED:15534498). Similarly, research in Singapore showed that exposure to particulate matter was significantly associated with non-accidental and cardiovascular mortality, particularly in the elderly population (PUBMED:30999903). Further evidence from southwestern France indicated that the elderly population experienced a greater effect of air pollution on short-term mortality, especially for respiratory mortality, compared to the all-ages population (PUBMED:15016591). In São Paulo, Brazil, mortality displacement was observed within 30 days for nonaccidental and circulatory deaths among the elderly, but not for respiratory or cancer deaths, suggesting that the elderly may be more vulnerable to certain health outcomes related to air pollution (PUBMED:27713111). Additionally, a study in Taipei, Taiwan, found that houseplants could improve indoor air quality and cardiovascular health among elderly subjects, indicating that environmental modifications could mitigate the impact of air pollution on this demographic (PUBMED:31972932). Socioeconomic status has also been identified as a potential modifier, with lower social class individuals experiencing a stronger effect of particulate air pollution on mortality (PUBMED:16847936). Moreover, survivors of myocardial infarction, a group that overlaps substantially with the elderly population, were found to be more susceptible to the effects of traffic-related air pollution, with a stronger effect observed than among the general population (PUBMED:18813023). Finally, factors such as season, sex, age, and education level have been reported to modify the health effects of outdoor air pollution in Shanghai, China, with the elderly, females, and those with lower educational attainment being more vulnerable (PUBMED:18795161). In summary, subject characteristics, including age, pre-existing health conditions, socioeconomic status, and environmental factors, can modify the effects of particulate air pollution on daily mortality among the elderly, indicating the presence of differential susceptibility within this population group.
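A note on interpreting the effect sizes cited above: daily-mortality time-series and case-crossover studies typically fit log-linear models, so a reported percent increase per 10 μg/m³ corresponds to exp(10·β) − 1, where β is the fitted coefficient per μg/m³. The short Python sketch below illustrates only this arithmetic; the function name and the example coefficient are illustrative assumptions, not values taken from the cited studies.

    import math

    def percent_increase(beta_per_unit, delta=10.0):
        # Convert a log-linear (e.g., Poisson regression) coefficient into
        # the percent increase in daily mortality for a `delta` rise in
        # pollutant concentration.
        return (math.exp(beta_per_unit * delta) - 1.0) * 100.0

    # A coefficient of 0.00044 per microgram/m^3 reproduces the ~0.44%
    # per 10 microgram/m^3 figure reported for PM10 above.
    print(round(percent_increase(0.00044), 2))  # prints 0.44

For effect sizes this small, exp(10·β) − 1 is nearly identical to 10·β, which is why the reported percent increases scale almost linearly with concentration.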
Instruction: Mandibular Access Osteotomy for Tumor Ablation: Could a More Tissue-Preserving Technique Affect Healing Outcome? Abstracts: abstract_id: PUBMED:27131029 Mandibular Access Osteotomy for Tumor Ablation: Could a More Tissue-Preserving Technique Affect Healing Outcome? Purpose: Paramedian mandibulotomy facilitates access for the resection of tumors in the oral and oropharyngeal space; however, severe complications related to surgical techniques and radiotherapy have been reported for this procedure. This study evaluated whether preservation of the periosteum during a mandibulotomy would decrease postoperative complications owing to the increased healing capacity provided by preserving more tissue. Materials And Methods: Patients who underwent mandibulotomy for surgical tumor ablation from 2007 through 2012 were included in a retrospective controlled cohort study. The trial comprised 2 groups: 1 group underwent subperiosteal and 1 group underwent supraperiosteal surgical dissection in the area of the mandibulotomy. The primary predictor variable was surgical technique and the primary outcome variable was surgical complications. The groups were matched according to tumor type, age, and gender. Clinical and radiographic follow-up was performed 12 months after surgery. Complications regarding bone exposure, plate exposure, osteoradionecrosis, nonunion, infection of the soft tissue flap, abscesses, fistulas, and gingival necrosis were recorded. Other factors, such as age, smoking habits, and timing of radiotherapy and chemotherapy, were also recorded, and the 2 groups were compared for postoperative complications. Data analysis used the χ(2) test (Fisher exact test) to compare the sub- and supraperiosteal groups for postoperative complications. Results: Thirty-two patients were included (16 per group; 14 women and 18 men; mean age, 56 yr). Thirteen patients in the subperiosteal group and 7 in the supraperiosteal group showed complications during the first 12 months. Seven patients had complications that persisted to the 12-month follow-up (6 in the subperiosteal group and 1 in the supraperiosteal group); however, the difference was not statistically significant. Conclusion: This study found more persistent complications in the subperiosteal group compared with the supraperiosteal group at 12-month follow-up, which could imply that a more tissue-preserving surgical technique promotes mandibular healing in patients undergoing mandibular access osteotomy in combination with radiotherapy. abstract_id: PUBMED:33639503 Mandibular access osteotomy: Gate way to parapharyngeal space - A case report. Introduction: Parapharyngeal tumors often pose a challenge to surgeons for surgical intervention. Maxillofacial access osteotomies offer excellent visualization and permit unhindered surgical manipulation. Access osteotomy allows the surgeon adequate access to the surgical field to resect the tumor completely and to preserve vital structures. Though numerous techniques exist, selection of the proper technique is the key factor in reestablishing function and cosmesis. This article describes our experience with the mandibular swing approach, which facilitated complete removal of a parapharyngeal space tumor. Case Presentation: A 35-year-old female complained of deviation of the tongue to one side and swallowing difficulty. Clinical and radiographic examinations were suggestive of a skull base lesion involving the hypoglossal nerve.
After evaluation, the tumor was excised through a mandibulotomy approach. Postoperatively, the patient was completely relieved of her symptoms, without any postoperative sequelae. Clinical Discussion: Accessibility is the main concern while dealing with skull base lesions, but the success of surgery lies in the selection of the right approach. The paramedian mandibular swing approach has its own advantages over various other facial osteotomies. The swinging of the mandible gives the advantage of accessing the neck and skull base together, which is not possible with other facial osteotomies. Conclusion: Management of skull base tumors involves a multidisciplinary approach. Choosing the right approach is often a major dilemma. Access osteotomies of the facial skeleton are a gateway to skull base lesions. Of these, the mandibular swing approach is a good option for skull base tumors because of the ease of the surgical technique, even in inexperienced hands. abstract_id: PUBMED:20214838 Single, subcondylar mandibular osteotomy: a new access route for extensive, benign parapharyngeal neoplasms. Objective: We report a new approach to benign parapharyngeal space tumours: a single, subcondylar mandibular osteotomy. Method: Case report and review of the world literature concerning parapharyngeal space access and the various types of mandibular osteotomy. Results: The use of a single, subcondylar mandibular osteotomy achieves good exposure and satisfactory aesthetic and functional results, using a simple, easily performed technique that is fast and has minimal morbidity. In addition, this technique preserves the submandibular gland and avoids lip-splitting and post-operative intermaxillary fixation. Conclusion: To our knowledge, this is the first report of this approach to the parapharyngeal space. We propose this approach as the first choice for resection of benign neoplasms of the parapharyngeal space. abstract_id: PUBMED:32866578 Double Mandibular Osteotomy for Access to High-Carotid Pathology. Background: Anecdotal experience demonstrates the existence of patients with superiorly located carotid stenosis, neoplasms, or aneurysms where the mandible obstructs effective surgical access using standard techniques. As carotid pathology extends anatomically beyond the limits of standard operative technique, additional exposure becomes paramount to safely and effectively address the lesion. Double mandibular osteotomy (DMO) is one of several techniques to obtain additional exposure to high-carotid pathology; however, there is no large series to address the outcomes of patients undergoing this procedure. Methods: A retrospective case series was performed for all patients undergoing surgery for carotid pathology from 2011-2019 that could not be approached with standard cervical incision. The primary predictor variable was high-anatomic carotid pathology necessitating DMO. The primary outcome variable was early and late complications sustained by patients. Results: Fifteen patients met study criteria and underwent 16 DMOs to access high-carotid pathology including carotid stenosis (n = 8 patients), carotid aneurysm (n = 2 patients), and carotid body tumor (n = 8 patients). Two patients had dual ipsilateral pathology with one patient having both carotid artery stenosis and aneurysm, and the other patient diagnosed with carotid artery stenosis and carotid body tumor. One patient had bilateral carotid artery stenosis, each requiring high anatomic exposure for treatment. Early complications occurred in 8 patients.
Five patients experienced significant dysphagia requiring enteral feeding, and 2 patients developed malocclusion directly related to the double mandibular osteotomy. One patient experienced contralateral cortical watershed infarcts. Late complications included one patient developing osteomyelitis of the mandible, and this patient also developed distal mandibular segment screw exposure. The comparison of the outcome groups for categorical predictor variables using Fisher's exact test detected no statistically significant differences for gender, hypertension, hyperlipidemia, type 2 diabetes, chronic obstructive pulmonary disease, tobacco use, chronic kidney disease, or cerebrovascular disease. For the continuous variable comparisons, independent-samples t-tests detected no difference between the complication groups for age, operative time, or years of follow-up. No significant differences were found between the groups for body mass index or intraoperative blood loss. Conclusions: The double mandibular osteotomy provides excellent exposure and surgical access to the distal internal carotid artery for repair of vascular pathology with acceptable outcomes and long-term complications compared with previously reported techniques. Because of the early complications realized with the DMO, we recommend the procedure for symptomatic patients with a high risk of failing medical therapy alone and not appropriate for endovascular treatment as well as those patients with tumors requiring surgical intervention. abstract_id: PUBMED:16327271 Mandibular access osteotomies in oral cancer. Purpose: Mandibulotomy is a recognised surgical approach to tumours of the oral cavity and oropharynx. However, the nature and frequency of complications associated with this procedure varies in many studies. We report our experience with access osteotomies, with special reference to radionecrotic complications. Procedures: A retrospective study of 18 consecutive patients undergoing planned mandibular osteotomies as part of the ablative surgery was performed. Various flaps were used for tissue closure: myocutaneous flap (n = 8), nasolabial flap (n = 1) and radial forearm free vascular flap (n = 6). Primary closure using remaining mucosa was achieved in 3 patients. Results: Complete healing of the osteotomy was noted in 13 patients, and 5 patients (27%) developed major wound healing problems and osteoradionecrosis. Dental problems with resorption of the alveolar bone crest and loosening of the teeth adjacent to the osteotomy were seen in 7 patients (39%). Conclusions: Adequate soft-tissue closure is as important for a successful outcome as the type of osteotomy and fixation method applied. Extraction of the incisors is recommended for easier and safer tissue adaptation, especially when flaps are used for coverage and the osteotomy site will be included in the radiation field. abstract_id: PUBMED:27923112 The Double Mandibular Osteotomy for Vascular and Tumor Surgery of the Parapharyngeal Space. Purpose: The purposes of this study are to describe our experience using a double mandibular osteotomy for access to the parapharyngeal space in vascular and tumor surgery and to report on the outcomes and complications of this procedure. Patients And Methods: We designed and implemented a case series to review the medical records of all patients treated with a double mandibular osteotomy for parapharyngeal space access from 1994 to 2016. 
Patient demographic characteristics, indications for the procedure, outcomes, and complications were recorded. Results: A total of 17 patients underwent a double mandibular osteotomy procedure for access to the parapharyngeal space during the study period. There were 7 men (41%) and 10 women (59%) comprising the cohort. The average age was 57 years (range, 29 to 75 years). The follow-up period ranged from 6 to 98 months (mean, 40 months), and 7 patients (41%) were tobacco users at the time of surgery. The most common indication was high internal carotid artery stenosis (n = 6) followed by carotid body paraganglioma (n = 3). Average blood loss was 186 mL, and there were no deaths during the study period. Eight postoperative complications were noted in 7 patients (41%). No procedures were aborted or compromised because of inadequate parapharyngeal space access. All patients showed clinical and radiographic signs of healing of the osteotomy sites. Conclusion: The double mandibular osteotomy provides adequate access to the parapharyngeal space for effective tumor removal and high carotid surgical intervention with acceptable patient morbidity and complications. abstract_id: PUBMED:22381299 Transcervical double mandibular osteotomy approach to the infratemporal fossa. Background: In this study, we propose an alternative to the traditional transmandibular lower lip and chin splitting approach for exposing high infratemporal fossa and parapharyngeal space lesions involving the carotid canal and jugular foramen. Methods: We present 2 cases of high skull base tumors removed transcervically with anterior and posterior segmental mandibulotomies preserving the mental nerve without the use of a lip or chin incision. Results: Making the posterior osteotomy in an inverted L configuration is necessary so that the coronoid process does not prevent rotation of the mandible out of the visual field. Both patients had complete tumor resection with access to the carotid canal and jugular foramen and functional preservation of the mental nerve and marginal branch of the facial nerve. Neither patient had malocclusion or other dental complications from the approach. Conclusions: This novel technique is useful for providing excellent access to high infratemporal fossa or parapharyngeal space tumors. It avoids the traditional chin or lip incision and preserves the mental and facial nerves and is a useful procedure in the armamentarium of skull base/cerebrovascular neurosurgeons. abstract_id: PUBMED:36583200 Two cases of parapharyngeal space tumor resected by a double split mandibular osteotomy technique. Parapharyngeal space tumors cause few subjective symptoms and often grow large before diagnosis; therefore, mandibular transection may be needed to obtain a wider field of view during surgery. However, if a median lower lip incision is performed for the mandibular transection, esthetic problems occur after surgery. Here, we report two cases of parapharyngeal space tumors that were removed with a mandibular lateral segment-osteotomy technique without a median lower lip incision to avoid esthetic problems. Case 1 was a 49-year-old woman. She was aware of a right tonsillar swelling, and an imaging test revealed a tumor lesion 60 mm in size in the right parapharyngeal space. Case 2 was a 40-year-old woman with an abnormal position of the uvula, and an imaging test showed a 45-mm tumor lesion in the left parapharyngeal space.
Both cases were diagnosed as a pleomorphic adenoma, and surgery under general anesthesia was performed jointly with otolaryngology and oral surgery. The incision was performed from the lower part of the right auricle to the anterior part of the submandibular area. After the tumor resection, the mandible was repositioned, fixed by plates, and the intermaxillary fixation was performed with a surgical stent. In both cases, slight paralysis of the mandibular branch of the facial nerve and the mental nerve was observed after the operation, but both improved rapidly. One year after the operation, the plates were removed. There have been no recurrences to date. abstract_id: PUBMED:24531251 One-stage treatment to condylar osteochondroma combined with secondary jaw bone deformities by sliding vertical ramus osteotomy and mandibular contouring. Osteochondroma of the mandibular condyle can cause secondary facial asymmetry; in addition, it can affect the function of the temporomandibular joint (TMJ) and occlusion. Conventional treatments include total condylectomy or local resection of the lesion and condylar reconstruction, but these alone cannot effectively ameliorate the facial asymmetry. Thus, this article presents the application of combination surgeries including condylectomy, sliding vertical ramus osteotomy, and mandibular contouring in the treatment of this disease. From 2007 to 2012, 12 patients with osteochondroma of the mandibular condyle were included. All of them underwent condylectomy, sliding vertical ramus osteotomy, and mandibular contouring at 1 stage. Mandibular contouring included mandibular inferior border ostectomy, mandibular outer cortex ostectomy, and horizontal osteotomy genioplasty according to the characteristics of the jaw deformity. In addition, maximal mouth opening, pain in the TMJ, and numbness of the lower lip were recorded preoperatively and postoperatively to evaluate the therapeutic effects. All patients obtained satisfactory results; TMJ dysfunction, facial asymmetry, and abnormal occlusion were improved greatly. No patients developed recurrence of the tumor during the follow-up period. In conclusion, condylectomy, sliding vertical ramus osteotomy, and mandibular contouring performed at 1 stage effectively ameliorated the condylar osteochondroma patients' facial asymmetry and abnormal occlusion and achieved good therapeutic effects. abstract_id: PUBMED:12804536 Double mandibular osteotomy with coronoidectomy for tumours in the parapharyngeal space. Removal of deep-lobe parotid tumours from the parapharyngeal space is often difficult because of limited surgical access and the critical vascular and neurologic structures nearby. Mandibulotomy, when necessary, is useful for providing wider visibility and control of the vascular bundle and facial nerve, but may cause damage to the inferior dental and lingual nerves. The double mandibular osteotomy with coronoidectomy gives excellent access and avoids damage to these nerves. Answer: Mandibular access osteotomy is a surgical procedure used to facilitate the resection of tumors in the oral and oropharyngeal space. The technique of the osteotomy, particularly in terms of tissue preservation, can have an impact on the healing outcome post-surgery.
A study that evaluated whether preserving the periosteum during a mandibulotomy would decrease postoperative complications found that there were more persistent complications in the subperiosteal group compared with the supraperiosteal group at 12-month follow-up. This suggests that a more tissue-preserving surgical technique, such as the supraperiosteal approach, could promote better mandibular healing in patients undergoing mandibular access osteotomy in combination with radiotherapy (PUBMED:27131029). Additionally, various techniques for mandibular access osteotomy have been described, each with its own advantages and potential complications. For instance, the mandibular swing approach is highlighted for its ability to provide access to both the neck and skull base, which is not possible with other facial osteotomies (PUBMED:33639503). A single, subcondylar mandibular osteotomy has been reported as a simple, fast approach with minimal morbidity that preserves the submandibular gland and avoids lip-splitting and post-operative intermaxillary fixation (PUBMED:20214838). The double mandibular osteotomy (DMO) is another technique used to obtain additional exposure to high-carotid pathology. It has been reported to provide excellent exposure and surgical access with acceptable outcomes and long-term complications compared with previously reported techniques (PUBMED:32866578, PUBMED:27923112). In conclusion, the choice of mandibular access osteotomy technique can affect the healing outcome, and more tissue-preserving techniques may lead to fewer postoperative complications and better healing. However, the specific technique chosen should be tailored to the individual patient's needs, the location and extent of the tumor, and the surgeon's experience and preference.
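For readers who want to check the "not statistically significant" claim in the periosteum-preservation study (PUBMED:27131029), the persistent-complication counts reported in the abstract (6 of 16 subperiosteal vs. 1 of 16 supraperiosteal) form a 2x2 table suitable for Fisher's exact test, the test the study itself used. The following Python sketch is a minimal illustration; the abstract does not state whether a one- or two-sided test was applied, so the two-sided choice here is an assumption.

    from scipy.stats import fisher_exact

    # Rows: surgical technique; columns: persistent complications (yes, no).
    table = [[6, 10],   # subperiosteal: 6 of 16 with persistent complications
             [1, 15]]   # supraperiosteal: 1 of 16
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(odds_ratio, p_value)  # p comes out around 0.08, i.e., above 0.05

This is consistent with the abstract's conclusion that the difference, while suggestive, did not reach statistical significance in a sample of 32 patients.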
Instruction: Awakening after cardiac arrest and post resuscitation hypothermia: are we pulling the plug too early? Abstracts: abstract_id: PUBMED:24231569 Awakening after cardiac arrest and post resuscitation hypothermia: are we pulling the plug too early? Background: Time to awakening after out-of-hospital cardiac arrest (OHCA) and post-resuscitation therapeutic hypothermia (TH) varies widely. We examined the time interval from when comatose OHCA patients were rewarmed to 37°C to when they showed definitive signs of neurological recovery and tried to identify potential predictors of awakening. Methods: With IRB approval, a retrospective case study was performed in OHCA patients who were comatose upon presentation to a community hospital during 2006-2010. They were treated with TH (target of 33°C) for 24h, rewarmed, and discharged alive. Comatose patients were generally treated medically after TH for at least 48h before any decision to withdraw supportive care was made. Pre-hospital TH was not used. Data are expressed as medians and interquartile range. Results: The 89 patients treated with TH in this analysis were divided into three groups based upon the time between rewarming to 37°C and regaining consciousness. The 69 patients that regained consciousness in ≤48h after rewarming were termed "early-awakeners". Ten patients regained consciousness 48-72h after rewarming and were termed "intermediate-awakeners". Ten patients remained comatose and apneic >72h after rewarming but eventually regained consciousness; they were termed "late-awakeners". The ages for the early, intermediate and late awakeners were 56 [49,65], 62 [48,74], and 58 [55,65] years, respectively. Nearly 67% were male. Following rewarming, the time required to regain consciousness for the early, intermediate and late awakeners was 9 [2,18] (range 0-47), 60.5 [56,64.5] (range 49-71), and 126 [104,151]h (range 73-259), respectively. Within 90 days of hospital admission, favorable neurological function based on a Cerebral Performance Category (CPC) score of 1 or 2 was reported in 67/69 early, 10/10 intermediate, and 8/10 late awakeners. Conclusion: Following OHCA and TH, arbitrary withdrawal of life support <48h after rewarming may prematurely terminate life in many patients with the potential for full neurological recovery. Additional clinical markers that correlate with late awakening are needed to better determine when withdrawal of support is appropriate in OHCA patients who remain comatose >48h after rewarming. abstract_id: PUBMED:29371115 Time to awakening after cardiac arrest and the association with target temperature management. Aim: Target temperature management (TTM) at 32-36 °C is recommended in unconscious survivors of cardiac arrest. This study reports awakening in the TTM-trial. Our predefined hypotheses were that time until awakening correlates with long-term neurological outcome and is not affected by level of TTM. Methods: Post-hoc analysis of time until awakening after cardiac arrest, its association with long-term (180-days) neurological outcome and predictors of late awakening (day 5 or later). The trial randomized 939 comatose survivors to TTM at 33 °C or 36 °C with strict criteria for withdrawal of life-sustaining therapies. Administered sedation in the treatment groups was compared. Awakening was defined as a Glasgow Coma Scale motor score 6. Results: 496 patients had registered day of awakening in the ICU, another 43 awoke after ICU discharge.
Good neurological outcome was more common in early (275/308, 89%) vs late awakening (142/188, 76%), p < 0.001. Awakening occurred later in TTM33 than in TTM36 (p = 0.002) with no difference in neurological outcome, or cumulative doses of sedative drugs at 12, 24 or 48 h. TTM33 (p = 0.006), clinical seizures (p = 0.004), and lower GCS-M on admission (p = 0.03) were independent predictors of late awakening. Conclusion: Late awakening is common and often has a good neurological outcome. Time to awakening was longer in TTM33 than in TTM36; this difference could not be attributed to differences in sedative drugs administered during the first 48 h. abstract_id: PUBMED:26784133 Awakening following cardiac arrest: Determined by the definitions used or the therapies delivered? Aims: To investigate patterns of neurologic "awakening" in out-of-hospital cardiac arrest (OHCA) patients using different criteria for prognostication post-arrest. Methods: Data were collected on 194 OHCA survivors to hospital admission. Patients were assigned to one of two groups based on whether they received therapeutic hypothermia (TH). Three separate criteria were used to assess neurologic "awakening": motor-GCS=6, total-GCS ≥ 9, and CPC=1 or 2. Demographics, arrest characteristics and intensive care events were compared using unpaired t-test, Chi-square or nonparametric Wilcoxon rank-sum test as appropriate. Primary outcome was the time from arrest to neurologic awakening. Results: Of 194 OHCA survivors, TH was implemented in 94 patients (48%). Compared to conventional care patients, hypothermia-treated patients were more likely to be younger (58 vs. 69 years, p<0.01) and to have a shockable arrest rhythm (27% vs. 10%, p<0.01). Using the three criteria (m-GCS=6, t-GCS ≥ 9 & CPC=1 or 2), median times to awakening for patients in the hypothermia group versus the conventional therapy group were 6 [4,9] vs. 3 [2,5] days, 3 [3,5] vs. 2 [2,3] days, and 3 [3,6] vs. 2 [2,4] days, respectively (all p<0.01), and prognostication using these criteria on day 3 yielded discordant results about which patients achieved awakening. Conclusions: Patients undergoing therapeutic hypothermia achieve meaningful neurologic "awakening" beyond 72 h post-arrest. Use of different criteria for the assessment of neurologic "awakening" can yield different prognostication predictions, which calls for standardization and validation of a single definition of "awakening" by the resuscitation community. abstract_id: PUBMED:27036662 Influence of the temperature on the moment of awakening in patients treated with therapeutic hypothermia after cardiac arrest. Introduction: Target temperature management (TTM) has been shown to reduce brain damage after an out-of-hospital cardiac arrest (CA), but the time to neurological recovery is not defined yet. We sought to determine the time these patients need to regain consciousness, as well as factors associated with a late post-arrest awakening. Methods: We performed a retrospective analysis of patients cooled to 32-34°C for 24h after CA, who regained neurological responsiveness after rewarming. We measured the time until awakening, defined as obedience to verbal commands. Results: We included 163 CA survivors (84.7% male, mean age 60.2 years) who regained consciousness after TTM: target temperature was either 32°C (36.2%), 33°C (56.4%) or 34°C (6.7%). Mean time to awakening was 3.8 days. Thirty-four patients (20.9%) regained neurological responsiveness more than 5 days after CA.
All of them had been cooled to either 32°C (18 patients) or 33°C (16), and no patient cooled to 34°C awakened after day 5. A lower target temperature was associated with a later awakening (p<0.001). The time to advanced cardiopulmonary resuscitation (CPR) was shorter among the early awakeners (p=0.04), but we found no other predictors of an earlier awakening. Conclusions: A high proportion of CA survivors treated with TTM regained consciousness later than 5 days after CA, and cooling to a lower target temperature may influence late neurological recovery. Therefore, withdrawal of life-supporting treatment should be delayed to more than 5 days in patients cooled to 33°C or less. Time to advanced CPR was found to be a predictor of early awakening. abstract_id: PUBMED:37178902 Hypothermia versus normothermia after out-of-hospital cardiac arrest; the effect on post-intervention serum concentrations of sedatives and analgesics and time to awakening. Background: This study investigated the association of two levels of targeted temperature management (TTM) after out-of-hospital cardiac arrest (OHCA) with administered doses of sedative and analgesic drugs, serum concentrations, and the effect on time to awakening. Methods: This substudy of the TTM2-trial was conducted at three centers in Sweden, with patients randomized to either hypothermia or normothermia. Deep sedation was mandatory during the 40-hour intervention. Blood samples were collected at the end of TTM and end of protocolized fever prevention (72 hours). Samples were analysed for concentrations of propofol, midazolam, clonidine, dexmedetomidine, morphine, oxycodone, ketamine and esketamine. Cumulative doses of administered sedative and analgesic drugs were recorded. Results: Seventy-one patients were alive at 40 hours and had received the TTM-intervention according to protocol. 33 patients were treated at hypothermia and 38 at normothermia. There were no differences in cumulative doses or concentrations of sedatives/analgesics between the intervention groups at any timepoint. Time until awakening was 53 hours in the hypothermia group compared to 46 hours in the normothermia group (p = 0.09). Conclusion: This study of OHCA patients treated at normothermia versus hypothermia found no significant differences in dosing or concentration of sedatives or analgesic drugs in blood samples drawn at the end of the TTM intervention, or at end of protocolized fever prevention, nor the time to awakening. abstract_id: PUBMED:34102268 Impact of benzodiazepines on time to awakening in post cardiac arrest patients. Aim: Although guidelines recommend use of short acting sedation after cardiac arrest, there is significant practice variation. We examined whether benzodiazepine use is associated with delayed awakening in this population. Methods: We performed a retrospective single center study including comatose patients treated after in- or out-of-hospital cardiac arrest from January 2010 to September 2019. We excluded patients who awakened within 6 h of arrest, those who arrested due to trauma or neurological event, those with nonsurvivable primary brain injury and those with refractory shock. Our primary exposure of interest was high-dose benzodiazepine (>10 mg of midazolam equivalents per day) administration in the first 72-h post arrest. Our primary outcome was time to awakening. We used Cox regression to test for an independent association between exposure and outcome after controlling for biologically plausible covariates.
Results: Overall, 2778 patients presented during the study period; 621 met inclusion criteria, and 209 (34%) awakened after a median of 4 [IQR 3-7] days. Patients who received high-dose benzodiazepines awakened later than those who did not (5 [IQR 3-11] vs. 3 [IQR 3-6] days, P = 0.004). In adjusted regression, high-dose benzodiazepine exposure was independently associated with delayed awakening (adjusted hazard ratio 0.63 (95% CI 0.43-0.92)). Length of stay, awakening to discharge, and duration of mechanical ventilation were similar across groups. Conclusion: High-dose benzodiazepine exposure is independently associated with delayed awakening in comatose survivors of cardiac arrest. abstract_id: PUBMED:30303838 Late Awakening in Survivors of Postanoxic Coma: Early Neurophysiologic Predictors and Association With ICU and Long-Term Neurologic Recovery. Objectives: To examine neurophysiologic predictors and outcomes of patients with late awakening following cardiac arrest. Design: Observational cohort study. Setting: Academic ICU. Patients: Adult comatose cardiac arrest patients treated with targeted temperature management and sedation. Interventions: None. Measurement And Main Results: Time to awakening was calculated starting from initial sedation stop following targeted temperature management and rewarming (median 34 hr from ICU admission). Two hundred twenty-eight of 402 patients (57%) awoke: late awakening (> 48 hr from sedation stop; median time to awakening 5 days [range, 3-23 d]) was observed in 78 subjects (34%). When considering single neurophysiologic tests, late awakening was associated with a higher proportion of discontinuous electroencephalography (21% vs 6% of early awakeners), absent motor and brainstem responses (38% vs 11%; 23 vs 4%, respectively), and serum neuron specific enolase greater than 33 ng/mL (23% vs 8%; all p < 0.01): no patient had greater than 2 unfavorable tests. By multivariable analysis (adjusting for cardiac arrest duration, Sequential Organ Failure Assessment score, and type of sedation), discontinuous electroencephalography and absent neurologic responses were independently associated with late awakening. Late awakening was more frequent with midazolam (58% vs 45%) and was associated with higher rates of delirium (62% vs 39%) and unfavorable 3-month outcome (27% vs 12%; all p = 0.005). Conclusions: Late awakening is frequent after cardiac arrest, despite early unfavorable neurophysiologic signs, and is associated with greater neurologic complications. Limiting benzodiazepines during targeted temperature management may accelerate awakening. Postcardiac arrest patients with late awakening had a high rate of favorable outcome, thereby supporting prognostication strategies relying on multiple rather than single tests and that allow sufficient time for outcome prediction. abstract_id: PUBMED:23916554 Time to awakening and neurologic outcome in therapeutic hypothermia-treated cardiac arrest patients. Introduction: Therapeutic hypothermia (TH) has been shown to improve outcomes in comatose Post-Cardiac Arrest Syndrome (PCAS) patients. It is unclear how long it takes these patients to regain neurologic responsiveness post-arrest. We sought to determine the duration to post-arrest awakening and factors associated with times to such responsiveness. Methods: We performed a retrospective chart review of consecutive TH-treated PCAS patients at three hospitals participating in a US cardiac arrest registry from 2005 to 2011.
We measured the time from arrest until first documentation of "awakening", defined as following commands purposefully. Results: We included 194 consecutive TH-treated PCAS patients; mean age was 57±16 years; 59% were male; 40% had an initial shockable rhythm. Mean cooling duration was 24±8h and mean rewarming time was 14±13h. Survival to discharge was 44%, with 78% of these discharged with a good neurologic outcome. Of the 85 patients who awakened, median time to awakening was 3.2 days (IQR 2.2, 4.5) post-cardiac arrest. Median time to awakening for a patient discharged in good neurological condition was 2.8 days (IQR 2.0, 4.5) vs. 4.0 days (IQR 3.5, 7.6) for those who survived to discharge without a good neurological outcome (p=0.035). There was no significant association between initial rhythm, renal insufficiency, paralytic use, post-arrest seizure, or location of arrest and time to awakening. Conclusion: In TH-treated PCAS patients, time to awakening after resuscitation was highly variable and often longer than three days. Earlier awakening was associated with better neurologic status at hospital discharge. abstract_id: PUBMED:27860555 Time to Awakening Is Often Delayed in Patients Who Receive Targeted Temperature Management After Cardiac Arrest. Post cardiac arrest, neuroprognostication remains a complex and clinically challenging issue for critical care providers. For this reason, our primary objective in this study was to determine the frequency of survival and favorable neurological outcomes in post-cardiac arrest patients with delayed time to awakening. To assess whether early withdrawal of care may adversely impact survival, we also sought to describe the time to withdrawal of care of non-surviving patients. We performed a retrospective study of patients resuscitated after cardiac arrest in two large academic community hospitals. We performed a structured chart review of patients treated with therapeutic hypothermia (TH) at one hospital from 2009 to 2015 and at a second hospital from 2013 to 2015. Demographics and Utstein style variables were recorded on all patients, as well as temporal variables to characterize the time interval from Return of Spontaneous Circulation (ROSC) to awakening as recorded by ICU nurses and defined as Glasgow Coma Scale (GCS) of >8. Descriptive data were also captured regarding time to withdrawal of care. We pre-hoc defined delayed awakening as >72 hours post ROSC or >72 hours post rewarming. Our primary outcome was survival to hospital discharge with a secondary outcome of a favorable cerebral performance category of 1 or 2. During this study period, 321 patients received TH, with 111 (34.6%) discharged alive and, of these, 67 (68.5%) experienced a good neurological outcome. Awakening more than 72 hours after return of circulation was common with 31 patients surviving to discharge. Of these, 16 of 31 (51.6%) were found to have a good neurological outcome on hospital discharge. Of the patients who died before discharge, 54 (29.5%) had care withdrawn less than 72 hours after ROSC. A delayed time to awakening is not infrequently associated with a good neurological outcome after TH in patients resuscitated from cardiac arrest.
We investigated the prevalence and the predictive factors of delayed awakening in comatose CA survivors treated with targeted temperature management (TTM). Methods: Retrospective analysis of the Parisian Region Out-of-Hospital CA Registry (2008-2013). In adult comatose CA survivors treated with TTM, sedated with midazolam and fentanyl, time to awakening was measured starting from discontinuation of sedation at the end of rewarming. Awakening was defined as delayed when it occurred after more than 48 h. Results: A total of 326 patients (71 % male, mean age 59 ± 16 years) were included, among whom 194 awoke. Delayed awakening occurred in 56/194 (29 %) patients, at a median time of 93 h (IQR 70-117) from discontinuation of sedation. In 5/56 (9 %) late awakeners, pupillary reflex and motor response were both absent 48 h after sedation discontinuation. In multivariate analysis, age over 59 years (OR 2.1, 95 % CI 1.0-4.3), post-resuscitation shock (OR 2.6 [1.3-5.2]), and renal insufficiency at admission (OR 3.1 [1.4-6.8]) were associated with significantly higher rates of delayed awakening. Conclusions: Delayed awakening is common among patients recovering from coma after CA. Renal insufficiency, older age, and post-resuscitation shock were independent predictors of delayed awakening. Presence of unfavorable neurological signs at 48 h after rewarming from TTM and discontinuation of sedation did not rule out recovery of consciousness in late awakeners. Answer: The question of whether life support is being withdrawn too early in patients who have experienced cardiac arrest and have undergone post-resuscitation hypothermia (TH) is a significant concern. Studies have shown that the time to awakening after out-of-hospital cardiac arrest (OHCA) and TH can vary widely, and arbitrary withdrawal of life support before 48 hours after rewarming may prematurely terminate life in patients with the potential for full neurological recovery (PUBMED:24231569). Research indicates that late awakening is common and can often result in a good neurological outcome. The time to awakening was found to be longer in patients treated with TH at 33°C compared to those treated at 36°C, but this difference could not be attributed to differences in sedative drugs administered during the first 48 hours (PUBMED:29371115). Furthermore, the use of different criteria for the assessment of neurologic "awakening" can yield different prognostication predictions, which calls for standardization and validation of a single definition of "awakening" (PUBMED:26784133). A retrospective analysis found that a high proportion of cardiac arrest survivors treated with TTM regained consciousness after 5 days, especially those cooled to lower target temperatures, suggesting that withdrawal of life-supporting treatment should be delayed to more than 5 days in patients cooled to 33°C or less (PUBMED:27036662). Additionally, a study found no significant differences in dosing or concentration of sedatives or analgesic drugs in blood samples drawn at the end of the TTM intervention, or at the end of protocolized fever prevention, nor in the time to awakening between patients treated at normothermia versus hypothermia (PUBMED:37178902). The use of high-dose benzodiazepines has been independently associated with delayed awakening in comatose survivors of cardiac arrest (PUBMED:34102268).
Moreover, late awakening is frequent after cardiac arrest and is associated with greater neurologic complications, with limiting benzodiazepines during targeted temperature management potentially accelerating awakening (PUBMED:30303838). In summary, the evidence suggests that a cautious approach should be taken before withdrawing life support in patients who have undergone TH after cardiac arrest, as awakening can be delayed and still result in favorable neurological outcomes.
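On the methodology: the benzodiazepine study cited above (PUBMED:34102268) modeled time to awakening with Cox regression, treating awakening as the event and reporting an adjusted hazard ratio of 0.63 for high-dose exposure. The Python sketch below shows what such an analysis might look like with the lifelines library; the file name and column names are hypothetical placeholders, not details from the study.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Assumed layout: days_to_awakening (follow-up time), awakened
    # (1 = awoke, 0 = censored, e.g., died or was discharged comatose),
    # plus the exposure and example covariates.
    df = pd.read_csv("post_arrest_cohort.csv")

    cph = CoxPHFitter()
    cph.fit(df[["days_to_awakening", "awakened",
                "high_dose_benzo", "age", "shockable_rhythm"]],
            duration_col="days_to_awakening", event_col="awakened")
    cph.print_summary()

In this setup a hazard ratio below 1 for the exposure variable means a lower instantaneous probability of awakening at any given time, i.e., delayed awakening, matching the direction of the reported result.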
Instruction: Retrieval rates of inferior vena cava (IVC) filters: are we retrieving enough? Abstracts: abstract_id: PUBMED:36976151 Direct superior vena cava puncture for inferior vena cava filter retrieval. Most inferior vena cava (IVC) filters are designed for retrieval from a superior approach. Retrieval becomes technically challenging when the central veins in the chest are occluded. In a patient with thrombosis of the bilateral brachiocephalic veins, the authors describe direct puncture of the superior vena cava (SVC) under fluoroscopy, followed by the successful retrieval of a fractured IVC filter using forceps. A snare inserted into the SVC via the common femoral vein was used as a radiopaque target for direct SVC puncture from the lower neck. Cone beam computed tomography and pullback tractography were used to confirm a safe access trajectory. Thus, direct SVC access may be used for filter retrieval in similar clinical scenarios. abstract_id: PUBMED:35737099 Classification System for Inferior Vena Cava (IVC) Appearance Following Percutaneous IVC Filter Retrieval. Objective: There is no classification system for describing inferior vena cava (IVC) injuries. The objective of this study was to develop a standardized grading system for venographic appearance of the IVC following percutaneous IVC filter retrieval. Methods: A classification system for the appearance of the IVC on cavograms following percutaneous IVC filter removal was developed consisting of two grading elements: luminal characteristics and extravasation. Luminal narrowing from 0% up to 50% from any cause is grade 1; narrowing between 50 and 99% is grade 2; occlusion is grade 3; and avulsion is grade 4. Absence of extravasation is grade A, contained extravasation is grade B, and free extravasation is grade C. This system was then applied retrospectively to pre- and post-IVC filter retrieval cavograms performed at a single institution from October 2004 through February 2019. Results: 546 retrieval attempts were identified with 509 (93.2%) filters successfully retrieved. 449 cases (88.2%) had both pre-retrieval and post-retrieval imaging appropriate for application of the proposed classification system. Inter-rater reliability was 0.972 for luminal characteristics, 0.967 for extravasation, and 0.969 overall. Consensus grading demonstrated a distribution of 97.3% grade 1, 1.3% grade 2, 1.3% grade 3, and 0.0% grade 4 for post-retrieval luminal characteristics. For extravasation classification, 96.4% of the cases were classified as grade A, 2.7% grade B, and 0.9% grade C. Conclusion: A classification system was developed for describing IVC appearance after IVC filter retrieval, and retrospectively validated using a single center dataset. abstract_id: PUBMED:36692006 Retrieval of a Cook Celect inferior vena cava (IVC) filter after prolonged dwell time of 5,117 days: Factors to consider for retrieval of long-dwell IVC filters. Inferior vena cava (IVC) filters are used in certain patients to help prevent the occurrence of pulmonary embolism (PE). IVC filters are generally recommended to be removed once PE prophylaxis is no longer required. Long-dwelling IVC filters are associated with higher complication rates (Vasa 2020; 49: 449), being more difficult to retrieve and associated with higher retrieval complications (Cardiovasc Diagn Ther 2016; 6: 632).
This report describes the pre-procedural work-up and removal of an IVC filter with a prolonged dwell time of 5,117 days (14 years, 3 days) using the loop snare advanced retrieval technique. As far as the authors are aware, this case is the longest-described successful retrieval of a Cook Celect IVC filter at 5,117 days. abstract_id: PUBMED:30474511 Factors Affecting Inferior Vena Cava Filter Retrieval: A Review. Objective: Over the last 2 decades, there has been an exponential rise in placement of retrievable inferior vena cava (IVC) filters, while the retrieval rate has remained steadily low. Approaches to increasing filter retrieval rates have been extensively studied. Conclusion: This review presents an up-to-date review of reported data-driven variables that affect retrieval rates of IVC filters, with a focus on clinical, technical, and process factors. abstract_id: PUBMED:32660360 Inferior vena cava filter - comprehensive overview of current indications, techniques, complications and retrieval rates. The inferior vena cava (IVC) filter has been used to manage patients with pulmonary embolism and deep venous thrombosis. Its ease of use and the expansion of relative indications have led to a dramatic increase in IVC filter placement. However, IVC filters have been associated with a plethora of complications. Therefore, there exists a need to examine the current indications and identify the patient population at risk. In this paper, we comprehensively reviewed the current indications and techniques of IVC filter placement. Further, we examined the various complications associated with either permanent or retrievable IVC filters. Lastly, we examined the current data on filter retrieval. abstract_id: PUBMED:27196734 Observations on failed retrieval of optional inferior vena cava filters. Purpose: To evaluate causes of failed optional inferior vena cava filter (IVCF) retrievals. Methods: Single-center retrospective study. Results: IVCF retrievals were attempted in 26/211 (12%) patients at a mean 42.9 days. There were 9 failures (all OptEase) due to: inability to snare the hook (n=5), noncollapsible IVCF (n=3), and unusual procedural pain (n=1). Median duration of retrieved IVCFs was 31 days compared to 53 days for failures (P<.05). IVCFs aligned with the IVC's cephalocaudal axis were retrieved in 13/16 cases, while misaligned IVCFs were retrieved in 4/10 cases (P<.05). Conclusion: Filter duration and misalignment were significantly associated with retrieval failures. abstract_id: PUBMED:27489697 Total laparoscopic retrieval of inferior vena cava filter. While there is some local variability in the use of inferior vena cava filters and there has been some evolution in the indications for filter placement over time, inferior vena cava filters remain a standard option for pulmonary embolism prophylaxis. Indications are clear in certain subpopulations of patients, particularly those with deep venous thrombosis and absolute contraindications to anticoagulation. There are, however, a variety of reported inferior vena cava filter complications in the short and long term, making retrieval of the filter desirable in most cases. Here, we present the case of a morbidly obese patient complaining of chronic abdominal pain after inferior vena cava filter placement and malposition of the filter with extensive protrusion outside the inferior vena cava. She underwent successful laparoscopic retrieval of her malpositioned inferior vena cava filters after failure of a conventional endovascular approach.
abstract_id: PUBMED:28674355 The Successful Removal of Two Retrievable Inferior Vena Cava (IVC) Filters after 67 Days in a Patient with a Double IVC. We herein present a case in which two retrievable inferior vena cava (IVC) filters, which were implanted to treat deep-vein thrombosis caused by the compression of a double IVC, were successfully removed on the 67th day after placement. The filters were individually placed in both the left and right IVCs. With a prevalence of only 0.2%, a double IVC is an extremely rare anatomical variation. The long-term effects of IVC filters are unknown, and the placement of a filter potentially introduces the risk of complications. Thus, if the patient's clinical condition allows, the endovascular retrieval of the filter should be considered within a few months after implantation. abstract_id: PUBMED:36439536 Case report: Endoluminal removal of a retrievable conical inferior vena cava filter with a ruptured retraction hook attached to the wall. We report the case of a patient who underwent endovascular retrieval of a conical inferior vena cava (IVC) filter with a ruptured retraction hook that was attached to the IVC wall. A 21-year-old woman with a Celect (Cook) filter, implanted 1,522 days prior, requested retrieval. Preoperative ultrasound and CT examinations showed that the filter was inclined, the retraction hook was attached to the IVC wall, and one of the filter's pedicles was broken. The inferior vena cava was patent, with no thrombus. Old superficial femoral vein thrombosis could be seen in the right lower extremity. The filter retrieval equipment (Gunther Tulip, Cook) failed to capture the retraction hook. By means of a pigtail catheter (with a partly removed catheter tip) and loach guidewire, we applied a modified loop-snare technique to successfully cut the proliferative tissue near the tip of the retraction hook, by which the hook re-entered the inferior vena cava. Although the snare successfully captured the retraction hook and retrieved the filter, the broken pedicle was retained in the inferior vena cava. We used forceps to capture and pull it to the distal end. In the end, the inferior vena cava became patent, with no contrast agent spillage or residual, and no symptomatic pulmonary embolization. A simultaneous occurrence of oblique adherence and fracture is rarely found in the same filter; however, by using the modified loop-snare technique and biopsy forceps technique, we successfully retrieved the filter and broken pedicle. Our case provides a practical auxiliary technique for regular clinical practice. abstract_id: PUBMED:28413581 Robotic-Assisted Inferior Vena Cava Filter Retrieval. Although anticoagulation remains the mainstay of therapy for patients with venous thromboembolism, guidelines recommend the use of inferior vena cava (IVC) filters in those who fail anticoagulation or have contraindications to its use. Short-term use of filters has proven effective in reducing the rate of pulmonary embolism. However, their extended use is associated with a variety of complications such as thrombosis, filter migration, or caval perforation, thus making a case for timely filter retrieval. This is the case of a 68-year-old female with a history of chronic oral anticoagulation use for multiple deep venous thrombi (DVT) and pulmonary emboli (PE) who required cervical and thoracic spinal intervention for spondylosis and foramina stenosis. 
Given her increased risk of recurrent DVT and PE perioperatively, we elected to place a Cook Celect™ IVC filter (Cook Medical, Bloomington, IN) after oral anticoagulation was stopped for the procedure. Her treatment course was prolonged due to wound-healing complications. We elected to use the Magellan Robotic Catheter System (Hansen Medical, Mountain View, CA) for filter retrieval when she presented 6 months later with caval perforation from the filter struts. With its ease of use, superior mechanical stability, and maneuverability, robot-assisted IVC filter retrieval may be a safer and more reliable substitute for traditional navigation techniques when presented with challenging filter retrievals. Answer: The literature suggests that while the placement of inferior vena cava (IVC) filters has increased dramatically, the retrieval rates have remained relatively low (PUBMED:30474511). This discrepancy is concerning given the variety of complications associated with both permanent and retrievable IVC filters, which makes retrieval desirable in most cases (PUBMED:32660360). Complications can include thrombosis, filter migration, or caval perforation (PUBMED:28413581), and long-dwelling IVC filters are associated with higher complication rates and more difficult retrieval processes (PUBMED:36692006). Factors affecting IVC filter retrieval include clinical, technical, and process factors (PUBMED:30474511). For instance, filter duration and misalignment have been significantly associated with retrieval failures (PUBMED:27196734). Additionally, unusual anatomical variations, such as a double IVC, can present unique challenges for filter retrieval (PUBMED:28674355). Innovative techniques have been developed to address challenging retrieval scenarios, such as direct superior vena cava (SVC) puncture for cases where central veins in the chest are occluded (PUBMED:36976151), laparoscopic retrieval for malpositioned filters (PUBMED:27489697), and robotic-assisted retrieval for cases with complications like caval perforation (PUBMED:28413581). These techniques demonstrate the evolving strategies to improve retrieval rates. Despite these advancements, the need for a standardized approach to assess IVC injuries post-retrieval led to the development of a classification system for describing IVC appearance after filter retrieval, which was retrospectively validated using a single-center dataset (PUBMED:35737099). This system may help in better understanding and documenting the outcomes of retrieval attempts. Overall, the literature indicates that there is a gap between the number of IVC filters placed and the number retrieved, suggesting that more effort may be needed to increase retrieval rates and reduce the risk of long-term complications associated with IVC filters.
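Because the cavogram grading system described above (PUBMED:35737099) is rule-based, it can be written down directly as a small function, which makes the thresholds explicit. This Python sketch follows the abstract's definitions; the function name, the handling of the exact 50% boundary, and the avulsion flag are illustrative choices not specified in the abstract.

    def grade_cavogram(narrowing_pct, avulsion=False, extravasation="none"):
        # Luminal component: <50% narrowing -> 1, 50-99% -> 2,
        # occlusion (100%) -> 3, avulsion -> 4. The abstract's wording is
        # ambiguous at exactly 50%; this sketch assigns 50% to grade 2.
        if avulsion:
            luminal = 4
        elif narrowing_pct >= 100:
            luminal = 3
        elif narrowing_pct >= 50:
            luminal = 2
        else:
            luminal = 1
        # Extravasation component: none -> A, contained -> B, free -> C.
        letter = {"none": "A", "contained": "B", "free": "C"}[extravasation]
        return f"{luminal}{letter}"

    print(grade_cavogram(30))                             # "1A"
    print(grade_cavogram(60, extravasation="contained"))  # "2B"

Under the study's consensus grading, the overwhelming majority of cases would map to "1A" by these rules, reflecting how rarely significant caval injury was observed after retrieval.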
Instruction: Does acute maternal stress in pregnancy affect infant health outcomes? Abstracts: abstract_id: PUBMED:37078447 Maternal mental health mediates the effect of prenatal stress on infant temperament: The Harvey Mom Study. Prenatal maternal stress and mental health problems are known to increase risk for developmental psychopathology in offspring, yet pathways leading to risk or resiliency are poorly understood. In a quasi-experimental design, we prospectively examined associations between disaster-related prenatal stress, maternal mental health symptoms, and infant temperament outcomes. Mothers who were pregnant during Hurricane Harvey (N = 527) reported on objective hardships (e.g., loss of belongings or income, evacuation, home flooding) related to the storm and subsequent mental health symptoms (anxiety/depression, posttraumatic stress) across time. At a postpartum assessment, mothers reported on their infant's temperament (negative affect, positive affect, orienting/regulatory capacity). Greater objective hardship indirectly predicted higher levels of infant orienting/regulatory capacity through its association with increased maternal posttraumatic stress symptoms. Greater objective hardship also indirectly predicted higher levels of infant negative affect through its association with increased maternal anxiety/depression symptoms across time. Our findings suggest a psychological mechanism linking prenatal stress with specific temperamental characteristics via maternal mental health symptoms. Findings point to the importance of high-quality assessment and mental health services for vulnerable women and young children. abstract_id: PUBMED:30454636 Epigenetics, maternal prenatal psychosocial stress, and infant mental health. This paper provides a summary of literature on epigenetic effects and infant health outcomes of maternal psychosocial stress during pregnancy. A search of literature yielded a large body of publications between 2008 and 2018. Relevant articles were selected, and additional sources were located from ancestry searches of reference lists. Results implicate maternal prenatal stress as a source of epigenetic mechanisms that affect fetal brain development and program risk for emotional dysregulation and mental disorders over a lifetime and across generations. Implications for nursing practice are explored at multiple levels of policy advocacy, public education, primary prevention, screening and intervention. abstract_id: PUBMED:31227075 Reprint of "Epigenetics, maternal prenatal psychosocial stress, and infant mental health". This paper provides a summary of literature on epigenetic effects and infant health outcomes of maternal psychosocial stress during pregnancy. A search of literature yielded a large body of publications between 2008 and 2018. Relevant articles were selected, and additional sources were located from ancestry searches of reference lists. Results implicate maternal prenatal stress as a source of epigenetic mechanisms that affect fetal brain development and program risk for emotional dysregulation and mental disorders over a lifetime and across generations. Implications for nursing practice are explored at multiple levels of policy advocacy, public education, primary prevention, screening and intervention. abstract_id: PUBMED:36508054 Maternal Early Life Adversity and Infant Stress Regulation: Intergenerational Associations and Mediation by Maternal Prenatal Mental Health. 
Early life adversity is a potent risk factor for poor mental health outcomes across the lifespan, including offspring vulnerability to psychopathology. Developmentally, the prenatal period is a sensitive window in which maternal early life experiences may influence offspring outcomes and demarcates a time when expectant mothers and offspring are more susceptible to stressful and salutary influences. This prenatal plasticity constituted the focus of the current study, where we tested the association of maternal early life adversity with infant stress regulation through maternal prenatal internalizing symptoms and moderation by prenatal social support. Mother-infant dyads (n = 162) were followed prospectively, and mothers completed assessments of social support and depressive and anxiety symptoms across pregnancy. Infants completed standardized stress paradigms at one month and six months. There were several key findings. First, maternal prenatal depressive symptoms significantly mediated predictions of infant cortisol reactivity to the heel stick at one month from maternal early life adversity: specifically, maternal early life adversity positively predicted depressive symptoms in pregnancy, which in turn predicted dampened infant cortisol reactivity. Second, prenatal social support did not significantly moderate predictions of depressive or anxiety symptoms in pregnancy from maternal early life adversity, nor did it alter the associations of maternal depressive or anxiety symptoms with infant stress regulation. These results suggest that maternal prenatal mental health is a key mechanism by which maternal early life adverse experiences affect offspring risk for psychopathology. We discuss potential clinical and health implications of dysregulated infant cortisol reactivity with respect to lifespan development. abstract_id: PUBMED:28898707 The effect of prenatal substance use and maternal contingent responsiveness on infant affect. Background: The effects of prenatal substance exposure on neurobehavioral outcomes are inherently confounded by the effects of the postnatal environment, making it difficult to disentangle their influence. The goal of this study was to examine the contributing effects of prenatal substance use and parenting style (operationalized as contingent responding during the play episodes of the Still-face paradigm [SFP]) on infant affect. Methods: A prospective cohort design was utilized with repeated assessment of substance use during pregnancy and the administration of the SFP, which measures infant response to a social stressor, at approximately 6 months of age. Subjects included 91 dyads classified into four groups: 1) Control (n=34); 2) Medication assisted therapy for opioid dependence (MAT; n=19); 3) Alcohol (n=15); 4) Alcohol+MAT (n=23). Mean % of positive infant affect and mean % of maternal responsiveness (watching, attention seeking, and contingent responding) were compared among the five SFP episodes across the four study groups by MANOVA. Mixed effects modelling was used to estimate the contributing effects of the study groups and maternal responsiveness on infant affect. Results: Maternal contingent responding was associated with an increase (β̂=0.84; p<0.0001) and attention seeking with a decrease (β̂=-0.78; p<0.0001) in infant positive affect. The combined effect of prenatal exposures and covariates explained 15.8% of the variability in infant positive affect, while the model including contingent responding and covariates explained 67.1% of the variability.
Conclusions: Higher maternal responsiveness was a much stronger predictor of infant behavior than prenatal exposures, providing the basis for future intervention studies focusing on specific parenting strategies. abstract_id: PUBMED:36794393 In the pandemic from the womb. Prenatal exposure, maternal psychological stress and mental health in association with infant negative affect at 6 months of life. For mother-infant health especially, the pandemic has brought multiple stressors inside a susceptible psychobiological system. We study the longitudinal associations between maternal prenatal and postpartum: (a) COVID-19 stressful events exposure, (b) pandemic psychological stress, and (c) mental health and infants' negative affect. A sample of 643 Italian pregnant women completed a web-based survey from April 8th to May 4th, 2020, and a follow-up at 6 months after delivery. Maternal assessment covered prenatal and postpartum measures for: COVID-19 stressful events exposure, pandemic psychological stress, mental health symptoms (i.e., depression, anxiety, posttraumatic stress disorder) and, postpartum, social support and report of infants' negative affect. Maternal mental health symptoms during pregnancy, at the peak of the pandemic, are longitudinally associated with infant negative affect, with postpartum mental health mediating this association. Also, maternal COVID-19 stressful events exposure in the postpartum period is associated with negative affect at 6 months, mediated by postpartum mental health symptoms. Maternal pandemic psychological stress during pregnancy predicted mental health symptoms in the postpartum period. The study supports the association between pandemic-related maternal health across pregnancy and postpartum and offspring's development (i.e., negative affect). It also puts the spotlight on mental health risk in women experiencing lockdown during pregnancy, especially when feeling high psychological stress in pregnancy or when directly exposed to COVID-19 stressful events postpartum. abstract_id: PUBMED:37670606 Maternal Perinatal Stress Trajectories and Negative Affect and Amygdala Development in Offspring. Objective: Maternal psychological stress during pregnancy is a common risk factor for psychiatric disorders in offspring, but little is known about how heterogeneity of stress trajectories during pregnancy affects brain systems and behavioral phenotypes in infancy. This study was designed to address this gap in knowledge. Methods: Maternal anxiety, stress, and depression were assessed at multiple time points during pregnancy in two independent low-risk mother-infant cohorts (N=115 and N=2,156). Trajectories in maternal stress levels in relation to infant negative affect were examined in both cohorts. Neonatal amygdala resting-state functional connectivity MRI was examined in a subset of one cohort (N=60) to explore the potential relationship between maternal stress trajectories and brain systems in infants relevant to negative affect. Results: Four distinct trajectory clusters, characterized by changing patterns of stress over time, and two magnitude clusters, characterized by severity of stress, were identified in the original mother-infant cohort (N=115). The magnitude clusters were not associated with infant outcomes. The trajectory characterized by increasing stress in late pregnancy was associated with blunted development of infant negative affect. This relationship was replicated in the second, larger cohort (N=2,156).
In addition, the trajectories that included increasing or peak maternal stress in late pregnancy were related to stronger neonatal amygdala functional connectivity to the anterior insula and the ventromedial prefrontal cortex in the exploratory analysis. Conclusions: The trajectory of maternal stress appears to be important for offspring brain and behavioral development. Understanding heterogeneity in trajectories of maternal stress and their influence on infant brain and behavioral development is critical to developing targeted interventions. abstract_id: PUBMED:36974514 How do we improve maternal and child health outcomes in Ghana? Maternal and infant mortality remain among the major health challenges in Ghana, with outcomes among the worst in the subregion and the world. Our aim here was to provide insights into how Ghana has approached these challenges, with a view to making suggestions for the future. Ghana has made significant gains in reducing infant and maternal deaths in the past decade through initiatives like the Free Maternal Care Policy, the Community-based Health Planning Services, and the National Health Insurance Policy. These policies have improved financial access to maternal and obstetric health services, facility-based delivery, and antenatal care services in particular. However, a number of challenges still hinder improvement in maternal and child health outcomes. Poor infrastructure, human resource challenges, poor access to essential medicines, poor quality of care, and superstitious and cultural beliefs have been noted in the literature. We suggest that while providing the necessary human and financial resources, other initiatives including the promotion of maternal health education, supervised home delivery, and zero maternal death interventions should be encouraged to help improve maternal and child health outcomes in Ghana. abstract_id: PUBMED:34937600 Effects of maternal exposure to acute stress on birth outcomes: a quasi-experiment study. Numerous studies have shown associations between maternal stress and poor birth outcomes, but the evidence for causal inference is unclear. Natural disasters provide an opportunity to study effects of quasi-randomized hardship with an accurate measure of onset and duration. In a population-based quasi-experimental study, we examined the effect of maternal exposure to the January 1998 Québec ice storm on birth outcomes by comparing pregnant mothers who lived in an area hard hit by the ice storm with those in two unaffected regions. In a total of 147,349 singleton births between 1995 and 2001, we used a difference-in-differences method to estimate the effects of the ice storm on gestational age at delivery (GA), preterm birth (PTB), weight-for-gestational-age z-scores (BWZ), large for gestational age (LGA), and small for gestational age (SGA). After adjusting for maternal and sociodemographic characteristics, there were no differences between the exposed and the unexposed mothers for birth outcomes. The estimated differences (exposed vs. unexposed) were 0.01 SDs (95% CI: -0.02, 0.05) for BWZ; 0.10% point (95% CI: -0.95%, 1.16%) for SGA; 0.25% point (95% CI: -0.78%, 1.28%) for LGA; -0.01 week (95% CI: -0.07, 0.05) for GA; and 0.16% point (95% CI: -0.66%, 0.97%) for PTB. Neither trimester-specific nor dose-response associations were observed. Overall, exposure to the 1998 Québec ice storm as a proxy for acute maternal stress in pregnancy was not associated with poor birth outcomes.
Our results suggest that acute maternal hardship may not have a substantial effect on adverse birth outcomes. abstract_id: PUBMED:26152890 Psychosocial Stress During First Pregnancy Predicts Infant Health Outcomes in the First Postnatal Year. Objectives: To evaluate the impact of psychosocial stress during pregnancy on infant health outcomes in the first postnatal year. Methods: A sample of 3000 women completed a stress inventory (the Psychosocial Hassles Scale) during their third trimester before first childbirth. Infant health outcomes were measured via maternal report at 1, 6 and 12 months postpartum. Poisson regression was used to model the effect of maternal stress during pregnancy on infant health outcomes in the first year, controlling for age, race/ethnicity, education, insurance coverage, marital status, and cigarette smoking during pregnancy. Results: Women who were younger, minority, unmarried, publicly insured and without a college degree were more likely to report high levels of prenatal stress. High prenatal stress was a significant predictor of maternal reporting of gastrointestinal illness (p < 0.0001), respiratory illness (p = 0.025), and total illness in the first year (p < 0.0001). High prenatal stress was also a significant predictor of urgent care visits (p < 0.0001) and emergency department visits (p = 0.001). It was not a significant predictor of hospitalizations (p = 0.36). Conclusions: Maternal prenatal stress is associated with increased maternal reporting of infant illness, as well as increased frequency of both urgent care visits and emergency department visits. Answer: The relationship between acute maternal stress during pregnancy and infant health outcomes is complex and multifaceted. Some studies suggest that maternal stress can have significant effects on infant health and development. For instance, maternal mental health symptoms during pregnancy, such as anxiety and depression, have been associated with infant temperament outcomes, including higher levels of negative affect and orienting/regulatory capacity in infants (PUBMED:37078447). Additionally, maternal prenatal psychosocial stress has been implicated as a source of epigenetic mechanisms that can affect fetal brain development and program risk for emotional dysregulation and mental disorders over a lifetime and across generations (PUBMED:30454636; PUBMED:31227075). Maternal early life adversity has also been linked to infant stress regulation, with maternal prenatal mental health mediating the association between maternal adversity and infant cortisol reactivity (PUBMED:36508054). Furthermore, maternal psychological stress during pregnancy is considered a common risk factor for psychiatric disorders in offspring, and different patterns of stress trajectories during pregnancy have been associated with infant negative affect and amygdala development (PUBMED:37670606). However, not all studies find a direct association between acute maternal stress and adverse infant health outcomes. For example, a study on the effects of maternal exposure to the 1998 Québec ice storm did not find an association between acute maternal stress in pregnancy and poor birth outcomes such as gestational age at delivery, preterm birth, and weight-for-gestational-age (PUBMED:34937600).
Similarly, while high prenatal stress has been linked to increased maternal reporting of infant illness and increased frequency of urgent care and emergency department visits, it was not a significant predictor of hospitalizations (PUBMED:26152890). In summary, while there is evidence to suggest that maternal stress during pregnancy can affect infant health outcomes, particularly in relation to temperament, stress regulation, and potential risk for mental health issues, the impact of acute maternal stress on more immediate birth outcomes and health issues may not be as clear-cut. It is important to consider the type, timing, and severity of stress, as well as other mediating factors such as maternal mental health and responsiveness, when evaluating the potential effects on infant health.
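To make the quasi-experimental logic above concrete, the difference-in-differences contrast used in the ice storm study (PUBMED:34937600) can be sketched in a few lines of Python. The group means below are hypothetical placeholders, not the study's data; the sketch only shows how the estimator nets the pre-to-post change in the unexposed regions out of the change in the exposed region.

    # Difference-in-differences sketch with hypothetical group means
    # (illustrative only; these are not the values from PUBMED:34937600).
    mean_bwz = {
        ("exposed", "pre"): 0.02,    # mean birth-weight z-score, hypothetical
        ("exposed", "post"): 0.03,
        ("unexposed", "pre"): 0.01,
        ("unexposed", "post"): 0.01,
    }
    change_exposed = mean_bwz[("exposed", "post")] - mean_bwz[("exposed", "pre")]
    change_unexposed = mean_bwz[("unexposed", "post")] - mean_bwz[("unexposed", "pre")]
    did_estimate = change_exposed - change_unexposed
    print(f"difference-in-differences estimate: {did_estimate:.2f} SDs")

With these placeholder numbers the estimate is 0.01 SDs, the same order as the null effects the study reports; a real analysis would also adjust for covariates, which this sketch omits.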
Instruction: Does the position of shoulder immobilization after reduced anterior glenohumeral dislocation affect coaptation of a Bankart lesion? Abstracts: abstract_id: PUBMED:25894458 Does the position of shoulder immobilization after reduced anterior glenohumeral dislocation affect coaptation of a Bankart lesion? An arthrographic comparison. Background: The position of immobilization after anterior shoulder dislocation has been a controversial topic over the past decade. We compared the effect of post-reduction immobilization, whether external rotation or internal rotation, on coaptation of the torn labrum. Materials And Methods: Twenty patients aged <40 years with primary anterior shoulder dislocation without associated fractures were randomized to post-reduction external rotation immobilization (nine patients) or internal rotation (11 patients). After 3 weeks, magnetic resonance arthrography was performed. Displacement, separation, and opening angle parameters were assessed and analyzed. Results: Separation (1.16 ± 1.11 vs 2.43 ± 1.17 mm), displacement (1.73 ± 1.64 vs 2.28 ± 1.36 mm), and opening angle (15.00 ± 15.84 vs 27.86 ± 14.74°) in the externally rotated group were decreased in comparison to the internally rotated group. A statistically significant difference between groups was seen only for separation (p = 0.028); p values of displacement and opening angle were 0.354 and 0.099, respectively. Conclusion: External rotation immobilization after reduction of primary anterior shoulder dislocation could result in a decrease in anterior capsule detachment and labral reduction. abstract_id: PUBMED:25855657 Arm Abduction Provides a Better Reduction of the Bankart Lesion During Immobilization in External Rotation After an Initial Shoulder Dislocation. Background: Shoulder dislocation often recurs, especially in the younger population. Immobilization in external rotation, in which a Bankart lesion is displaced in the anterior, medial, and inferior directions, was introduced as a new method of nonoperative treatment, but its clinical efficiency is controversial. In terms of reducing the lesion, it is reasonable to incorporate not only external rotation, which makes the anterior soft tissues tight to push the lesion posteriorly and laterally, but also abduction, which makes the inferior soft tissues tight to push the lesion superiorly. Hypothesis: Abducting the arm during immobilization in external rotation will improve the reduction of a Bankart lesion. Study Design: Controlled laboratory study. Methods: There were 37 patients with initial shoulder dislocation enrolled in this study. After reduction, MRI was taken in 4 positions of the shoulder: adduction and internal rotation (Add-IR), adduction and external rotation (Add-ER), 30° of abduction and 30° of external rotation (Abd-30ER), and 30° of abduction and 60° of external rotation (Abd-60ER). On radial slices, the separation, displacement of the labrum, and opening angle of the capsule were measured. Results: Add-ER improved the reduction of the anterior labrum but not the inferior labrum when compared with Add-IR. Both Abd-30ER and Abd-60ER improved the reduction of the inferior labrum as compared with Add-IR. Furthermore, Abd-60ER improved the reduction more than Add-ER. Conclusion: Among the 4 positions tested, Abd-60ER is the best position in terms of reducing the Bankart lesion. Clinical Relevance: Abducting the shoulder during immobilization in external rotation is demonstrated to improve the reduction of the Bankart lesion.
Therefore, this position is expected to reduce the recurrence rate after initial dislocation of the shoulder. Future clinical trials are necessary. abstract_id: PUBMED:10199277 Position of immobilization after dislocation of the shoulder. A cadaveric study. Background: After reduction of a shoulder dislocation, the torn edges of a Bankart lesion need to be approximated for healing during immobilization. The position of immobilization has traditionally been adduction and internal rotation, but there is little direct evidence to support or discredit the use of this position. The purpose of the present study was to determine the relationship between the position of the arm and the coaptation of the edges of a simulated Bankart lesion created in cadaveric shoulders. Methods: Ten thawed fresh-frozen cadaveric shoulders were used for experimentation. All of the muscles were removed to expose the joint capsule. A simulated Bankart lesion was created by sectioning the anteroinferior aspect of the capsule from the labrum. With linear transducers attached to the anteroinferior and inferior portions of the Bankart lesion, the opening and closing of the lesion were recorded with the arm in 0, 30, 45, and 60 degrees of elevation in the coronal and sagittal planes as well as with the arm in rotation from full internal to full external rotation in 10-degree increments. Results: With the arm in adduction, the edges of the simulated Bankart lesion were coapted in the range from full internal rotation to 30 degrees of external rotation. With the arm in 30 degrees of flexion or abduction, the edges of the lesion were coapted in neutral and internal rotation but were separated in external rotation. At 45 and 60 degrees of flexion or abduction, the edges were separated regardless of rotation. Conclusions: The present study demonstrated that, in the cadaveric shoulder, there was a so-called coaptation zone in which the edges of a simulated Bankart lesion were kept approximated without the surrounding muscles. abstract_id: PUBMED:37857390 Anterior shoulder instability with Hill-Sachs on-track. Bankart or Bankart-remplissage? Introduction: in patients with anterior glenohumeral (GH) instability together with an off-track or engaging Hill-Sachs (HS) defect, Bankart-remplissage (B-R) surgery reduces the recurrence rate when compared to Bankart (B) surgery alone. There is controversy regarding whether the recurrence rate also decreases in patients with on-track or non-engaging Hill-Sachs defects. Objective: to compare the recurrence rate and clinical evolution of patients with anterior glenohumeral instability with 'on-track' Hill-Sachs defect treated with either B or B-R surgery. Material And Methods: non-randomized, retrospective, single-center cohort study of patients with anterior glenohumeral instability and on-track Hill-Sachs defect, operated between January 2010 and December 2018. Patients operated with B versus B-R were compared. Recurrence, complications and re-operation were recorded. In addition, VAS, SSV, WOSI and qDASH scores were obtained and compared in both groups. Results: of the 105 patients who met the inclusion criteria, 78 (74.3%) patients had a complete follow-up (52 B and 26 B-R, 4.3 years median follow-up). There was a higher recurrence rate in group B compared to B-R, with this difference not reaching statistical significance (17.3% vs 7.7%, p = 0.21). 
There were no significant differences in residual pain, feeling of instability, complications or VAS, qDASH, SSV or WOSI scores between both groups. In the subgroup analysis, patients who practiced contact sports and were operated with B showed higher recurrence rates (24.1% vs 0%, p = 0.08) and complications (41.4% vs 18.2%, p = 0.16) when compared to B + R, although these differences were not significant. Conclusion: there were no significant differences in recurrence rates and functional evolution between patients with anterior glenohumeral instability operated with B or B-R surgery. Comparative, prospective studies should be performed to establish definitive recommendations. abstract_id: PUBMED:32246607 Treatment of anterior shoulder instability with remplissage for Hill-Sachs injuries and Bankart injury compared to pure Bankart injuries Background: Anterior shoulder dislocation accounts for more than 90% of cases, and the main cause is traumatic; two main lesions are described in this pathology: the Bankart and Hill-Sachs injuries. The recurrence rate is not identical between open and arthroscopic repair, and arthroscopic surgery may offer advantages: less loss of range of motion, lower risk of subscapularis muscle damage, faster return to daily activities, and greater patient satisfaction. Objective: Assessing functionality, mobility and stability of the shoulder in patients treated with arthroscopic Bankart repair versus arthroscopic Bankart repair + remplissage. Methods: Clinical records of patients with shoulder instability were reviewed, and Hill-Sachs and Bankart lesions were documented; 21 post-surgical patients were physically examined to evaluate range of motion, and the Rowe functional scale and the Western Ontario Shoulder Instability Index were used. Mean follow-up was 13 months. Results: There was no recurrence of dislocation with either technique; greater satisfaction was observed in the remplissage group, although the limitation of the arc of motion was also greater. Conclusion: Both approaches reduce instability, control pain and mostly satisfy patients over the 13-month follow-up. abstract_id: PUBMED:34332055 Anterior Shoulder Instability Part I-Diagnosis, Nonoperative Management, and Bankart Repair-An International Consensus Statement. Purpose: The purpose of this study was to establish consensus statements via a modified Delphi process on the diagnosis, nonoperative management, and Bankart repair for anterior shoulder instability. Methods: A consensus process on the treatment using a modified Delphi technique was conducted, with 65 shoulder surgeons from 14 countries across 5 continents participating. Experts were assigned to one of 9 working groups defined by specific subtopics of interest within anterior shoulder instability. Results: The independent factors identified in the 2 statements that reached unanimous agreement in diagnosis and nonoperative management were age, gender, mechanism of injury, number of instability events, whether reduction was required, occupation, sport/position/level played, collision sport, glenoid or humeral bone-loss, and hyperlaxity. Of the 3 total statements reaching unanimous agreement in Bankart repair, additional factors included overhead sport participation, prior shoulder surgery, patient expectations, and ability to comply with postoperative rehabilitation.
Additionally, there was unanimous agreement that complications are rare following Bankart repair and that recurrence rates can be diminished by a well-defined rehabilitation protocol, inferior anchor placement (5-8 mm apart), multiple small-anchor fixation points, treatment of concomitant pathologies, careful capsulolabral debridement/reattachment, and appropriate indications/assessment of risk factors. Conclusion: Overall, 77% of statements reached unanimous or strong consensus. The statements that reached unanimous consensus were the aspects of patient history that should be evaluated in those with acute instability, the prognostic factors for nonoperative management, and Bankart repair. Furthermore, there was unanimous consensus on the steps to minimize complications for Bankart repair, and the placement of anchors 5-8 mm apart. Finally, there was no consensus on the optimal position for shoulder immobilization. Level Of Evidence: Level V, expert opinion. abstract_id: PUBMED:18212348 The effect of glenohumeral position on the shoulder after traumatic anterior dislocation. Background: Previous cadaveric studies suggest that positioning the shoulder in an externally rotated position reduces displaced Bankart lesions through a coaptation effect. Hypothesis: We hypothesized that positioning the glenohumeral joint in an externally rotated position creates contact pressure between the subscapularis and the anterior labrum. Study Design: Descriptive laboratory study. Methods: Eight cadaveric shoulders were used. Contact pressure between the subscapularis and labrum was measured in varying glenohumeral positions using a Tekscan pressure monitor. The position of the anterior band of the inferior glenohumeral ligament was also digitized in those positions. All shoulders were tested in the intact condition, following a surgically created Bankart lesion and following anterior shoulder dislocation. These conditions were also verified by measuring glenohumeral translation and joint forces. Results: For all 8 specimens, the contact pressure between the subscapularis and the anterior labrum was negligible with the humerus externally rotated up to 90 degrees at all abduction angles in intact, surgically created Bankart, and dislocated specimens. There were several glenohumeral positions where the anterior band of the inferior glenohumeral ligament strain in the intact specimens was similar to the postdislocation condition, that is, no statistically significant difference between intact and dislocated specimens. These positions included 30 degrees of glenohumeral abduction with external rotation between 0 degrees and 60 degrees, as well as 45 degrees of abduction with external rotation of 0 degrees and 60 degrees. The anterior band of the inferior glenohumeral ligament strain and glenohumeral anterior-posterior translation measurements revealed that the surgically created Bankart lesion does not simulate the conditions after anterior dislocation of the shoulder. Conclusion: External rotation of the shoulder does not create contact pressure between the subscapularis and the anterior labrum before or after anterior dislocation. Clinical Relevance: The efficacy of external rotation immobilization after anterior-inferior shoulder dislocation is not likely to be related to coaptation of the Bankart lesion by the subscapularis. abstract_id: PUBMED:29248765 Chronic anterior shoulder instability with significant Hill-Sachs lesion: Arthroscopic Bankart with remplissage versus open Latarjet procedure. 
Background: The objective of this study was to compare short-term shoulder stability after arthroscopic Bankart repair with remplissage versus the open Latarjet procedure in patients who had chronic anterior shoulder instability with a significant Hill-Sachs lesion. Hypothesis: The dislocation recurrence rate is higher after Bankart repair with remplissage than after open Latarjet. Patients And Methods: An observational non-randomised retrospective cohort study was conducted at two surgical centres in patients treated for chronic anterior shoulder instability with a significant Hill-Sachs defect between January 2009 and July 2014. The study compared 43 patients managed by arthroscopic Bankart repair and remplissage and 43 patients managed with open Latarjet. The two groups were matched on age at surgery and on follow-up duration. All patients were evaluated by independent observers based on a questionnaire including recurrences, range of motion, and functional outcomes (Shoulder Subjective Value [SSV], Walch-Duplay score, and Rowe score). Results: Mean follow-up was 47.3 months (range, 24-67 months). The recurrence rate at last follow-up was not significantly different between the two groups (9.3% versus 11.2%; P=0.67). The Bankart group had significantly greater loss of external rotation and a significantly higher proportion of patients with residual pain (21% versus 9%, P=0.023). The SSV, Walch-Duplay score, and Rowe score values were similar between groups. Discussion: In patients who had chronic anterior shoulder instability with a significant Hill-Sachs lesion, arthroscopic Bankart repair with remplissage and open Latarjet were reliable and safe procedures associated with low and similar recurrence rates. However, loss of external rotation and residual pain were significantly more common with the combined Bankart-remplissage procedure. Level Of Evidence: III; comparative retrospective study. abstract_id: PUBMED:20661401 Twelve years' experience of the mini-Bankart repair for recurrent anterior dislocation of the shoulder. Stabilization for recurrent anterior shoulder dislocation can be achieved through either an open or arthroscopic approach. The former tends to have a lower recurrence rate but longer rehabilitation.The technique of mini-Bankart repair has been used at this establishment since 1996. We retrospectively reviewed the patients that had undergone this procedure. We describe our experience of the mini-Bankart procedure and the results in 24 patients with a mean follow-up of 56 months (range, 12-144 months).The technique is a direct mini-approach to the shoulder joint, preserving the inferior portion of subscapularis. Where present, a Bankart lesion is repaired with two GII Mitek anchors (Ethicon) and the capsule reefed. There were no incidences of repeat anterior dislocation, and the average time period taken to return to work was 8.8 weeks. We recommend this technique due to its low recurrence rate and satisfactory return to normal function. abstract_id: PUBMED:23325970 Analysis of the functional results of arthroscopic Bankart repair in posttraumatic recurrent anterior dislocations of shoulder. Background: The Bankart lesion represents the most common form of labro-ligamentous injury in patients with traumatic dislocations of the shoulder leading to shoulder instability. We report the clinical outcome of arthroscopic repair of Bankart lesion in 50 patients. 
Materials And Methods: Sixty-five patients with posttraumatic anterior dislocation of the shoulder were treated by arthroscopic repair from Jan 2005 to Nov 2008. Fifty patients, with an average age of 26.83 years (range 18-45 years), were reviewed in the study. The average follow-up period was 27 months (range 24-36 months). The University of California Los Angeles shoulder rating scale was used to determine the outcome after surgery. The recurrence rates, range of motion, as well as postoperative function and return to sporting activities were evaluated. Results: Thirty-six patients (72.0%) had excellent results, whereas seven patients (14.0%) had good results. The mean pre- and postoperative range of external rotation was 80.38° and 75.18°, respectively. Eighty-six percent of patients had stability compared with the normal-sided shoulder and were able to return to sports. There were no cases of redislocation observed in this study; however, three cases had mild laxity of the joint. Conclusion: Arthroscopic Bankart repair with the use of suture anchors is a reliable treatment method, with good clinical outcomes, excellent postoperative shoulder motion and low recurrence rates. Answer: Yes, the position of shoulder immobilization after reduced anterior glenohumeral dislocation does affect the coaptation of a Bankart lesion. Studies have shown that external rotation immobilization after reduction of primary anterior shoulder dislocation could result in a decrease in anterior capsule detachment and improved labral reduction, with a statistically significant difference in separation compared to internal rotation immobilization (PUBMED:25894458). Additionally, arm abduction during immobilization in external rotation has been demonstrated to improve the reduction of the Bankart lesion, with the position of 30° of abduction and 60° of external rotation (Abd-60ER) being the best in terms of reducing the lesion (PUBMED:25855657). A cadaveric study also indicated that with the arm in adduction, the edges of a simulated Bankart lesion were coapted in the range from full internal rotation to 30 degrees of external rotation, suggesting a so-called coaptation zone without the surrounding muscles (PUBMED:10199277). However, another study found that external rotation of the shoulder does not create contact pressure between the subscapularis and the anterior labrum before or after anterior dislocation, suggesting that the efficacy of external rotation immobilization is not likely related to coaptation of the Bankart lesion by the subscapularis (PUBMED:18212348). Despite these findings, there was no consensus on the optimal position for shoulder immobilization in the context of anterior shoulder instability (PUBMED:34332055). Overall, the evidence suggests that the position of immobilization can influence the anatomical coaptation of a Bankart lesion, with external rotation and abduction positions showing potential benefits in terms of lesion reduction and stability.
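As a rough plausibility check on the key comparison in that answer, the separation figures reported in PUBMED:25894458 (1.16 ± 1.11 mm with external rotation, n = 9, versus 2.43 ± 1.17 mm with internal rotation, n = 11) can be re-tested from the summary statistics alone. The abstract does not say which test the authors applied, so the Welch unequal-variance t-test below is an assumption; it lands near, though not exactly on, the reported p = 0.028.

    # Two-sample t-test recomputed from the summary statistics in PUBMED:25894458.
    # Assumption: Welch's unequal-variance t-test (the abstract does not name the test).
    from scipy.stats import ttest_ind_from_stats

    t_stat, p_value = ttest_ind_from_stats(
        mean1=1.16, std1=1.11, nobs1=9,   # external rotation group, separation in mm
        mean2=2.43, std2=1.17, nobs2=11,  # internal rotation group, separation in mm
        equal_var=False,
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p comes out close to the reported 0.028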
Instruction: Do doctors vote? Abstracts: abstract_id: PUBMED:30872948 Effective Vote Markets and the Tyranny of Wealth. What limits should there be on the areas of life that are governed by market forces? For many years, no one seriously defended the buying and selling of votes for political elections. In recent years, however, this situation has changed, with a number of authors defending the permissibility of vote markets (e.g. Freiman 2014). One popular objection to such markets is that they would lead to a tyranny of wealth, where the poor are politically dominated by the rich. In a recent paper, Taylor (Res Publica 23(3):313-328, 2017. doi:10.1007/s11158-016-9327-0) has argued that this objection can be avoided if certain restrictions are placed on vote markets. In this paper we will argue that this attempt to rebut an argument against vote markets is unsuccessful. Either vote markets secure their purported benefits but then they inevitably lead to a tyranny of wealth, or they are restricted so heavily that they lack the features that have been claimed to make vote markets attractive in the first place. Using Taylor's proposal as a test case, we make the more general claim that vote markets cannot avoid the tyranny of wealth objection and bring about their supposed benefits at the same time. abstract_id: PUBMED:32518108 Universal vote-by-mail has no impact on partisan turnout or vote share. In response to coronavirus disease 2019 (COVID-19), many scholars and policy makers are urging the United States to expand voting-by-mail programs to safeguard the electoral process. What are the effects of vote-by-mail? In this paper, we provide a comprehensive design-based analysis of the effect of universal vote-by-mail (a policy under which every voter is mailed a ballot in advance of the election) on electoral outcomes. We collect data from 1996 to 2018 on all three US states that implemented universal vote-by-mail in a staggered fashion across counties, allowing us to use a difference-in-differences design at the county level to estimate causal effects. We find that 1) universal vote-by-mail does not appear to affect either party's share of turnout, 2) universal vote-by-mail does not appear to increase either party's vote share, and 3) universal vote-by-mail modestly increases overall average turnout rates, in line with previous estimates. All three conclusions support the conventional wisdom of election administration experts and contradict many popular claims in the media. abstract_id: PUBMED:36589468 The Asian American Vote in 2020: Indicators of Turnout and Vote Choice. What were the indicators of voter turnout and presidential vote choice among Asian Americans in 2020? We argue that 2020 was a unique year in which race was salient for Asian Americans due to the rise of anti-Asian attitudes attributed to the COVID-19 pandemic and the opportunity to elect a vice presidential candidate of Asian descent. Because of this, racial considerations played a unique role that informed Asian American political participation and attitudes in this election. Using data from the 2020 Collaborative Multiracial Post-Election Survey, we identify the individual-level factors associated with turnout and presidential vote choice among Asian Americans. We find that stronger perceptions of racial discrimination were related to a higher likelihood of turnout and voting in support of the Democratic Party, especially among Asian immigrants relative to the native-born.
This study offers new insight into when we can expect racial considerations to inform the politics of Asian Americans, who are the fastest-growing racial group in the United States and therefore an increasingly important bloc of the electorate. Supplementary Information: The online version contains supplementary material available at 10.1007/s11109-022-09844-9. abstract_id: PUBMED:33144746 Vote-by-Mail: COVID-19 and the 2020 Presidential Primaries. In this year of pandemic, it seems assured that a record number of citizens will choose to vote by mail. But approval of this method of voting appears increasingly divided along partisan lines, thanks in part to President Trump's declamations. Evidence from the presidential primaries held earlier this year indicates that allegiance to the president, as well as relative lack of concern about the COVID-19 virus, made voters less likely to choose to vote by mail. abstract_id: PUBMED:37628260 Nonequilibrium Thermodynamics of the Majority Vote Model. The majority vote model is one of the simplest opinion systems yielding distinct phase transitions and has garnered significant interest in recent years. This model, like many other stochastic lattice models, is formulated in terms of stochastic rules with no connection to thermodynamics, precluding the evaluation of quantities such as power and heat, as well as their behaviors at phase transition regimes. Here, we circumvent this limitation by introducing the idea of a distinct and well-defined thermal reservoir associated with each local configuration. Thermodynamic properties are derived for a generic majority vote model, irrespective of its neighborhood and lattice topology. The behavior of energy/heat fluxes at phase transitions, whether continuous or discontinuous, in regular and complex topologies, is investigated in detail. Unraveling the contribution of each local configuration explains the nature of the phase diagram and reveals how dissipation arises from the dynamics. abstract_id: PUBMED:35185309 Closest to the People? Incumbency Advantage and the Personal Vote in Non-Partisan Elections. Do incumbents dominate non-partisan elections because of an especially large personal vote? This question has important implications for understanding the causes of incumbent success and the benefits or drawbacks of non-partisan elections. This paper uses a natural experiment, combined with three original datasets, to estimate the size, persistence, and consequences of the personal vote in a large non-partisan city election. We first use individual-level survey data to show that individuals assigned quasi-randomly to a new incumbent are substantially less likely to support the incumbent. We use a second survey, one year later, to demonstrate the persistence of this effect. Finally, we use historical election results to simulate the electoral consequences of the personal vote; we find that the personal vote is sufficiently large to affect one in four incumbent races. We conclude that the personal vote, while large and important, is not sufficient to explain incumbent dominance in non-partisan contests.
We test the argument using survey data from Austria, the only European country with a voting age of 16 in nation-wide elections. While the turnout levels of young people under 18 are relatively low, their failure to vote cannot be explained by a lower ability or motivation to participate. In addition, the quality of these citizens' choices is similar to that of older voters, so they do cast votes in ways that enable their interests to be represented equally well. These results are encouraging for supporters of a lower voting age. abstract_id: PUBMED:27499350 Mental illness and the right to vote: a review of legislation across the world. The right to vote is an important right signifying freedom of thought as well as full citizenship in any setting. The right to vote is enshrined and protected by international human rights treaties. The right of 'everyone' to take part in the political process and elections is based on universal and equal suffrage. Although these International Conventions have been ratified by the large majority of United Nations Member States, their application across the globe is by no means universal. This study sets out to examine the domestic laws of UN Member States in order to explore whether individuals with mental health problems have the right to vote in actuality and, thus, can participate in political life. Through various searches, electoral laws and Constitutions of 193 Member States of the United Nations were studied. The authors were able to find legislation and/or Constitutional provisions in 167 of the 193 Member States. Only twenty-one countries (11%) placed no restrictions on the right to vote by persons with mental health problems. Over one third of the countries (36%) deny all persons with any mental health problems a right to vote without any qualifier. Some of these discriminatory attitudes are reflected in the multiplicity of terms used to describe persons with mental health problems. Another 21 countries (11%) denied the right to vote to detained persons; of these, nine Member States specifically denied the right to vote to persons who were detained under the mental health law, while the remainder denied the right to vote to all those who were interdicted or judicially interdicted. It would appear that in many countries the denial of voting rights is attributed to a lack of ability to consent by the individuals with mental illness. Further exploration is required to understand these variations, which exist in spite of international treaties. abstract_id: PUBMED:29230185 The Relationship between the Brexit Vote and Individual Predictors of Prejudice: Collective Narcissism, Right Wing Authoritarianism, Social Dominance Orientation. The Leave campaign in the U.K., which advocated exiting the European Union, emphasized anxiety over immigration and the need to take control of the U.K.'s borders. Citizens who expressed concerns about immigration to the U.K. were more likely to vote to leave. Two correlational studies examined the previously unexplored question of whether the Brexit vote and support for the outcome of the E.U. referendum were linked to individual predictors of prejudice toward foreigners: British collective narcissism (a belief in national greatness), right wing authoritarianism, and social dominance orientation. The results converged to indicate that all three variables were independently related to the perceived threat of immigrants and, via this variable, to the Brexit vote and support for the outcome of the E.U.
referendum. These variables explained the variance in the perceived threat of immigrants and support for the Brexit vote over and above other previously examined predictors such as age, education, or ethnicity, as well as national identification and national attachment. abstract_id: PUBMED:35994833 Use of VOTE score in predicting outcomes in pediatric obstructive sleep apnea. Objectives: Obstructive Sleep Apnea (OSA) affects 1-4% of the pediatric population in the U.S. Drug-Induced Sleep Endoscopy (DISE) is widely used to localize the level(s) of obstruction. The VOTE classification system is used to grade obstructions found at the velum, oropharynx, base of tongue, and epiglottis and has been validated in adults. This study aims to determine if the VOTE score has any predictive value in pediatric OSA postoperative outcomes. Methods: A retrospective chart review of 129 patients from January 7, 2016, to May 30, 2020, was performed. Included patients were between the ages of 2 and 17, underwent DISE, and had preoperative and postoperative polysomnography (PSG) data. Excluded patients did not meet one of the above criteria or had other comorbidities contributing to their sleep apnea. Fifty-three patients were included. Results: Pearson correlation tests compared the intraoperative VOTE score with postoperative BMI, AHI, and O2 nadir. We found a weakly positive correlation between the VOTE score and postoperative AHI, with a coefficient of 0.35 and a p-value of 0.01. We found a relationship between postoperative O2 nadir and AHI, with a coefficient of -0.627 and a p-value <0.0001. Finally, a paired, two-tailed t-test compared the mean change between preoperative and postoperative BMIs (+1.6), oAHIs (-23.5), and O2 nadirs (+14), all with p-values <0.0001. Conclusion: We demonstrated a correlation between VOTE and improved postoperative AHI and a relationship between improved postoperative AHI and postoperative O2 nadir. The validity of VOTE may be proven with a larger sample size. Alternatively, a different scoring system may be required for pediatric OSA. Answer: The provided abstracts do not directly address the question of whether doctors vote. The abstracts focus on various aspects of voting behavior and electoral systems, such as the impact of vote markets on political equality (PUBMED:30872948), the effects of universal vote-by-mail on turnout and vote share (PUBMED:32518108), factors influencing Asian American voter turnout and presidential vote choice (PUBMED:36589468), the influence of COVID-19 on vote-by-mail preferences (PUBMED:33144746), thermodynamics of the majority vote model (PUBMED:37628260), incumbency advantage in non-partisan elections (PUBMED:35185309), voting rights for individuals under 18 and those with mental illness (PUBMED:24850994, PUBMED:27499350), predictors of prejudice related to the Brexit vote (PUBMED:29230185), and the use of the VOTE score in predicting outcomes in pediatric obstructive sleep apnea (PUBMED:35994833). None of these abstracts provide information on the voting behavior of medical professionals, such as doctors. To answer the question "Do doctors vote?" one would need to look at research specifically examining the political engagement and voting patterns of medical professionals, which is not covered in the provided abstracts.
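The majority vote model in PUBMED:37628260 is the one computational object among these abstracts, and a minimal simulation may help readers unfamiliar with it. The sketch below illustrates only the model's basic update rule on a small square lattice, not the paper's thermodynamic analysis: each site adopts its neighborhood majority with probability 1 - q and opposes it with probability q, with the noise q playing the role of temperature; the lattice size, noise level, and sweep count are arbitrary choices for illustration.

    # Minimal majority vote model on an L x L square lattice (illustration only;
    # the heat and power fluxes studied in the cited paper are not computed here).
    import random

    L, q, sweeps = 32, 0.05, 200          # lattice size, noise parameter, Monte Carlo sweeps
    spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

    for _ in range(sweeps * L * L):       # one update attempt per site per sweep, on average
        i, j = random.randrange(L), random.randrange(L)
        s = (spins[(i - 1) % L][j] + spins[(i + 1) % L][j]
             + spins[i][(j - 1) % L] + spins[i][(j + 1) % L])
        majority = 1 if s > 0 else -1 if s < 0 else random.choice([-1, 1])
        # follow the local majority with probability 1 - q, oppose it with probability q
        spins[i][j] = majority if random.random() > q else -majority

    m = abs(sum(sum(row) for row in spins)) / (L * L)
    print(f"order parameter |m| at q={q}: {m:.2f}")  # high (ordered) below the critical noise

Sweeping q upward and recording |m| reproduces the order-disorder phase transition the abstract refers to.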
Instruction: Is histological examination of tissue removed by general practitioners always necessary? Abstracts: abstract_id: PUBMED:9277607 Is histological examination of tissue removed by general practitioners always necessary? Before and after comparison of detection rates of serious skin lesions. Objectives: To examine whether histological examination of all tissue removed by general practitioners in minor surgery increases the rate of detection of clinically important skin lesions, and to assess the impact of such a policy on pathologists' workload. Design: Before and after comparison. Setting: Stratified random sample of 257 general practitioner partnerships from the catchment areas of 19 English pathology laboratories. Subjects: Tissue removed in minor surgery by general practitioners during the control period (September 1992 to February 1993) and intervention period (September 1993 to February 1994). Intervention: General practitioners referred to their local pathology laboratory all solid tissue removed in all minor surgery, irrespective of their previous policy. Main Outcome Measures: Numbers of specimens referred for histology by general practitioners during intervention and control periods; numbers of primary malignant melanomas, non-melanoma malignancies, premalignant lesions, and benign lesions. Results: 257/330 partnerships participated (response rate 78%). During the intervention period 5723 specimens were sent, compared with 4430 during the control period. The referral rate increased by an estimated 1.34 specimens per 1000 patient years (95% confidence interval 0.93 to 1.76, P < 0.0001). General practitioners sent 204 specimens that were malignant (including 16 malignant melanomas) in the control period and 188 that were malignant (including 15 malignant melanomas) during the intervention period (change in total number of malignancies, -1.0 per 100,000 patient years (-5.9 to 3.8), non-significant). Conclusions: The intervention was associated with a substantial increase in laboratory workload, all of which was accounted for by increases in non-serious lesions. This observation should be taken into account when considering the merits of a policy requiring histological examination in every case. abstract_id: PUBMED:35172736 Physical examination performed by general practitioners in 5 community health service institutions in Beijing: an observational study. Background: Physical examination is a core component of consultation. Little is known about the status quo of physical examinations performed by general practitioners in community health service institutions in China. The aim of this study was to investigate general practitioners' performance of physical examinations in consultations. Methods: An observational study was conducted in 5 community health service institutions in Beijing between November 2019 and January 2020. Eleven general practitioners were observed for one workday. Information on consecutive consultations was recorded, including patient characteristics, reasons for encounter, physical examinations performed by general practitioners, length of consultation time and time spent on specific activities in consultations. Results: A total of 682 consultations of 11 general practitioners were recorded. Physical examinations were performed in 126 consultations (15.8%). Physical examination was more likely to be performed in patients visiting with symptoms (P < 0.001).
The majority of the 126 physical examinations were distributed in "Head, face, and neck examination" (n = 54, 42.9%) and "Cardiovascular examination" (n = 55, 43.7%). No physical examination was performed on skin, male genitalia, female breasts and genitalia, and neurological systems. A total of 2823 min of activities were observed and recorded. General practitioners only spent 3.1% of the recorded time on physical examination, which was less than the time spent on taking history (18.2%), test (4.9%), diagnosis (22.7%), therapy (38.4%), and health education (8.6%). The average time spent on physical examinations was 0.8±0.4 min per consultation. Conclusion: Physical examination was insufficiently performed by general practitioners in community health service institutions in Beijing. More time and commitment should be advocated for improving the quality of physical examinations in primary care. abstract_id: PUBMED:37016344 Survey on the management of childhood epilepsy among general practitioners in the area of Marrakech. Background: Epilepsy is a common chronic neurological disorder in the pediatric population and its evolution can be fatal. It represents a major public health problem as well as an economic burden for the families of affected children, health systems and the overall economies of countries. This further accentuates the role that general practitioners can play in the management of childhood epilepsy in the face of the persistent lack of neurologists and neuro-pediatricians in our country. Methods: In order to assess the knowledge and therapeutic habits of general practitioners, we carried out a descriptive and cross-sectional study with general practitioners practicing in the two healthcare sectors, public and private, and in two settings, urban and rural, during the year 2018. The data were collected through a pre-established survey. Results: In total, 155 general practitioners responded to the survey. For 85.2% of physicians, the diagnosis of childhood epilepsy was based on history-taking, physical examination, and EEG. Brain imaging would be systematic regardless of the type of epilepsy for 45.2% of doctors. Only 6 doctors (3.9%) had knowledge of the latest classifications of the "ILAE". For treatment, the majority of physicians (65.5%) adopted first-line monotherapy, with valproate the leading choice. Almost half of the doctors (48.4%) found that education of parents and children was always necessary. None of the GPs interviewed in our series assessed the academic impact of epilepsy. Only 32% of doctors had received continuing education on epilepsy. Conclusion: The data from our study demonstrate that continuing education on the management of childhood epilepsy and the greater involvement of general practitioners are essential elements in improving care. abstract_id: PUBMED:38103976 Is systematic histological examination of the cholecystectomy specimen always necessary? Introduction: The histological examination (HE) of all cholecystectomy specimens removed for cholelithiasis is a widespread practice to rule out unrecognized gallbladder cancer (GBC). But this dogmatic practice has been called into question by recently published data. The aim of this literature review was to answer two questions: (1) can HE be omitted in specific cases; (2) under what conditions is a selective strategy indicated? Methods: A review of the literature was carried out that included selected multicenter studies, registry studies, or meta-analyses.
A reliable technique for the surgeon's macroscopic examination of the specimen would allow the selection of dubious cases for HE. The cost-effectiveness of selective HE was discussed. The PICO methodology (population, intervention, comparator, outcome) was used in the selection of articles that compared routine and selective histological examination. Results: If cases from countries with a high prevalence of gallbladder cancer are excluded and in the absence of high-risk situations (advanced age, female gender, calcified or porcelain gallbladder, acute cholecystitis, polyps, abnormalities noted intra-operatively), the macroscopic examination of the gallbladder in the operating room has a reliability approaching 100% in the majority of published studies. This would make it possible to omit systematic HE without compromising the diagnosis and prognosis of patients with unsuspected GBC, and with a very favorable cost-effectiveness ratio. Conclusion: Through a selection of patients at very low risk of incidentally-discovered cancer and a routine macroscopic examination of the opened gallbladder, the strategy of selective HE could prove useful in both clinical and economic terms. abstract_id: PUBMED:29509219 Observance of the practice of digital rectal examination. Survey of general practitioners in East-central Tunisia. Introduction: Digital rectal examination (DRE) is a simple gesture used for diagnosis of several diseases. However, some general practitioners (GPs) are practicing it less and less often. Aim: To estimate the rate of indicated but unperformed DREs and to analyze the factors preventing their performance. Methods: This was a prospective observational study conducted among 105 GPs practicing in Sousse. We used a pre-established, pre-tested, self-administered questionnaire. Results: We identified 551 DREs that were indicated but not performed. Non-performance of DRE was significantly associated with lack of experience, proximity of a specialist, lack of training, and lack of conviction of the importance of this examination. Embarrassment during performance of the DRE was reported in 69.3% of cases. Factors associated with this embarrassment were: female practitioner; the young age of the practitioner; rural practice; and the nature of internship placements. Conclusion: Although it is a simple and inexpensive gesture, the DRE remains neglected by many physicians. Several factors appear to influence whether the DRE is performed. Ongoing continuing medical education seems necessary, especially as we found a lack of belief in the importance of this examination. abstract_id: PUBMED:22198648 Carpal tunnel syndrome diagnosed by general practitioners: an observational study. The aim of this study was to investigate the results of both clinical testing and standardised nerve conduction studies performed on patients with carpal tunnel syndrome (CTS) complaints who had been referred to the neurologist by their general practitioners. Analysis of the data from neurological examination and electrodiagnostic tests (EDX) was performed on patients who had been referred by general practitioners. A total of 232 patients with clinically defined CTS, who had been referred by general practitioners, were seen by a neurologist and subsequently underwent electrodiagnostic testing. The diagnosis of CTS made by general practitioners was clinically confirmed by the neurologist in 187 of 232 (81%) patients.
In these 187 patients, EDX confirmed the clinical diagnosis of CTS in 180. In 40 (17%), the neurologists disagreed with the clinical diagnosis of CTS because the signs and symptoms were not those of clinical CTS. We showed that general practitioners are very well capable of making a clinical diagnosis of CTS. Therefore, direct referral of patients by general practitioners for nerve conduction studies to have their diagnosis of CTS confirmed is a desirable and time-saving procedure. abstract_id: PUBMED:32913165 Self-reported confidence with ocular examination and management of eye diseases by general medical practitioners. Aims: The aim of the study is to determine the confidence of general practitioners (GPs) with the ophthalmic exam and the management of eye diseases. Materials And Methods: Using a self-administered questionnaire, information on sociodemographics, medical practice experience, confidence with the eye exam, and management of eye diseases was obtained from GPs at the General Outpatient Department. Responses on level of confidence were ranked with a Likert scale and analyzed with the Statistical Package for the Social Sciences, version 23. Results: Twenty-two GPs with a mean medical practice experience of 17.4 ± 8.5 years participated. Twelve (54.5%) GPs routinely examined patients' eyes. Pen torch assessment of the ocular surface was the most commonly performed eye exam; 1 (4.6%) tested visual acuity, while none performed ophthalmoscopy. Seventeen (77.3%) GPs rated themselves average or higher in interpreting pen torch examination of the ocular surface. Expressed diagnostic confidence was highest for pterygium, 19 (86.4%), and low for interpreting visual acuity, 8 (36.4%); 13 (59.1%) were confident with diagnosing cataract. While no GPs (100.0%) were confident with diagnosing and managing posterior segment diseases, 19 (86.4%) felt that they could confidently manage allergic and bacterial conjunctivitis. Seventeen (77.3%) GPs thought their undergraduate exposure in ophthalmology was inadequate and 21 (95.5%) felt that update courses in ophthalmology were necessary. Conclusions: Half of the GPs performed eye examinations. Self-reported confidence in ophthalmoscopy, diagnosis, and management of posterior segment diseases was low among GPs. Diagnostic confidence was highest for pterygium. Continuing ophthalmic education and provision of basic ophthalmic equipment are recommended to improve the confidence of GPs in the management of ocular disorders. abstract_id: PUBMED:24272756 The role of histology in forensic autopsies: is histological examination always necessary to determine a cause of death? In England and Wales there is a conflict between the law and advice from regulatory bodies in relation to the sampling of human tissue for histological examination following medico-legal post-mortem examinations. Considering the results of previous publications, we performed a specific study to investigate the role of histology in determining the cause of death in cases at a forensic unit. A retrospective study of 500 adult forensic cases was performed. Cases were categorized by the role the histological examination played in determining a cause of death and its contributory factors. Furthermore, cause of death, manner of death, organ systems involved, and discrepancies were assessed. Of the 500 cases, histology was undertaken in 287 cases (58%). Microscopic examination provided the cause of death in 2% of cases where histology had been undertaken, and it added to the cause of death in 8%.
In 61% of cases microscopy confirmed the macroscopic findings, and in 30% it did not influence the medical cause of death. Histological examination of all organs in all forensic cases for the purpose of providing a medical cause of death is not supported. Practice guidance should be adjusted to reflect that, while histological examination is essential in certain circumstances, the decision to retain material for histology should be made on a case-by-case basis at the pathologist's discretion. abstract_id: PUBMED:9531199 The utility of histological examination of tissue removed during elective joint replacement. A preliminary assessment. The utility of histological examination of tissue removed during elective joint replacement has not been determined. During a one-year period, tissue removed during 168 total joint replacements was submitted for histological examination. The clinical and histological diagnoses, the cost of the histological study, and the clinical course were determined for all joints. Degenerative joint disease, rheumatoid arthritis, and avascular necrosis accounted for 98 per cent of the histological diagnoses. There were sixteen discrepancies between the clinical and histological diagnoses. The histological diagnosis did not affect the treatment of fifteen of these joints. However, the treatment was altered for one joint that had a clinical diagnosis of degenerative joint disease and a histological diagnosis of osteomyelitis; on review, the initial histological diagnosis was determined to be incorrect. In 1996 dollars, the cost of histological examination for all 168 joints was $10,698.24. Although there would be considerable cost-savings on a population basis if histological examination were not performed, this savings must be weighed against the effect of a misdiagnosis on the management of a particular patient. abstract_id: PUBMED:25829322 Is there a need for a histological examination of the adenoid tissue after adenoidectomy? Objectives: In Germany there is no uniform practice regarding the histological examination of removed tissue after an adenoidectomy. In addition, the unique benefits of routine histological examination of adenoid tissue cannot be ascertained from current literature due to varying opinion and evidence. Methods: This study was approved by the Medical Association and Ethics Committee in Cottbus, Brandenburg, Germany. We evaluated the pre- and intra-operative examination findings (direct and indirect epipharyngoscopy, digital palpation, intraoperative macroscopic assessment and examination of removed tissue) and the histopathological findings of excised adenoid tissue. A statistical analysis of the obtained results was undertaken thereafter. Results: It was found that in all young patients included in the study, the histological results invariably represented lymphoepithelial hypertrophic mucosa with a varying inflammatory response, matching a clinical picture of adenoid vegetation and in keeping with the pre- and intraoperative investigations and findings. Conclusion: The characteristic history of adenoid vegetation, in combination with a classic pre- and intraoperative clinical examination and an inconspicuous macroscopic examination of removed tissue, renders routine histopathological workup dispensable. On suspicion of deviant pathology a histopathological examination is recommended and should be undertaken. Answer: The necessity of histological examination of tissue removed by general practitioners is not always clear-cut.
According to a study examining the impact of referring all tissue removed by general practitioners for histological examination, the intervention led to a substantial increase in laboratory workload without a significant increase in the detection of serious lesions, suggesting that routine histological examination may not always be necessary (PUBMED:9277607). Similarly, in the context of cholecystectomy specimens, a literature review suggested that systematic histological examination might be omitted in specific low-risk cases, with macroscopic examination in the operating room being a reliable alternative, potentially offering clinical and economic benefits (PUBMED:38103976). In the case of adenoidectomy, a study found that routine histopathological examination of adenoid tissue was dispensable when the clinical history and pre- and intraoperative examinations were consistent with adenoid vegetation, and the macroscopic examination of removed tissue was inconspicuous (PUBMED:25829322). However, histopathological examination is recommended if there is suspicion of deviant pathology. Overall, while histological examination is an important diagnostic tool, the decision to perform it should be based on clinical judgment, the likelihood of serious pathology, and the potential impact on patient management. It may not be necessary in every case, particularly when clinical and macroscopic assessments do not suggest serious pathology, and when considering the associated costs and workload for pathology laboratories.
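A brief arithmetic aside on the cost figure quoted from the joint-replacement study (PUBMED:9531199), since the per-specimen cost makes the trade-off concrete. The division below is our own illustration derived from the numbers in that abstract, not a figure the authors report:

\[ \frac{\$10{,}698.24}{168\ \text{joints}} \approx \$63.68\ \text{per specimen (1996 USD)} \]

In other words, the population-level savings from omitting routine histology accrue at roughly 64 dollars per joint, and that is the amount to be weighed against the consequences of the rare discrepant diagnosis, such as the single treatment-altering (and ultimately incorrect) osteomyelitis diagnosis in that series.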
Instruction: Are urodynamic studies really needed during bladder augmentation follow-up? Abstracts: abstract_id: PUBMED:18774747 Are urodynamic studies really needed during bladder augmentation follow-up? Objective: We assessed clinical and urodynamic outcomes, over a minimum 10-year follow-up period, of neuropathic bladder patients treated with a bladder augmentation (BA) to determine if periodic urodynamic studies are needed. Material And Methods: Thirty-two patients with poorly compliant bladders underwent BA at a mean age of 11 years (2.5-18). Mean follow-up was 12 years (10-14.5) and mean patient age at the end of the study was 22 years (12.2-33). During follow-up all patients were monitored at regular intervals with urinary tract imaging, serum electrolyte and creatinine levels, cystoscopy and urodynamic studies. Preoperative, 1-year post-BA and latest urodynamic study results were compared. Results: Urodynamic studies at 1 year post-BA showed a significant increase in bladder capacity and a decrease in end-filling detrusor pressure compared with preoperative values (396 vs 106 ml; 10 vs 50 cm H2O, P < 0.0001). The increase in bladder capacity was more significant at the end of the study than after 1 year (507.8 vs 396 ml, P < 0.002). Thirteen patients had phasic contractions after 1 year and 11 at the end (not significant, NS), and these contractions were more frequent with colon than with ileum (NS). At the end of follow-up, phasic contraction pressure had decreased while trigger volume had increased (35 vs 28 cm H2O; 247 vs 353 ml, NS). All patients are dry and have normal renal function, except one who had mild renal insufficiency before BA. Conclusion: BA improves bladder capacity and pressure, and these changes are maintained over time (although phasic contractions do not disappear). Repeated urodynamic studies are only necessary when upper urinary tract dilatation or incontinence does not improve. abstract_id: PUBMED:35348841 Improvement of bladder function after bladder augmentation surgery: a report of 26 years of clinical experience. Introduction: To assess the long-term effect of bladder augmentation surgery in patients with spina bifida and to identify risk factors for severe bladder dysfunction requiring bladder augmentation. Methods: A retrospective analysis was performed on 178 patients with spina bifida, 23 of whom underwent bladder augmentation. Surgery outcome was evaluated according to urodynamic assessments at three follow-up time points per patient, up to 120 months postoperatively. The results were compared to the preoperative situation and to the non-operated control group. Bladder function was evaluated using the modified Hostility score. To identify risk factors for bladder dysfunction requiring bladder augmentation, characteristics such as type of spina bifida, lesion level and therapy of bladder dysfunction were analyzed. Results: A high spinal lesion level is a risk factor for requiring bladder augmentation. In the BA group, significantly more thoracic lesions were found than in the NBA group (BA: 26.1%, NBA: 8.4%; p = 0.021). With bladder augmentation surgery, the modified Hostility score decreased from a preoperative median value of 4.3 ± 1.4 to 1.6 ± 1.0 at the third postoperative follow-up (FU3 = 61-120 months after surgery). In the reference group, the score of the last urological assessment was 2.0 ± 1.5.
The age at which clean intermittent catheterization or anticholinergic medication started had no significant influence on the decision to perform bladder augmentation. Discussion/conclusion: Spina bifida patients with bladder augmentation had a significant improvement in bladder function even at long-term follow-up. A high level of spinal lesion was a predisposing factor for requiring a bladder augmentation. abstract_id: PUBMED:26391472 Augmentation cystoplasty: Urodynamic and metabolic outcomes at 10-year follow-up. Objective: To review the urodynamic outcomes, renal function and metabolic complications after augmentation cystoplasty with at least 10 years of follow-up. Methods: Augmentation cystoplasties performed in two tertiary referral centers from 1995 to 2004 were reviewed. The postoperative course of 10 years or more was studied by review of the clinical notes, urodynamic reports and laboratory results. Results: A total of 40 patients were included in this study. The mean age at surgery was 43 years, and 47.5% of patients were female. Median follow-up was 13 years. Bladder capacity significantly increased from 283 ± 151 to 492 ± 123 mL (P < 0.01), with a percentage change of +130%. The compliance of the bladder was increased by 87%, and detrusor overactivity decreased by 54.2%. There were no significant changes in preoperative and postoperative estimated glomerular filtration rate (68.3 mL/min vs. 76.6 mL/min, P = 0.798). Three patients (7.5%) had more than one episode of symptomatic urinary tract infection per year. Conclusion: The present study confirms the effectiveness of augmentation cystoplasty in increasing bladder capacity, improving bladder compliance and reducing detrusor overactivity. The preservation of renal function and the low metabolic complication rate provide solid evidence for carrying out this time-honored procedure in patients with neurogenic or non-neurogenic bladder dysfunction. abstract_id: PUBMED:26707251 Urodynamic changes after bladder augmentation surgery in paediatric patients with myelomeningocele due to neurogenic bladder. Background: Augmentation cystoplasty is a successful surgical procedure for the management of neurogenic bladder in children in order to improve urodynamic parameters. The aim of this article is to describe urodynamic changes after augmentation cystoplasty in children with myelomeningocele. Materials And Methods: A descriptive cross-sectional study including children aged 8-16 years with a myelomeningocele who underwent augmentation cystoplasty with sigmoid colon segments for a neurogenic bladder during the years 2003-2013. A urodynamic study was conducted before and after the surgical procedure. Non-probabilistic sample of consecutive cases. Descriptive statistics with frequencies and percentages, medians, and ranges. Inferential intra-group comparison with the Wilcoxon test and inter-group with the Mann-Whitney U. SPSS 20.0 statistical package. Results: The study included 50 patients, of whom 25 were male and 25 were female, with a median age of 12 years (range, 6-15 years). Bladder capacity improved from 52.8% to 95.9% (p < 0.001), uninhibited contractions 1.4 vs. 1.8, contraction intensity 47 vs. 8.5 (p < 0.001), mean pre-surgical and post-surgical filling pressure 40.8 cm H2O and 11.0 cm H2O, respectively (p < 0.001), mean emptying pressure 48.5 vs. 3.6 cm H2O (p < 0.001), and bladder accommodation 4.6 vs. 41.3 cm H2O (p < 0.001).
Conclusions: Augmentation cystoplasty with sigmoid colon significantly improved urodynamic parameters, such as bladder accommodation and filling pressure, in children with myelomeningocele-associated neurogenic bladder. abstract_id: PUBMED:17985371 Does the type of bladder augmentation influence the resolution of pre-existing vesicoureteral reflux? Urodynamic studies. Aims: The effect of the type of bladder augmentation on pre-existing vesicoureteral reflux (VUR) was assessed. The effects of urodynamic changes on the resolution of VUR following augmentation cystoplasty performed with various gastrointestinal segments were examined. It was queried whether elimination of the high-pressure bladder is sufficient to resolve pre-existing reflux. Methods: A retrospective record review of patients who underwent bladder augmentation between 1987 and 2004. Patients were divided into two groups. Group I included patients who had a simultaneous augmentation and ureteral reimplantation. Group II included patients with reflux in whom only a bladder augmentation was performed. Pre- and post-augmentation urodynamic results were compared in both groups. The outcome of VUR and the role of various gastrointestinal (GI) segments on the resolution of VUR were studied. Results: Sixty-three patients underwent bladder augmentation during the study period. Twenty-six of them had VUR before augmentation. There were 10 patients in Group I and 16 patients in Group II. In Group I, VUR ceased in all patients, while in Group II, VUR resolved in 14 patients and persisted in two patients. Small and large bowel segments used for augmentation had no effect on the resolution of VUR, but the results of gastrocystoplasties were less favorable. Urodynamically there was no significant difference between the various augmentation cystoplasties. Conclusions: Bladder augmentation alone without simultaneous antireflux repair is usually sufficient for the resolution of pre-existing reflux. The various GI segments used for augmentation have no effect on urodynamic results and the resolution of VUR. abstract_id: PUBMED:3656556 Small bowel augmentation in children with neurogenic bladder: an initial report of urodynamic findings. Augmentation cystoplasty is a useful adjunct in the treatment of the neurogenic bladder. After preoperative urodynamic and clinical assessment, 10 children with neurogenic bladders underwent small bowel augmentation. Presenting clinical problems included incontinence and hydronephrosis. Urodynamic studies after small bowel augmentation demonstrated that bladder capacity increased by an average of 184 per cent. Maximum intravesical pressures diminished considerably compared to preoperative values and no patient suffered significant uninhibited contractions. Nine children are totally dry and 1 has stress incontinence. Our initial findings demonstrate the efficacy of small bowel augmentation in the treatment of lower urinary tract neurogenic dysfunction owing primarily to alterations in detrusor function. abstract_id: PUBMED:32874978 Surgical outcomes of bladder augmentation: A comparison of three different augmentation procedures. Background: Augmentation cystoplasty is indispensable in many pediatric diseases, especially neurogenic bladder. Various methods and materials are used to augment the bladder, and these methods are associated with different shortcomings and complications.
Aim: The present study reported the mid-term outcomes of patients undergoing various bladder augmentation procedures in a single institution, and assessed whether seromuscular cystoplasty lined with urothelium (SCLU) provided better urodynamic results than auto-augmentation (AA). Methods: A retrospective review of 96 patients undergoing various augmentation methods between 2003 and 2018 was performed. The patients were divided into three groups according to the type of augmentation, and their outcomes were compared. All patients developed neurogenic bladder due to myelomeningocele or sacrococcygeal teratoma. The clinical data of all patients were collected. Results: The mean ages at surgery in the three groups (standard cystoplasty [SC], SCLU, AA) were 10.8, 7.5, and 4.8 years, respectively, with mean follow-ups of 36, 61, and 36 months, respectively. The mean preoperative and postoperative bladder capacities of the SC, SCLU, and AA groups were 174 ± 11.7 vs. 387 ± 13.7 (P < 0.0001), 165 ± 12.2 vs. 240 ± 14.7 (P = 0.0002), and 138 ± 16.7 vs. 181 ± 9.9 (P = 0.0360), respectively. Compared with the AA group, the SCLU procedure did not have better postoperative urodynamic parameters. Incontinence was reduced in most patients. The mean number of clean intermittent catheterizations per day in the SC, SCLU, and AA groups was 5.6, 7.8, and 8.2, respectively. The main complications of the SC group were recurrent urinary tract infections (8%) and bladder calculi (6%). Re-augmentation was done in patients in the SCLU (8) and AA (3) groups. Conclusion: SC provided sufficient bladder capacity and improved compliance with acceptable complications. After AA and SCLU, the patients acquired limited increases in bladder capacity and compliance, with a high rate of re-augmentation. Compared with AA, SCLU did not yield better postoperative urodynamic parameters. abstract_id: PUBMED:12478134 Long-term urodynamics followup of bladder augmentation for neurogenic bladder. Purpose: Augmentation enterocystoplasty is well tolerated by patients with neurogenic bladder in whom conservative therapy has failed. However, few studies exist on long-term urodynamic evaluation of these patients. We assessed the clinical and urodynamic outcomes of patients with neurogenic bladder treated with augmentation enterocystoplasty with at least 4 years of followup. Materials And Methods: A total of 26 patients with neurogenic voiding dysfunction underwent augmentation enterocystoplasty alone or in conjunction with various continence or antireflux techniques. Clinical outcomes regarding incontinence, medications, catheterization schedule, subsequent interventions, bowel function and patient satisfaction were addressed. Urodynamic evaluation was performed to assess the long-term durability of bladder augmentation. Results: Mean followup was 8.0 years (range 4 to 13). All but 1 patient (96%) in our series had near or complete resolution of urinary incontinence. Mean total bladder capacity ± SD increased from 201 ± 106 to 615 ± 204 ml (p < 0.001) and mean maximum detrusor pressure decreased from 81 ± 43 to 20 ± 12 cm H2O (p < 0.01). Mean interval between catheterizations was 5 hours, with volumes ranging from 314 to 743 ml. Only 2 patients (8%) needed a low dose of oxybutynin postoperatively to maintain continence consistently. Of the 26 patients 23 (88%) reported no significant change in bowel function and nearly all patients expressed extreme satisfaction with urological management.
A subsequent urological procedure was required in 12 patients (46%) at a mean of 4.4 years after initial surgery. Conclusions: Bladder augmentation provides durable clinical and urodynamic improvement for patients with neurogenic bladder dysfunction refractory to conservative therapy. Furthermore, there is a high level of patient satisfaction with bladder augmentation. abstract_id: PUBMED:18631885 Bladder augmentation and artificial sphincter implantation: urodynamic behavior and effects on continence. Objective: To quantify changes in bladder capacity, pressure and compliance after isolated bladder augmentation or augmentation associated with implantation of an artificial sphincter, and to compare the various types of augmentation. Patients And Methods: Preoperative and postoperative urodynamic studies were performed in a group of 38 patients (18 males and 20 females; age range 2-19 years) who underwent a type of bladder augmentation. Results: The bladder improved in capacity in all patients (mean values: initial 137 ml, final 336 ml, individual increase 229 ml; 434%) except two, in which the augmentation was done with ureter. The mean pressure improved (initial 32 cm of H2O, final 14, decrease per patient 18 cm of H2O; 49%). The progressively increasing compliance curve typical of hyperreflexia and poor compliance, present in 70% of the cases preoperatively, improved in 78% of cases postoperatively, although there were several different patterns. Urodynamic behavior was analyzed with regard to the tissue used for augmentation (ileum, ureter or sigmoid colon). In the sigmoid colon group, there were no significant differences in the urodynamic behavior of the bladder neo-reservoir in relation to the configuration used. Conclusion: With bladder augmentation comes an increase in bladder capacity, a reduction in pressure, and an improvement in compliance and continence. The level of change in capacity, pressure and compliance varies with the tissue used and the length and caliber of the insert. When the procedure is carried out using sigmoid colon tissue, there are no noteworthy differences among the various possible configurations. abstract_id: PUBMED:35051024 Video-Urodynamic Characteristics and Predictors of Switching from Botulinum Neurotoxin A Injection to Augmentation Enterocystoplasty in Spinal Cord Injury Patients. Botulinum neurotoxin type A (BoNT-A) injection and augmentation enterocystoplasty (AE) are alternative and effective management strategies for neurogenic detrusor overactivity (NDO) refractory to pharmacotherapy. A great majority of patients with spinal cord injury (SCI) may, however, prefer BoNT-A injections to AE, due to their less invasive character. In this study we evaluated the influence of various video-urodynamic study (VUDS) parameters in SCI patients who continuously received repeat BoNT-A detrusor injections or switched to AE to improve their bladder conditions. We compared the changes in the urodynamic parameters before and after each mode of treatment. In this retrospective study, all SCI patients with refractory NDO who had received at least one BoNT-A injection were enrolled. VUDS was performed before and after both BoNT-A injection and AE. All of the urodynamic parameters of storage and micturition, including the bladder capacity at every sensation, maximal flow rate (Qmax), post-voiding residual volume, detrusor pressure at Qmax, and bladder contractility index, were recorded.
A total of 126 patients, including 46 women and 80 men, with a mean age of 41.8 ± 13.1 years, were recruited for this study. All of the patients receiving either BoNT-A injection or AE had a statistically significant increase of bladder capacity at every time-point during filling and a decrease in detrusor pressure at Qmax during voiding. Patients who switched from BoNT-A to AE had greater improvements in their urodynamic parameters when compared with those who continued with BoNT-A injections. Accordingly, SCI patients receiving BoNT-A injections but experiencing few improvements in their urodynamic parameters should consider switching to AE to achieve a better storage function and bladder capacity. Answer: Urodynamic studies are used to assess the function of the bladder and urinary tract by measuring various aspects such as bladder capacity, pressure, and compliance. The necessity of urodynamic studies during bladder augmentation follow-up is debated, and the decision may depend on individual patient outcomes and the presence of specific symptoms or complications. According to the studies provided, bladder augmentation (BA) has been shown to improve bladder capacity and pressure, and these changes are maintained over time. However, phasic contractions do not disappear after BA (PUBMED:18774747). The study suggests that repeated urodynamic studies are only necessary when upper urinary tract dilatation or incontinence does not improve, indicating that routine urodynamic studies may not be required for all patients during follow-up. Another study confirms the effectiveness of augmentation cystoplasty in increasing bladder capacity, improving bladder compliance, and reducing detrusor overactivity, with preservation of renal function and low metabolic complication rates. This evidence supports the procedure's long-term benefits in patients with neurogenic or non-neurogenic bladder dysfunction (PUBMED:26391472). In pediatric patients with myelomeningocele, augmentation cystoplasty with sigmoid colon significantly improved urodynamic parameters such as bladder accommodation and filling pressure (PUBMED:26707251). Additionally, bladder augmentation alone without simultaneous antireflux repair is usually sufficient for the resolution of pre-existing vesicoureteral reflux (VUR), and the various gastrointestinal segments used for augmentation have no effect on urodynamic results and the resolution of VUR (PUBMED:17985371). Overall, while urodynamic studies can provide valuable information on the functional outcomes of bladder augmentation, their routine use during follow-up may not be necessary for all patients. Instead, they may be more appropriately utilized in cases where there is a lack of improvement in symptoms or complications such as upper urinary tract dilatation or persistent incontinence (PUBMED:18774747). It is important to consider individual patient circumstances and the clinical judgment of the healthcare provider when determining the need for urodynamic studies during follow-up after bladder augmentation.
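Because several of the abstracts above report bladder capacity and detrusor pressure separately, it may help to recall how the compliance figures they quote are derived. The definition below is the standard urodynamic one (change in volume per unit change in detrusor pressure during filling, expressed in mL/cm H2O); the worked numbers are hypothetical and are not taken from any of the cited studies:

\[ C = \frac{\Delta V}{\Delta p_{\mathrm{det}}} \]

For example, a reservoir that accepts 400 mL of filling with a 10 cm H2O rise in detrusor pressure has a compliance of 400/10 = 40 mL/cm H2O, whereas the same 400 mL against a 50 cm H2O rise gives only 8 mL/cm H2O, the poorly compliant, high-pressure pattern that augmentation is intended to correct. This is why the paired capacity and pressure changes reported above (e.g., 396 vs 106 mL and 10 vs 50 cm H2O in PUBMED:18774747) translate into large compliance gains.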
Instruction: Is alcohol a risk factor for liver cirrhosis in HBsAg and anti-HCV negative subjects? Abstracts: abstract_id: PUBMED:9314123 Is alcohol a risk factor for liver cirrhosis in HBsAg and anti-HCV negative subjects? Collaborative Groups for the Study of Liver Diseases in Italy. Background/aims: In order to evaluate the association between alcohol intake and the risk of liver cirrhosis in the absence of B and C hepatitis viruses, we analyzed data from three hospital-based case-control studies performed in various Italian areas. Methods: From the case and control series we excluded HBsAg and/or anti-HCV positive patients. Cases were 221 cirrhotic patients admitted for the first time to hospital for liver decompensation. Controls were 614 patients admitted to the same hospitals during the same period as the cases for acute diseases unrelated to alcohol. Alcohol consumption was expressed as lifetime daily alcohol intake (LDAI). Results: We found a dose-effect relationship between LDAI and the risk of liver cirrhosis (LC). Considering the extreme LDAI categories (LDAI = 0 g: lifetime teetotallers, and LDAI ≥ 100 g), the LC odds ratio (OR) increased from 1.0 (reference category) to 44.7 (95% confidence interval [CI]: 20.0-99.9). An increased risk of LC associated with the female gender, independent of alcohol consumption, was also observed (OR = 2.9; 95% CI: 1.8-4.6). Conclusions: Alcohol intake acts as a risk factor for symptomatic liver cirrhosis also in the absence of HBV and/or HCV infection. Besides alcohol and viruses, some unknown gender-related factors might be involved in the occurrence of the disease. abstract_id: PUBMED:21977168 Cirrhosis and Chronic Viral Hepatitis as Risk Factors for Hepatocellular Carcinoma: Romanian Single-clinic Experience. Introduction: Hepatocellular carcinoma (HCC) represents the fifth most common cancer worldwide, while being the third leading cause of death by cancer. The primary risk factor for HCC seems to be liver cirrhosis. A large majority of these patients have a history of viral hepatitis. Materials And Methods: We selected a study lot consisting of 244 patients diagnosed with HCC, admitted between 2006 and October 2009 in the Emergency County Hospital of Craiova, Romania, along with an age- and gender-matched control group consisting of patients with no history of HCC or other malignancies. We interviewed all subjects regarding their alcohol consumption and background environment. All subjects underwent hepatitis B surface antigen (HBsAg) and anti-HCV antibody serological determinations. Results: The study group consisted of 148 males and 96 females. Liver cirrhosis (LC) was present in 84% of the study lot; 10% had associated viral hepatitis B (HBV) and 6% viral hepatitis C (HCV) with no signs of LC. We found LC to be an important risk factor for HCC (RR 6.53, 95% CI 3.18-13.38). The RR and 95% CI of HCC were 4.51 (2.48-8.21) for HBsAg positivity. We noticed a strong correlation (Chi-square test, p < 0.001) between the rural environment and the association with LC. HBV was also more present in patients coming from rural areas (p < 0.01). Alcohol intake was present in 89% of the whole lot, and was more correlated with the presence of LC as well as with HBsAg positivity (RR 9.165, 95% CI 4.43-18.92). Conclusion: Cirrhosis proved to be the primary risk factor for HCC. We underline the fact that HCC was found to be directly associated with viral hepatitis, without evident LC.
Further studies are needed in order to establish if intensified HCC screening, especially in rural areas, is required in patients with newly diagnosed viral hepatitis. The increased prevalence of HBV infections might encourage HBV vaccination as an efficient tool to prevent HCC. abstract_id: PUBMED:7684812 Comparison of HBsAg, anti-HCV and anti-HDV positivity in diverse groups. Three hundred eighty-three serum samples obtained from various patient groups and health care personnel were tested for HBsAg, anti-HCV and anti-HDV by the ELISA technique. Anti-HCV antibodies were found in 32.4% of chronic haemodialysis patients, in 23% of the patients with chronic viral hepatitis and cirrhosis, and in 8.3% of health care personnel, and were less frequently detected in other groups. Anti-HDV antibodies were found in 23% of the patients with chronic viral hepatitis and cirrhosis, 2.7% of blood donors, 8.1% of chronic haemodialysis patients and 1.3% of the patients who had malignancy. Anti-HDV antibodies were not detected in other groups. abstract_id: PUBMED:17697013 HCV infection is a risk factor for gallstone disease in liver cirrhosis: an Italian epidemiological survey. We assessed the prevalence of gallbladder disease (i.e. gallstones plus cholecystectomy) among patients with liver disease and its association with the severity and aetiology of hepatic injury. Subjects, referred to 79 Italian hospitals, were enrolled over a 6-month period. The independent effect of the severity and aetiology of liver disease on gallstone disease prevalence was assessed by multiple logistic regression analysis. Overall, 4867 subjects tested anti-hepatitis C virus (HCV) positive alone, 839 were hepatitis B virus surface antigen (HBsAg) positive alone, and 652 had an excessive alcohol intake. The prevalence of gallstone disease was 23.3% in anti-HCV-positive patients, 12.4% in HBsAg-positive patients and 24.2% in subjects reporting excessive alcohol intake. Gallstone disease prevalence increased with age in each aetiological category. The proportion of patients with gallstone disease who had a cholecystectomy was the highest in HCV+ subjects. After adjusting for the confounding effect of age and body mass index, compared with patients with less severe liver disease, subjects with HCV-related cirrhosis, but not those with alcohol-related cirrhosis, were more likely to have gallstone disease. Subjects with HCV-related cirrhosis (OR 2.13, 95% CI: 1.38-3.26) were more likely to have gallstone disease when compared with those with HBV-related cirrhosis. HCV infection is a risk factor for gallstone disease. In Italy, the high prevalence of HCV infection among cirrhotic patients has important implications, as cholecystectomy in these subjects is associated with a high risk of morbidity and mortality. abstract_id: PUBMED:27472708 Association between anti-HBc positivity and hepatocellular carcinoma in HBsAg-negative subjects with chronic liver disease: A meta-analysis.
A meta-analysis was performed to ascertain to what extent hepatitis B surface antigen (HBsAg)-negative/anti-hepatitis B core (anti-HBc)-positive subjects with chronic liver disease are at a higher risk of developing hepatocellular carcinoma (HCC) than the anti-HBc-negative. All studies included had to fulfill the following characteristics and inclusion criteria: they investigated the relationship between HBsAg-negative/anti-HBc-positive serology and the occurrence of HCC; whether a case-control or cohort study, they provided relative risks (RRs) or odds ratios (ORs) and 95% confidence intervals (CIs); they were available as a full text written in English; and they were published and indexed up to April 2015. Twenty-six original studies met the inclusion criteria, allowing a meta-analysis on 44,553 patients. The risk of HCC among the 9986 anti-HBc-positive subjects was 67% higher than in the 34,567 anti-HBc-negative (95% CI = 1.44-1.95, P < 0.0001). The results were similar when groups of patients with a different stage of liver disease (patients with chronic liver disease, patients with cirrhosis), with different ethnicity (Asian and non-Asian) and etiology (HCV and non-HCV) were considered. The risk of HCC was significantly higher in the 651 anti-HBs/anti-HBc-positive patients (RR = 1.36; 95% CI = 1.17-1.58, P = 0.03) and in the 595 anti-HBs-negative/anti-HBc-positive subjects (RR = 2.15; 95% CI = 1.58-2.92, P < 0.0001) than in the 1242 anti-HBs/anti-HBc negative. However, the RR from 8 studies indicated that the risk of HCC was 35% lower among the anti-HBs/anti-HBc-positive subjects compared to the anti-HBs-negative/anti-HBc-positive (RR = 0.65; 95% CI = 0.52-0.8, P < 0.0001). This meta-analysis shows that in HBsAg-negative subjects with chronic liver disease, anti-HBc positivity is strongly associated with the presence of HCC, an association observed in all subgroups according to the stage of the disease, etiology, and ethnicity. abstract_id: PUBMED:12793707 Analysis of risk factors for hepatocellular carcinoma that is negative for hepatitis B surface antigen (HBsAg). Objective: To clarify risk factors for hepatocellular carcinoma (HCC) other than hepatitis B surface antigen (HBsAg). Patients And Methods: We investigated serum HBV-DNA and other factors in 146 patients with liver cirrhosis (LC) or HCC who were HBsAg negative. We analyzed the clinical background of the patients, the status of hepatitis B (HBV) viral markers and platelet count, as well as the presence of an HBV-DNA fragment by PCR, and elucidated risk factors for HCC generation using a logistic regression model. Results: Among ten factors, we determined that four represented a significant risk for HBsAg-negative HCC: male gender, total alcohol consumption, total cigarettes smoked, and the presence of an HBV-DNA fragment. Multivariate analysis showed that among the four factors, the HBV-DNA fragment was an independent factor associated with HCC. Conclusion: The presence of an HBV-DNA fragment, irrespective of the status of antibodies to either HBsAg (anti-HBs) or hepatitis B core antigen (anti-HBc), is a pivotal factor associated with the development of HCC. abstract_id: PUBMED:30361062 The burden of HBV infection in HCV patients in Italy and the risk of reactivation under DAA therapy. Background: There is increasing awareness of HBV reactivation in HCV-RNA-positive/HBV-coinfected patients with chronic liver disease (CLD) treated with oral direct-acting antivirals (DAAs).
Aim: To provide figures on the prevalence of HBV markers in HCV-RNA-positive subjects in Italy, where these findings are lacking. Methods: All subjects aged ≥18 years with CLD consecutively referring to Italian liver units located throughout the country were prospectively enrolled in two national surveys in 2001 and 2014. Results: The total number of HCV-RNA-positive cases was 6984; 356 (5.1%) subjects vaccinated against HBV were excluded. A total of 6628 cases were evaluated. The prevalence rates of HBsAg, isolated anti-HBc and anti-HBc/anti-HBs positivity were 2.9%, 8.1% and 14.7%, respectively. Among the estimated one million HCV-RNA-positive subjects in Italy, a substantial number of subjects are at risk of HBV reactivation due to DAA therapy. The prevalence of liver cirrhosis was higher than that of CLD in HBsAg-positive subjects (4.4% vs. 2.6%, p < 0.01) but not in those positive for other HBV markers. Conclusions: These findings outline the burden of HBV markers among HCV-RNA-positive subjects in Italy, where in 2017 reimbursement for DAA therapy by the National Health System became universal for all patients with chronic HCV infection. HBV vaccination coverage should be greatly extended, since nearly two thirds of subjects in this study tested negative for any HBV marker. abstract_id: PUBMED:15602732 Liver histology in patients with HBsAg negative anti-HBc and anti-HCV positive chronic hepatitis. The liver histology of 68 consecutive anti-HCV/HCV-RNA positive chronic hepatitis patients who were HBsAg/anti-HBs negative, anti-HBc positive (Case bC group) was compared with that of 68 anti-HCV/HCV-RNA positive chronic hepatitis patients who were HBsAg/anti-HBc negative (control C group). The patients were pair-matched by age (±5 years), sex, and risk factors for the acquisition of parenteral infection. Case bC group showed a significantly higher mean fibrosis score (2.3 ± 1.1) than control C group (1.5 ± 1.1, P < 0.001) and more histological evidence of cirrhosis (22% vs. 7.3%, P < 0.05). In addition, the patients in Case bC group showed more severe inflammation of the portal tracts (3.5 ± 0.8 vs. 3.0 ± 1.1, P < 0.005), and there was a higher prevalence of patients with rhomboid-shaped hepatocytes (26.4% vs. 2.7%, P < 0.005), acidophilic bodies (33.8% vs. 1.4%, P < 0.0001), sinusoidal inflammation (29.4% vs. 10.3%, P < 0.01), lymphoid follicles in the portal tracts (72% vs. 44.1%, P < 0.05), Kupffer cell proliferation (29.4% vs. 11.8%, P < 0.05), bile duct damage (44.1% vs. 10.3%, P < 0.0001), and ductular proliferation (30.9% vs. 2.7%, P < 0.001) than in control C group. No difference in these histological features was observed between HBV-DNA negative and positive patients in Case bC group. The data suggest that anti-HBc positive patients with chronic HCV infection have a significantly higher degree of liver fibrosis, and that hepatocellular apoptosis, bile duct damage, and ductular proliferation correlate with the presence of this antibody in the serum. abstract_id: PUBMED:33447635 HBcAb Positivity Increases the Risk of Severe Hepatic Fibrosis Development in HIV/HCV-Positive Subjects From the ICONA Italian Cohort of HIV-Infected Patients. Background: The aim of this study was to investigate the impact of anti-HBc (HBcAb) positivity on the progression of liver fibrosis (Fibrosis-4 score >3.25) in the Italian cohort of HIV-infected individuals naïve to antiretroviral treatment (ICONA).
Methods: All patients with FIB-4 <3.25 at baseline were evaluated prospectively: 6966 people with HIV (PWH) were screened and classified based on hepatitis B virus (HBV) and hepatitis C virus (HCV) serology. Results: Patients who were HBcAb+/HCV-/HBs antigen (HBsAg)- and HCV+/HBcAb+/HBsAg- or HBsAg+/HBcAb+/HCV- had CD4+ cell counts below the nadir and a significantly higher prevalence of AIDS diagnosis at baseline than the other groups (P < .0001). A Cox regression model adjusted for age, HIV transmission mode, country of birth, and alcohol consumption showed a higher hazard ratio (HR) of progression to FIB-4 >3.25 in HCV+/HBcAb+/HBsAg- patients (HR, 7.2; 95% CI, 3.8-13.64). Conclusions: HBcAb positivity contributes to liver damage in HIV+/HCV+/HBcAb+/HBsAg- subjects. Careful monitoring for signs of previous HBV infection is needed in these patients. abstract_id: PUBMED:14530785 Habitual betel quid chewing as a risk factor for cirrhosis: a case-control study. Betel quid chewing, part of traditional Taiwanese culture, is common in 10%-20% of the human population worldwide. In this case-control study we assessed the independent and interactive roles of habitual betel quid chewing and chronic hepatitis B virus (HBV) and hepatitis C virus (HCV) infection on the risk of cirrhosis. Subjects enrolled included 210 pairs of sex- and age-matched cirrhotic patients and healthy controls. Information on risk factors was obtained through serologic examination of hepatitis B surface antigen (HBsAg) and antibodies to hepatitis C virus (anti-HCV), and a standardized personal interview with a structured questionnaire. Univariate analysis indicated that betel quid chewing, HBsAg+, anti-HCV+, alcohol drinking, and smoking are significant risk factors for cirrhosis. Multivariate analysis indicated that betel quid chewing (odds ratio [OR], 3.56), HBsAg (OR 20.37), and anti-HCV (OR 31.43) are independent risk factors for cirrhosis. Most betel quid chewers habitually drink alcohol. Although our analysis indicates that betel quid chewing acts independently from alcohol as a risk factor for cirrhosis, the confounding effect of alcohol cannot be excluded entirely by our study. There was an additive effect of the interaction between betel quid chewing and the presence of either HBsAg or anti-HCV. Moreover, a higher risk of cirrhosis was associated with a longer duration of betel quid chewing and a greater amount of betel quid consumed (each p for trend <0.0001). In conclusion, betel quid chewing appears to be an independent risk factor for cirrhosis. There is an additive interaction between betel quid chewing and chronic HBV/HCV infection. Answer: Yes, alcohol is a risk factor for liver cirrhosis in HBsAg and anti-HCV negative subjects. A study conducted by the Collaborative Groups for the Study of Liver Diseases in Italy found a dose-effect relationship between lifetime daily alcohol intake (LDAI) and the risk of liver cirrhosis (LC). The study excluded patients who were positive for HBsAg and/or anti-HCV and found that the odds ratio (OR) for liver cirrhosis increased significantly with higher alcohol intake, reaching an OR of 44.7 for those with an LDAI of 100 grams or more. This suggests that alcohol intake acts as a risk factor for symptomatic liver cirrhosis even in the absence of HBV and/or HCV infection (PUBMED:9314123).
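For readers less familiar with case-control statistics, the odds ratios quoted in this answer come from a 2x2 exposure-by-disease table, with the confidence interval conventionally computed on the log scale (Woolf's method). The formulas below are standard epidemiology; the cell labels a-d are generic placeholders, not counts from the cited study:

\[ \mathrm{OR} = \frac{a\,d}{b\,c}, \qquad 95\%\ \mathrm{CI} = \exp\!\left( \ln \mathrm{OR} \pm 1.96 \sqrt{\tfrac{1}{a} + \tfrac{1}{b} + \tfrac{1}{c} + \tfrac{1}{d}} \right) \]

where a and b are the exposed and unexposed cases and c and d are the exposed and unexposed controls. An interval built this way is symmetric on the log scale, which matches the figures reported for the heaviest-intake category: 44.7 is approximately the geometric mean of the interval bounds, since the square root of 20.0 x 99.9 is about 44.7.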
Instruction: Are quality of family life and disease severity related in childhood atopic dermatitis? Abstracts: abstract_id: PUBMED:12428837 Are quality of family life and disease severity related in childhood atopic dermatitis? Background: Atopic dermatitis (AD) can be traumatizing to family life. Little is known about the relationship between quality of life in AD and disease severity. Objective: To document family quality of life and relate this to severity of AD in children, for a 6-month period from a given point in time. Study Design: These data are part of a longitudinal study conducted in two parts of the UK to investigate risk factors for AD severity and its impact on quality of life. Subjects and methods: The targeted population comprised children with AD aged 5-10 years in a primary-care setting. The general practitioners identified potential subjects and the UK diagnostic criteria for AD were used to verify the diagnosis. Both the children and their parents were interviewed. Eczema severity was assessed using a modified form of the SCORAD (SCORing Atopic Dermatitis) Index (SCORAD-D) from which parents' scores of itching and sleep loss were excluded. The quality of family life was quantified by the Dermatitis Family Impact (DFI) questionnaire. These two parameters were evaluated on two occasions 6 months apart. Analysis: Multiple regression analysis was used to investigate the relationship between the quality of family life and the severity of the AD in the children, at a specific point in time and over the following 6-month period. Results: Of the 116 children attending the first visit, mean age 8 years, 106 attended the second visit (91%) and were included in the analysis. Quality of family life was shown to be significantly affected in 48 (45%) cases at the first visit and 38 (36%) cases at the second visit. The initial means of the DFI and SCORAD-D were 2.4 and 8.2, respectively. Six months later the mean final DFI and SCORAD-D were 1.9 and 7.7, respectively. Using multiple regression on the first and second visits, each unit increase in SCORAD-D was associated with a 0.21 [95% confidence interval (CI) 0.06-0.37, P = 0.008] and a 0.37 (95% CI 0.15-0.59, P = 0.001) unit increase in quality of family life, respectively. This relationship remained significant even after adjustment for potential confounders (black skin, social class, sex, child's age, family size and location): each unit increase in SCORAD-D led to a 0.25 unit (95% CI 0.11-0.4, P = 0.001) and a 0.23 unit (95% CI 0.05-0.42, P = 0.014) increase in DFI on the first and second visits, respectively. Changes in the DFI scores were significantly related to changes in the SCORAD-D scores (regression coefficient: 0.17; 95% CI 0.06-0.29, P = 0.002). Conclusions: We show that quality of family life is related to the severity of AD in children. This confirms the importance of parental assessment of the impact of the disease in the management of AD, because the disease affects the entire family. Also, these results show the response of the DFI to change predictably with disease severity. This may imply that the DFI questionnaire could be used as an extra measure of outcome in everyday clinical practice as well as in research studies. abstract_id: PUBMED:14501437 Relationship between quality of life and disease severity in atopic dermatitis/eczema syndrome during childhood. Purpose Of Review: Atopic dermatitis is a chronic relapsing itchy disease that has an effect on children and families' quality of life.
The purpose of this review is to give workers in the field an easy-to-read, up-to-date and accessible summary of recent advances in the relationship between quality of life and disease severity in atopic dermatitis/eczema syndrome during childhood. This review, however, concentrates on original articles that have been published since 1 April 2002 and report a relationship between disease severity and quality of life scores in childhood atopic dermatitis. Recent Findings: It has been shown that childhood atopic dermatitis has a major impact on children and families' quality of life. Researchers also reported that scores on quality of life and disease severity were positively correlated at a point in time and over a period of time. They also showed that children's quality of life measures improved after medical intervention. Summary: Despite the limitations of the studies reviewed, it is logical to conclude that childhood atopic dermatitis has affected children and families' quality of life. These studies have also reported that disease severity and quality of life are correlated in childhood atopic dermatitis at a point in time and over time. However, these aspects should be studied further in order to evaluate the long-term impact of the disease and its variation during childhood. Disease severity and quality of life should be measured independently. abstract_id: PUBMED:16893440 Quality of life and childhood atopic dermatitis: the misery of living with childhood eczema. The misery of living with atopic eczema (syn. dermatitis, AD) cannot be overstated, for it may have a profoundly negative effect on the health-related quality of life (HRQoL) of children and their family unit in many cases. As it is one of the commonest chronic relapsing childhood dermatoses (UK lifetime prevalence 16-20% by 20 years), with increasing worldwide prevalence, this has major social and financial implications for individuals, healthcare providers and society as a whole. This review explores the impact of AD on the lives of children and their family units and the use of some of the recently developed HRQoL measures, which have enabled investigation and categorisation of the physical, psychological and psycho-social effects of childhood eczema across all aspects of life. These effects include symptoms of itching and soreness, which cause sleeplessness in over 60%. Sleep deprivation leads to tiredness, mood changes and impaired psychosocial functioning of the child and family, particularly at school and work. Embarrassment, comments, teasing and bullying frequently cause social isolation and may lead to depression or school avoidance. The child's lifestyle is often limited, particularly in respect to clothing, holidays, staying with friends, owning pets, swimming or the ability to play or do sports. Restriction of normal family life, difficulties with complicated treatment regimes and increased work in caring for a child with eczema lead to parental exhaustion and feelings of hopelessness, guilt, anger and depression. The hidden costs involved in eczema management can be significant and have particular impact on lower income families. The impairment of quality of life caused by childhood eczema has been shown to be greater than or equal to that of other common childhood diseases such as asthma and diabetes, emphasising the importance of eczema as a major chronic childhood disease.
HRQoL measures are proving to be valuable tools for use in the clinical setting, as outcome measures for pharmaceutical studies, and for health economics and audit purposes. It is therefore recommended that, in future, they should be used in conjunction with objective measures of severity as part of the assessment process of a child with atopic eczema. Lack of information on eczema and treatments heightens parental anxiety. Education of all individuals involved in the care of children with eczema is fundamental in the management of AD, and it is essential to provide simple, clear, unambiguous information on treatment and disease management in order to reduce the negative impact on HRQoL. abstract_id: PUBMED:21078107 The impact of childhood atopic dermatitis on the patients' family. Atopic dermatitis is a common childhood disease that impairs quality of life. The study aimed to clarify the impact of childhood atopic dermatitis on family life and to correlate severity of atopic dermatitis with family life. A cross-sectional survey was conducted in the Qassim Region of Saudi Arabia over a period of 4 months extending from April to July 2009. The parents of children with atopic dermatitis were asked through a validated "Dermatitis Family Impact Questionnaire" about the impact of the disease on their life. For each questionnaire, a total score of 0 to 5 is considered as normal quality of life, 6 to 10 as low, 11 to 20 as moderate and >20 as high alteration in quality of life. The severity of the disease was evaluated using the SCORAD index. A total of 447 children with atopic dermatitis were included in the study. Their mean age was 65.9 months. Males constituted 57% of the patients. The mean score for quality of life in affected families was 13.9 (minimum 2, maximum 25). Based on our suggested classification, only 15 (3.4%) had normal quality of life, 104 (23.3%) were mildly affected, 297 (66.4%) were moderately affected, while 31 (6.9%) reported severe alteration in their quality of life. Sleep, monthly expenditure, and food preparation were the activities showing the highest level of disturbance. The disturbance in quality of life was significantly correlated with increasing severity of the disease. The study has emphasized the importance of investigating the quality of life of atopic dermatitis families. A simple questionnaire is a useful guide for appropriate management of the disease. abstract_id: PUBMED:17433002 Atopic dermatitis: quality of life of young Italian children and their families and correlation with severity score. The aim of this study was to determine the ways in which atopic dermatitis (AD) affects the lives of young Italian children and their families, in terms of quality of life, and to correlate it with AD severity and the perception of severity as estimated by the family. The parents of 45 children aged 3-84 months affected by AD were asked to complete two validated questionnaires after clinical examination. The first questionnaire was about the child's quality of life (Infants' Dermatitis Quality of Life Index); the second regarded the family's quality of life (Dermatitis Family Impact questionnaire). In a further question parents were asked to estimate the severity of the disease of the child. Children's quality of life appeared slightly to moderately altered (mean score 10.2) compared with the value of a control group (3.3), and itching, sleep problems and the influence of the disease on the child's mood were the causes of greatest discomfort for the child.
Family quality of life appeared moderately altered (mean score 11) compared with the value of the control group (7.4). The greatest problem was the disturbed sleep of the family members. Other important problems were the economic cost of the management of the disease and the tiredness and irritability caused by the disease in parents. Analysis of the responses confirms the incorrect estimation of the severity of the disease perceived by the family. In our opinion, the two questionnaires may be useful in clinical practice to better understand the difficulties suffered by a family with a child affected by AD. They also provide data that may help to improve the clinical approach for the child and the family, and to assess the degree of under-/overestimation of the disease by the family. abstract_id: PUBMED:35527656 Severity of disease and the quality of life indexes in infants with atopic dermatitis. Background: Atopic dermatitis (AD) is a chronic, relapsing, inflammatory skin disease. In this study, the relationship between the severity of AD and the quality of life (QoL) of patients and their families was evaluated; also, the factors that predict the severity of AD and the QoL index were determined. Methods: Infants with AD were enrolled in the study. Pruritus, sleep disturbance, and dermatitis severity scores were obtained. The QoL of infants was assessed using the Infants' Dermatitis Quality of Life Index (IDQOL), while the Family Dermatology Life Quality Index (FDLQI) was used to assess the impact of disease on the QoL of parents. Results: A total of 122 patients diagnosed with AD were enrolled in the study. The correlation analysis of scores showed a positive correlation between IDQOL and FDLQI (r = 0.444, p < 0.0001). Positive correlations between the pruritus, sleep disturbance, Severity Scoring of Atopic Dermatitis (SCORAD), Eczema Area and Severity Index (EASI), and dermatitis severity scores and the QoL indexes were found. While pruritus, sleep disturbance scores, and EASI were the most significant parameters for predicting a severe SCORAD score, IDQOL, FDLQI, and SCORAD were the most significant parameters for predicting severe EASI. In the analysis of QoL indexes, sleep disturbance and FDLQI were the most significant parameters for predicting a severe IDQOL index, while IDQOL was the most significant parameter for predicting severe FDLQI scores. Conclusions: AD has a negative effect on the QoL of infants and their parents. Pruritus and sleep disturbance should be evaluated during clinical practice due to their strong relationship with disease severity and the QoL index. abstract_id: PUBMED:34947965 Atopic Dermatitis Severity, Patient Perception of the Disease, and Personality Characteristics: How Are They Related to Quality of Life? Introduction: Atopic dermatitis (AD) is a chronic, relapsing inflammatory skin condition that greatly affects patients' quality of life, psychological condition, and social relationships. Materials And Methods: To analyze different aspects of AD patients' quality of life, we used the SCORing Atopic Dermatitis (SCORAD) index (for AD severity), the Dermatology Life Quality Index (DLQI), the World Health Organization Quality of Life Brief Version (WHOQOL-BREF), the Brief Illness Perception Questionnaire (Brief IPQ), and the Crown-Crisp Experiential Index (CCEI) to analyze personality traits. The study included 84 AD patients, 42 with clinical manifestations and 42 in remission.
Results: SCORAD values correlated positively and linearly with DLQI (r = 0.551; p < 0.001) and with disease impact on life, disease control, and disease symptoms (r = 0.350-0.398; p ≤ 0.023). DLQI was also related to certain personality characteristics (free-floating anxiety disorder, obsession, somatization, and depression (p ≤ 0.032)). Symptomatic AD patients had a significantly more impaired DLQI than asymptomatic patients (p < 0.001) and the two groups differed in some IPQ dimensions, but they did not differ significantly concerning the WHOQOL-BREF dimensions and personality traits (CCEI). Conclusion: Since AD patient quality of life was dependent not only on disease severity but was also influenced by patient personality characteristics (anxiety disorder, obsession, somatization, depression), many factors need to be taken into account to create effective, patient-specific therapy regimens. abstract_id: PUBMED:12440273 Childhood atopic dermatitis: a measurement of quality of life and family impact. Atopic dermatitis is the commonest skin disease in children, causing psychological, social and functional disability to them and their families. This study assessed the family impact and quality of life using the translated Malay version of The Dermatology Family Impact (DFI) and The Children's Dermatology Life Quality Index (CDLQI) questionnaires. Seventy-two children, aged between 6 months and 16 years, attending the Paediatric Dermatology Clinic at the Paediatric Institute and the Dermatology Department, Hospital Kuala Lumpur, participated in this study. Thirty-nine patients (54.2%) were males and 33 patients (45.8%) were females. The median age of the patients was 74 months (Q1 6, Q3 104 months). The median age of diagnosis was 22 months (Q1 1, Q3 36 months). The median disease duration was 44 months (Q1 3, Q3 65). The severity of eczema was assessed using the SCORAD severity index (maximum score = 83). The median SCORAD (European Task Force On Atopic Dermatitis) score was 36 (n = 72, SD = 16.2). The majority of patients in this study suffered from moderately severe eczema (n = 40, mean SCORAD = 29.3) followed by severe eczema (n = 27, mean SCORAD = 54.3). The mildly affected patients formed the minority group (n = 5, mean SCORAD = 9.0). The family impact was shown to be greater in severe atopic dermatitis compared to moderate atopic dermatitis (ANOVA, p = 0.02). The children's quality of life impairment was also greater in severe atopic dermatitis compared to moderate atopic dermatitis (ANOVA, p = 0.08). This study confirms that quality of life and family impact are related to the severity of atopic dermatitis. abstract_id: PUBMED:23166413 Quality of life and disease severity are correlated in patients with atopic dermatitis. Quantification of quality of life (QOL) related to disease severity is important in patients with atopic dermatitis (AD), because the assessment provides additional information to the traditional objective clinical scoring systems. To document the impact of AD on QOL for both children and adults as well as to quantify the relationship with disease severity, QOL assessments were performed over a 6-month period on 415 patients with AD. A questionnaire derived from the Infants' Dermatitis Quality of Life Index (IDQOL), the Children's Dermatology Life Quality Index (CDLQI) and the Dermatology Life Quality Index (DLQI) was used to determine the QOL for 71 infants, 197 children and 147 adults, respectively.
To measure AD severity, both the Rajka & Langeland scoring system and the Scoring of Atopic Dermatitis (SCORAD) index were used. The mean scores were as follows: 7.7 ± 5.5 for IDQOL, 6.6 ± 6.3 for CDLQI, and 10.7 ± 7.9 for DLQI. In conclusion, these QOL scores are correlated with AD severity scores as estimated by the Rajka & Langeland severity score and the SCORAD. The outcome of the QOL instruments in this study demonstrates that atopic dermatitis of both children and adults affects their QOL. abstract_id: PUBMED:26724875 An International Multi-center Study on Self-assessed and Family Quality of Life in Children with Atopic Dermatitis. Atopic dermatitis (AD) is a common childhood chronic inflammatory skin condition that greatly affects the quality of life (QoL) of affected children and their families. The aim of our study was to assess QoL and family QoL of children with AD from 4 different countries and then compare the data, evaluating the effects of AD severity and age of children. Data on the Children's Dermatology Life Quality Index (CDLQI) and the Dermatitis Family Impact (DFI) questionnaires and the SCORAD index of 167 AD children 5-16 years old from Ukraine, Czech Republic, Singapore, and Italy were used for the study. SCORAD correlated with the CDLQI in all 4 countries and with DFI in all countries except Singapore. Only in Czech children did the CDLQI correlate with their age. No significant correlations between age and DFI results were found. AD symptoms and expenditures related to AD were highly scored in all countries. Impact of AD on friendship and relations between family members was among the lower-scored items, and family problems did not increase proportionately with duration of AD in any of the four countries. Self-assessed health-related QoL of children with AD in our study correlated better in most cases with disease severity than family QoL results. Parents of school children with AD were generally less stressed, tired, and exhausted than parents of preschool children. These data, together with results showing that duration of AD in children does not affect relations between parents and other family members, are optimistic news for families with children with AD who did not recover until adolescence. Answer: Yes, quality of family life and disease severity are related in childhood atopic dermatitis (AD). Multiple studies have documented a significant correlation between the severity of AD in children and the impact on the quality of life for both the children and their families. One study showed that quality of family life was significantly affected in a considerable number of cases, and this impact was related to the severity of AD as measured by the SCORAD-D index. The study found that each unit increase in SCORAD-D was associated with an increase in the Dermatitis Family Impact (DFI) questionnaire score, indicating a worse quality of family life (PUBMED:12428837). Another study confirmed that childhood AD has a major impact on the quality of life of children and their families, and that quality of life and disease severity scores were positively correlated both at a point in time and over a period of time (PUBMED:14501437). Further research has highlighted the profound negative effect of AD on the health-related quality of life (HRQoL) of children and their families, with symptoms such as itching and soreness causing sleeplessness in over 60% of cases.
This sleep deprivation can lead to mood changes and impaired psychosocial functioning, affecting school and work performance, and can also lead to social isolation and depression (PUBMED:16893440). A cross-sectional survey in Saudi Arabia found that the disturbance in quality of life was significantly correlated with increasing severity of the disease, with activities such as sleep, monthly expenditure, and food preparation showing the highest level of disturbance (PUBMED:21078107). In Italy, the quality of life of young children and their families was found to be moderately altered by AD, with the greatest problems being disturbed sleep for family members and the economic cost of managing the disease (PUBMED:17433002). A study that evaluated the relationship between the severity of AD and the quality of life of patients and their families found positive correlations between pruritus, sleep disturbance, severity scores, and quality of life indexes. It concluded that AD has a negative effect on the quality of life of infants and their parents (PUBMED:35527656). In summary, the severity of childhood atopic dermatitis is closely related to the quality of family life, with more severe cases of AD generally leading to a greater negative impact on the quality of life for both the affected children and their families.
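Note on methodology: the severity-QoL relationships cited above are standard bivariate correlations (e.g., r = 0.444 between IDQOL and FDLQI, r = 0.551 between SCORAD and DLQI). A minimal sketch of how such a correlation is computed, using made-up scores rather than any study's data:

    # Correlating a severity score (e.g., SCORAD) with a family quality-of-life
    # index (e.g., DFI/FDLQI). All values below are illustrative, not study data.
    from scipy.stats import pearsonr

    scorad = [12, 25, 31, 44, 52, 60, 38, 29]       # hypothetical severity scores
    family_impact = [4, 9, 12, 15, 19, 22, 14, 10]  # hypothetical family-impact scores

    r, p = pearsonr(scorad, family_impact)
    print(f"Pearson r = {r:.3f}, p = {p:.4f}")      # r near +1: burden rises with severity

A positive r with a small p-value is what statements such as "the disturbance in quality of life was significantly correlated with increasing severity" summarize.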
Instruction: Is Cinacalcet a cost-effective treatment in severe secondary hyperparathyroidism in patients on hemodialysis? Abstracts: abstract_id: PUBMED:22445709 Cost-effectiveness of cinacalcet hydrochloride for hemodialysis patients with severe secondary hyperparathyroidism in Japan. Background: Cinacalcet effectively reduces elevated levels of parathyroid hormone (PTH) in patients with secondary hyperparathyroidism (SHPT), even those with severe disease for whom parathyroidectomy can be the treatment of choice. The objective of this study was to estimate the cost-effectiveness of cinacalcet treatment in hemodialysis patients with severe SHPT in Japan. Study Design: Cost-effectiveness analysis. Setting & Population: Patients with severe SHPT (intact PTH >500 pg/mL) who were receiving hemodialysis in Japan. Model, Perspective, & Timeframe: A Markov model was constructed from the health care system perspective in Japan. Patients were followed up over their lifetime. Dialysis costs were not included in the base case. Intervention: Cinacalcet as an addition to conventional treatment compared to conventional treatment alone. In both arms, patients underwent parathyroidectomy if intact PTH level was >500 pg/mL for 6 months and they were eligible for surgery. Outcomes: Costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs). Results: ICERs for cinacalcet for those who were eligible for surgery and those who were not were $352,631/QALY gained and $21,613/QALY gained, respectively. Sensitivity and scenario analyses showed that results were fairly robust to variations in model parameters and assumptions. In the probabilistic sensitivity analysis, cinacalcet was cost-effective in only 0.9% of simulations for those eligible for surgery, but in more than 99.9% of simulations for those ineligible for surgery, if society would be willing to pay $50,000 per additional QALY. Limitations: Data for the long-term effect of cinacalcet on patient-level outcomes are limited. The model predicted rates for clinical events using data for the surrogate biochemical end points. Conclusions: The use of cinacalcet to treat severe SHPT is likely to be cost-effective for only those who cannot undergo parathyroid surgery for medical or personal reasons. abstract_id: PUBMED:18816209 Is Cinacalcet a cost-effective treatment in severe secondary hyperparathyroidism in patients on hemodialysis? Background: A previous study using cinacalcet, as compared to vitamin D alone, showed a better reduction response of PTH levels and a significant diminution of secondary effects. The objective of the present study was to evaluate the additional cost of adding cinacalcet to the standard treatment of patients with severe secondary hyperparathyroidism (SHPT), taking into account the treatment goals achieved. Methods: A 12-month prospective study of 23 patients with severe SHPT. Two treatment regimens were considered: standard treatment (m 0) and standard treatment plus cinacalcet (m 12). Four consequences of inadequate control of SHPT were registered: parathyroid hormone (PTH), Calcium (Ca), Phosphorus (P) and the Ca x P product serum levels. Treatment effectiveness was measured as the percentage of patients who achieved the treatment goal according to each indicator: PTH < 800 pg/mL, PTH between 150 and 300 pg/mL, Calcium < 9.5 mg/dL, Phosphorus < 5.5 mg/dL, and Ca x P product < 55.
Annual and monthly costs were calculated for both treatment regimens using Spanish 2007 tariffs, taking into account the dose reduction in some other treatments. Results are presented as incremental costs and cost per patient who achieved the treatment goal. Results: At 12 months, a higher percentage of patients simultaneously achieved the 4 therapeutic goals compared with baseline (from 0% to 52.1%). Cinacalcet allowed cost savings on concomitant drugs, achieving a total saving of 149 euros per patient per month. At 12 months, Cinacalcet achieved a reduction in the percentage of patients with PTH > 800 pg/mL at half the cost of standard treatment (651.35 euros vs 1363.68 euros). It was not possible to calculate the cost for the PTH indicator since, at the study onset, there was no patient who achieved a level between 150 and 300 pg/mL. Cinacalcet allowed treatment goals in Calcium, Phosphorus and the Ca x P product to be reached in a more cost-effective way (2164.2 euros vs 2684.8 euros). Conclusions: Although Cinacalcet is expensive, patients treated with Cinacalcet showed a lower cost per patient who achieved the treatment goal than patients without Cinacalcet. The ability of cinacalcet to reduce PTH secretion, along with the reductions in the serum Ca, P, and Ca x P product, provides an alternative to the traditional treatment paradigm, and should be a welcome addition in the management of SHPT. abstract_id: PUBMED:31014177 The effectiveness of cinacalcet: a randomized, open label study in chronic hemodialysis patients with severe secondary hyperparathyroidism. Background: Secondary hyperparathyroidism (SHPT) is associated with high incidences of cardiovascular disease, bone fracture, and mortality. This study was conducted to demonstrate the effectiveness of cinacalcet treatment on chronic kidney disease-mineral bone disorder (CKD-MBD) markers in chronic hemodialysis patients with severe SHPT. Methods: In phase 1, 30 adult HD patients were randomized to cinacalcet or control groups for 12 weeks to explore the achievement of a >30% reduction of iPTH. In phase 2, 45 patients participated to further explore the effect of cinacalcet on CKD-MBD parameters over a 24-week follow-up and 12 additional weeks after cinacalcet discontinuation. Results: In phase 1, the baseline serum iPTH levels were not different [1374 (955, 1639) pg/mL in the control group vs. 1191 (1005, 1884) pg/mL in the cinacalcet group]; the percentage of patients achieving the iPTH target was significantly higher in the treatment group [80% vs. 13%, p = .001]. In phase 2, significant reductions in iPTH, FGF-23 and tartrate-resistant acid phosphatase 5b, a slight decrease in parathyroid gland size and stabilization of vascular calcification were observed at the 24-week follow-up, with marked rebound after discontinuation of cinacalcet. Conclusions: The effectiveness of cinacalcet was clearly demonstrated even in chronic HD patients with severe SHPT. In addition, improvements in bone markers and FGF-23, and stabilization of vascular calcification, were observed. Therefore, cinacalcet can provide salutary effects on CKD-MBD in severe SHPT and might be an initially effective PTH-lowering therapy prior to surgical parathyroidectomy as well as an alternative treatment in patients unsuitable for surgery. Clinical Trial Registration: ClinicalTrials.gov: NCT02056730. Date of registration: February 4, 2014.
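The "cost per patient who achieved the treatment goal" reported in the Spanish study is simple arithmetic: the total cost of a regimen divided by the number of patients reaching the goal. A minimal sketch with hypothetical inputs (the study itself reported 651.35 vs 1363.68 euros per patient reaching the PTH goal for cinacalcet vs standard treatment; the figures below are placeholders):

    # Cost per responder; undefined when no patient reaches the goal, which is
    # why the 150-300 pg/mL PTH target could not be costed at study onset.
    def cost_per_responder(total_cost_eur: float, n_at_goal: int) -> float:
        if n_at_goal == 0:
            raise ValueError("no patient reached the goal; ratio undefined")
        return total_cost_eur / n_at_goal

    print(cost_per_responder(total_cost_eur=15000.0, n_at_goal=12))  # hypothetical figures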
abstract_id: PUBMED:33026204 Long-term efficacy and safety of etelcalcetide in hemodialysis patients with severe secondary hyperparathyroidism Introduction: Etelcalcetide has proven effective and well tolerated in the treatment of secondary hyperparathyroidism (SHPT) in patients on hemodialysis (HD). Since long-term studies are scarce, we assessed the efficacy and safety of etelcalcetide in the treatment of severe SHPT in a group of HD patients over a 12-month period. Patients and Methods: We selected 24 HD patients with PTH levels > 500 pg/mL (range 502-2148 pg/mL) despite therapy with cinacalcet and/or vitamin D analogues. The initial dosage of etelcalcetide was 7.5 mg/week; it was then adjusted based on the trend of the levels of total albumin-corrected serum calcium (CaALb_c) and PTH. Treatment was temporarily suspended if CaALb_c levels were <7.5 mg/dL or if hypocalcemia was symptomatic. CaALb_c, phosphorus, PTH and total alkaline phosphatase (t-ALP) were measured monthly. The main endpoint was a decrease in PTH levels of >30% compared to baseline values. Results: At follow-up (F-U), the reduction in PTH levels was >30% in 83% of our patients. PTH levels decreased from 1169 ± 438 to 452 ± 241 pg/mL at F-U (P < 0.001). The percentage of reduction in PTH levels at F-U was -56 ± 25%. CaALb_c and phosphate levels decreased from 9.8 ± 0.4 mg/dL to 9.0 ± 0.6 mg/dL (P < 0.001), and from 6.1 ± 1.3 mg/dL to 4.9 ± 1.3 mg/dL (P < 0.01), respectively. The main side effect was hypocalcaemia, but never so severe as to require the interruption of treatment. Hypocalcemia was more pronounced in patients with higher basal levels of PTH and t-ALP. During the study, the percentage of patients treated with calcium carbonate increased from 33% to 54% and that of patients treated with paricalcitol from 33% to 79%. At F-U, the average weekly dosage of etelcalcetide was 21.0 ± 9.5 mg (range 7.5-37.5 mg/week). Conclusions: The treatment of severe SHPT with etelcalcetide proved effective and safe in the long term. Hypocalcaemia, the most frequent side effect, was more evident in patients with the most severe forms of SHPT and was probably due to a reduction in bone turnover rather than to the direct effect of etelcalcetide. abstract_id: PUBMED:35277957 Parathyroidectomy vs Cinacalcet Among Patients Undergoing Hemodialysis. Context: Parathyroidectomy (PTx) and cinacalcet are both effective treatments for secondary hyperparathyroidism in hemodialysis patients, but limited data exist comparing the long-term outcomes of these interventions. Objective: We aimed to compare the risk of mortality among hemodialysis patients who underwent PTx and those who started treatment with cinacalcet. Methods: In this prospective cohort study, comprising patients from the Japanese Society for Dialysis Therapy Renal Data Registry, patients who had intact parathyroid hormone (PTH) levels ≥ 300 pg/mL in late 2007 and underwent PTx or started treatment with cinacalcet in 2008 to 2009 were matched by propensity score at 1:3. PTx and cinacalcet were compared for all-cause mortality within 6 years. Results: Among eligible patients, 894 patients who underwent PTx were matched with 2682 patients who started treatment with cinacalcet. The median baseline intact PTH levels were 588 pg/mL and 566 pg/mL in the PTx and cinacalcet groups, respectively. PTx resulted in greater reductions in intact PTH, calcium, and phosphorus levels compared with cinacalcet.
During the 6-year follow-up period, 201 patients (22.5%) in the PTx group and 736 patients (27.4%) in the cinacalcet group died. PTx was associated with a lower risk of mortality compared with cinacalcet (hazard ratio, 0.78 [95% CI, 0.67-0.91]; P = 0.002). This association was more pronounced in patients with intact PTH levels ≥ 500 pg/mL and in patients with serum calcium levels ≥ 10.0 mg/dL (both P for interaction < 0.001). Conclusion: PTx compared with cinacalcet is associated with a lower risk of mortality, particularly among patients with severe secondary hyperparathyroidism. abstract_id: PUBMED:25352299 The incremental treatment of ESRD: a low-protein diet combined with weekly hemodialysis may be beneficial for selected patients. Background: Infrequent dialysis, namely a once-a-week session combined with a very low-protein, low-phosphorus diet supplemented with ketoacids, was reported as a useful treatment schedule for ESRD patients with markedly reduced residual renal function but preserved urine output. This study reports our findings from the application of a weekly dialysis schedule plus less severe protein restriction (standard low-protein, low-phosphorus diet) in stage 5 CKD patients with consistent dietary discipline. Methods: This is a multicenter, prospective controlled study, including 68 incident CKD patients followed in a pre-dialysis clinic with a Glomerular Filtration Rate of 5 to 10 mL/min/1.73 m2 who became unstable on medical treatment alone. They were offered the choice of beginning a Combined Diet Dialysis Program (CDDP) or standard thrice-a-week hemodialysis (THD): 38 patients joined the CDDP, whereas 30 patients chose THD. Patients were studied at baseline, 6 and 12 months; hospitalization and survival rates were followed up for 24 months. Results: Volume output and residual renal function were maintained in the CDDP Group, while those features dropped quickly in the THD Group. Throughout the study, CDDP patients had a lower erythropoietin resistance index, lower β2 microglobulin levels and lower need for cinacalcet or phosphate binders than THD patients, and stable parameters of nutritional status. At the 24-month follow-up, 39.4% of patients were still on CDDP; survival rates were 94.7% and 86.8% for CDDP and THD patients, respectively, but the hospitalization rate was much higher in THD than in CDDP patients. The cost per patient per year was significantly lower in the CDDP than in the THD Group. Conclusions: This study shows that CDDP served to protect residual renal function, to maintain urine volume output and to preserve good nutritional status. CDDP also blunted the rapid β2 microglobulin increase and resulted in better control of anemia and calcium-phosphate abnormalities. CDDP was also associated with a lower hospitalization rate and reduced need for erythropoietin, as well as for drugs used for treatment of calcium-phosphate abnormalities, thus leading to a significant cost saving. We concluded that in selected ESRD patients with preserved urine output and adherence to protein restriction, CDDP may be a beneficial choice for an incremental hemodialysis program.
Methods: Thirty MHD patients with SHPT were enrolled in the study. All patients were given cinacalcet 25-75 mg and 0.5 μg calcitriol daily. Serum Ca, P, intact parathyroid hormone (iPTH), and bone metabolic markers were measured. The clinical symptoms and their changes were investigated. Results: The baseline levels of iPTH, Ca, and P were 1,787.3 ± 1,321 pg/mL, 2.54 ± 0.19 mmol/L, and 2.06 ± 0.15 mmol/L, respectively. After 3 months of treatment, iPTH decreased by 70%. Serum Ca and P fell to 2.39 ± 0.17 and 1.56 ± 0.50 mmol/L (p < 0.05), respectively. After 6 months, the bone-specific alkaline phosphatase, osteocalcin, and β-cross levels were decreased by 50, 37, and 49%, respectively, compared with corresponding values before treatment. The decline in bone density in these patients was inhibited. Conclusion: Cinacalcet combined with low-dose calcitriol can improve high calcium, high phosphorus, and high iPTH in MHD patients with severe SHPT and also improve bone metabolism. It can be used as a favorable choice for SHPT treatment. abstract_id: PUBMED:21566299 A novel dose regimen of cinacalcet in the treatment of severe hyperparathyroidism in hemodialysis patients. In recent years, cinacalcet has markedly improved the management of hyperparathyroidism in patients on hemodialysis. However, to the best of our knowledge, there are no specific studies addressing the dose regimen of cinacalcet. The aim of the study was to evaluate the efficacy of cinacalcet on the achievement of targets in the treatment of hyperparathyroidism in two different dosage schedules. Twenty-seven adult patients who were on hemodialysis for more than four months and with severe secondary hyperparathyroidism (intact parathyroid hormone (iPTH) >88 pmol/L) resistant to conventional treatment were included in this prospective study. We used the targets of the K/DOQI clinical guidelines as the optimal targets for iPTH, calcium and phosphate. Group 1 received a single daily administration of 30 mg of cinacalcet along with the main meal as the starting dose, and the dose was titrated thereafter monthly. Group 2 received cinacalcet with the main meal twice weekly, starting with a dose of 90 mg on the first day of the week and 120 mg at midweek, and titrated thereafter monthly. The levels of iPTH decreased significantly (P = 0.0001) from 124.00 ± 44.77 pmol/L to 37.78 ± 12.49 pmol/L and from 109.61 ± 53.13 pmol/L to 33.93 ± 12.03 pmol/L after 12 weeks in groups 1 and 2, respectively. After 12 weeks, alkaline phosphatase declined significantly (P = 0.0001) from 143.42 ± 75.20 IU/L to 87.42 ± 14.46 IU/L in group 1 (P = 0.013), and from 148.00 ± 108.49 IU/L to 101.61 ± 46.62 IU/L in group 2 (P = 0.05). There were no significant differences between the reductions of iPTH, calcium phosphate product and alkaline phosphatase levels in both groups in the vertical comparison at the end of the study. There was no noteworthy difference in side effects between the two groups. Our results indicate that cinacalcet twice weekly is reasonably safe and effective in suppressing high PTH levels in hemodialysis patients, with fewer side effects. abstract_id: PUBMED:29879701 Treatment Based on Cinacalcet Reduces Oxidative Stress in Hemodialysis Patients with Secondary Hyperparathyroidism. Background/aims: Oxidative stress is one of the leading factors contributing to increased mortality in patients with chronic kidney disease (CKD) and secondary hyperparathyroidism (sHPT). Cinacalcet is now commonly used in the treatment of sHPT in patients with CKD.
The aim of this study was to assess the influence of treatment with cinacalcet on oxidative stress markers in patients on hemodialysis with sHPT. Methods: In 58 hemodialysed patients with sHPT (parathyroid hormone [PTH] > 300 pg/mL), plasma Advanced Oxidation Protein Products (AOPP), serum total antioxidant capacity - ImAnOx (TAS/TAC), serum PTH, calcium and phosphate concentrations were assessed before the first dose of cinacalcet and after 6 months of treatment. Results: Serum PTH concentration decreased significantly from 895 (748-1,070) to 384 (289-510) pg/mL after 6 months of treatment; p < 0.0001. Mean serum concentrations of calcium and phosphate remained stable. Plasma AOPP concentration decreased significantly from 152 (126-185) to 49 (43-57) µmol/L after 6 months of treatment; p < 0.0001. ImAnOx significantly increased from 260 (251-270) to 272 (264-280) µmol/L; p = 0.04. After 6 months of treatment, a significant, positive correlation was found between ImAnOx and the daily dose of cinacalcet (r = 0.30; p = 0.02). Also, the change of serum ImAnOx during treatment with cinacalcet significantly correlated with the daily dose of cinacalcet (r = 0.35; p = 0.01). No significant correlations were found between plasma AOPP concentration or ImAnOx and PTH, or their changes over time. Conclusions: (1) Six-month treatment based on cinacalcet seems to reduce oxidative stress markers in maintenance hemodialysis patients with sHPT. (2) This benefit may be related to the direct action of cinacalcet rather than to the serum PTH concentration decrease. abstract_id: PUBMED:29067991 Treatment of secondary hyperparathyroidism with paricalcitol with or without cinacalcet in hemodialysis patients. INTRODUCTION Secondary hyperparathyroidism (SHPT) is a common hormonal disorder associated with chronic kidney disease (CKD). The treatment of SHPT should lead to a reduction in parathormone concentrations by calcimimetics or active vitamin D administration and stabilization of calcium and phosphate metabolism. In the event of failure of conservative treatment, complete or partial parathyroid resection should be considered. OBJECTIVES The aim of the study was to assess the beneficial effects of a combination treatment with paricalcitol and cinacalcet in comparison with paricalcitol alone. PATIENTS AND METHODS A total of 64 hemodialyzed patients (mean [SD] age, 58 [16] years) with inadequate control of serum parathyroid hormone levels were treated with intravenous paricalcitol, while 16 patients simultaneously received oral cinacalcet. Laboratory tests (intact parathormone [iPTH], calcium, phosphorus) were performed on a monthly basis. In the study, iPTH, calcium, phosphorus, and alkaline phosphatase levels were assessed at baseline and after 24 weeks of treatment with paricalcitol alone or in combination with cinacalcet. RESULTS In both groups, a significant decrease in the iPTH level was observed. Although paricalcitol affects calcium levels, no hypercalcemia was observed. The combination treatment did not result in a significant lowering of iPTH levels in comparison with paricalcitol alone. CONCLUSIONS Treatment of SHPT with intravenous paricalcitol in patients on hemodialysis is effective and has a good safety profile. The combination of paricalcitol and cinacalcet does not improve the outcomes. Moreover, the combined treatment does not affect calcium and phosphorus concentrations. The cost-effectiveness of therapy should also be considered.
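Several of the comparative studies above rest on time-to-event modelling; the propensity-matched PTx versus cinacalcet comparison (PUBMED:35277957), for instance, reports a Cox hazard ratio of 0.78. A minimal sketch of that kind of analysis, assuming the third-party lifelines package and a synthetic data frame in place of the registry data:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Synthetic matched cohort: follow-up time in years, death indicator,
    # and treatment flag (1 = parathyroidectomy, 0 = cinacalcet).
    df = pd.DataFrame({
        "years": [5.2, 6.0, 2.1, 4.8, 6.0, 1.3, 3.7, 6.0],
        "death": [0,   0,   1,   0,   1,   1,   0,   0],
        "ptx":   [1,   1,   1,   0,   0,   0,   1,   0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="death")
    cph.print_summary()  # exp(coef) for 'ptx' is the hazard ratio (the study reported 0.78)

This is a sketch of the modelling approach only; the published analysis additionally relied on propensity-score matching and a far larger registry sample.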
Answer: The cost-effectiveness of cinacalcet for treating severe secondary hyperparathyroidism (SHPT) in hemodialysis patients appears to vary based on patient eligibility for parathyroid surgery and the specific healthcare system. In Japan, cinacalcet was found to be cost-effective for hemodialysis patients with severe SHPT who were ineligible for parathyroid surgery, with an incremental cost-effectiveness ratio (ICER) of $21,613 per quality-adjusted life-year (QALY) gained. However, for those eligible for surgery, the ICER was much higher at $352,631 per QALY gained, making it cost-effective in only 0.9% of simulations if society is willing to pay $50,000 per additional QALY (PUBMED:22445709). In a Spanish study, cinacalcet was shown to be cost-saving when added to standard treatment, with a total saving of 149 euros per patient per month. It was more cost-effective in achieving treatment goals for calcium, phosphorus, and the calcium x phosphorus product compared to standard treatment alone (PUBMED:18816209). A study on the effectiveness of cinacalcet in chronic hemodialysis patients with severe SHPT demonstrated improvements in bone markers and stabilization of vascular calcification, suggesting that cinacalcet can provide beneficial effects on CKD-MBD in severe SHPT (PUBMED:31014177). Comparatively, a prospective cohort study in Japan found that parathyroidectomy (PTx) was associated with a lower risk of mortality compared with cinacalcet treatment, particularly among patients with severe SHPT, suggesting that PTx might be a more effective intervention in certain patient populations (PUBMED:35277957). Overall, while cinacalcet can be an effective treatment for SHPT in hemodialysis patients, its cost-effectiveness is influenced by patient-specific factors, such as eligibility for surgery, and the healthcare system's willingness to pay for additional QALYs. It may also be cost-saving in certain contexts, such as when it leads to reductions in the use of other medications. However, PTx may offer better long-term outcomes for some patients, which is an important consideration in evaluating the overall cost-effectiveness of SHPT treatments.
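For reference, the ICER figures quoted above follow directly from the definition used in the Japanese Markov model (PUBMED:22445709): incremental cost divided by incremental effectiveness, judged against a willingness-to-pay threshold. A short sketch with placeholder inputs, not the published model's values:

    # ICER = (cost_new - cost_old) / (QALY_new - QALY_old), compared against a
    # willingness-to-pay threshold such as $50,000 per QALY.
    def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    value = icer(cost_new=120_000, cost_old=100_000, qaly_new=5.5, qaly_old=5.0)
    print(f"ICER = ${value:,.0f} per QALY gained")  # $40,000: below a $50,000/QALY threshold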
Instruction: Can FDG PET predict radiation treatment outcome in head and neck cancer? Abstracts: abstract_id: PUBMED:30911435 Feasibility of rapid integrated radiation therapy planning with follow-up FDG PET/CT to improve overall treatment assessment in head and neck cancer. Inflammatory changes and residual disease are difficult to distinguish after high-dose, definitive radiotherapy of head and neck malignancies. FDG uptake located within a high-dose field is more likely to represent inflammatory changes, and FDG uptake outside of the radiation field could represent unsuspected and undertreated disease. In situ knowledge of the precise radiotherapy fields, therefore, may be useful in distinguishing these etiologies. This study aimed to evaluate the clinical feasibility of rapid integration of radiation treatment field images during follow-up FDG PET/CT imaging. Twenty head and neck cancer patients who underwent radiation therapy were identified. A MIM-based workflow was created that fused the radiation treatment CT, including the planning volumes and isodose curves, into the follow-up imaging. Two board-certified physicians, blinded to treatment outcome, reviewed the follow-up exams, half with the treatment information and half without. Each exam was scored for recurrent or residual disease, confidence of the read and a qualitative assessment of the overall usefulness of the treatment plan. Interpretation accuracy improved from 80 to 90% with integration of the treatment plan. Similarly, the sensitivity improved from 71% to 86%, while the specificity increased from 85% to 92%. Confidence also increased by 0.7 on a 5-point scale for both readers. Data demonstrate the clinical feasibility of rapidly incorporating radiation treatment dosimetry into follow-up FDG PET/CT exams in patients with head and neck cancer. Preliminary results demonstrated a simple, efficient method which improved accuracy of interpretation and overall reader confidence. abstract_id: PUBMED:21461734 Can FDG PET predict radiation treatment outcome in head and neck cancer? Results of a prospective study. Purpose: In head and neck cancer (HNC) various treatment strategies have been developed to improve outcome, but selecting patients for these intensified treatments remains difficult. Therefore, identification of novel pretreatment assays to predict outcome is of interest. In HNC there are indications that pretreatment tumour (18)F-fluorodeoxyglucose (FDG) uptake may be an independent prognostic factor. The aim of this study was to assess the prognostic value of FDG uptake and CT-based and FDG PET-based primary tumour volume measurements in patients with HNC treated with (chemo)radiotherapy. Methods: A total of 77 patients with stage II-IV HNC who were eligible for definitive (chemo)radiotherapy underwent coregistered pretreatment CT and FDG PET. The gross tumour volume of the primary tumour was determined on the CT (GTV(CT)) and FDG PET scans. Five PET segmentation methods were applied: interpreting FDG PET visually (PET(VIS)), applying an isocontour at a standardized uptake value (SUV) of 2.5 (PET(2.5)), using fixed thresholds of 40% and 50% (PET(40%), PET(50%)) of the maximum intratumoral FDG activity (SUV(MAX)) and applying an adaptive threshold based on the signal-to-background (PET(SBR)). Mean FDG uptake for each PET-based volume was recorded (SUV(mean)). Subsequently, to determine the metabolic volume, the integrated SUV was calculated as the product of PET-based volume and SUV(mean).
All these variables were analysed as potential predictors of local control (LC), regional recurrence-free survival (RRFS), distant metastasis-free survival (DMFS), disease-free survival (DFS) and overall survival (OS). Results: In oral cavity/oropharynx tumours, PET(VIS) was the only volume-based method able to predict LC. Both PET(VIS) and GTV(CT) were able to predict DMFS, DFS and OS in these subsites. Integrated SUVs were associated with LC, DMFS, DFS and OS, while SUV(mean) and SUV(MAX) were not. In hypopharyngeal/laryngeal tumours, none of the variables was associated with outcome. Conclusion: There is no role yet for pretreatment FDG PET as a predictor of (chemo)radiotherapy outcome in HNC in daily routine. However, this potential application needs further exploration, focusing both on FDG PET-based primary tumour volume, integrated SUV and SUV(MAX) of the primary tumour. abstract_id: PUBMED:25210280 Post-treatment appearances, pitfalls, and patterns of failure in head and neck cancer on FDG PET/CT imaging. The majority of patients with head and neck cancer are treated with combined treatment regimens such as surgery, radiation therapy, and chemotherapy. The loss of structural symmetry and imaging landmarks as a result of therapy makes post-treatment imaging a daunting task on conventional modalities like computed tomography (CT) scan and magnetic resonance imaging (MRI) as well as on 18 Fluorine-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography (18F FDG PET/CT). The combined multimodality treatment approach causes various tissue changes that give rise to a spectrum of findings on FDG PET/CT imaging, which are depicted in this atlas along with a few commonly encountered imaging pitfalls. The incremental value of FDG PET/CT in detecting locoregional recurrences in the neck as well as distant failures has also been demonstrated. abstract_id: PUBMED:29573279 FDG-PET metabolic tumor parameters for the reirradiation of recurrent head and neck cancer. Objective: The utility of fluorodeoxyglucose positron emission tomography (FDG-PET) imaging to predict outcome has been well-established for patients undergoing definitive radiation in the initial management of head and neck cancer. However, the usefulness of this modality in the recurrent setting remains uncertain. We sought to evaluate the prognostic value of metabolic tumor parameters measured on FDG-PET in patients treated by reirradiation for recurrent head and neck cancer. Methods: Thirty-four tumors occurring in 29 patients were reirradiated and were evaluable. The most common disease sites were the oropharynx (n = 9), oral cavity (n = 8), and nasopharynx (n = 6). Potential correlations of FDG-PET maximum standardized uptake value (SUV) and metabolic tumor volume (MTV) with survival endpoints were explored. Multivariate analysis was performed using Cox proportional hazards. Results: The median MTV, SUV, and gross tumor volume, as delineated using the 50% SUVmax threshold, were 366 cc (range, 0-3,567 cc), 8.1 (range, 0-25), and 48.5 cc (range, 1-190 cc), respectively. On univariate analysis, both MTVs (as measured as a continuous variable and using the median value of 266 cc as cutoff) were predictive of decreased overall survival (P < 0.05 for both). A strong trend toward decreased progression-free survival (P = 0.05), in-field control (P = 0.06), and locoregional control (P = 0.07) was also observed with increasing MTV when analyzed as a continuous variable.
Multivariate analysis confirmed MTV as an independent predictor of mortality. Conclusion: The PET-derived parameter MTV may serve as a potentially valuable factor for risk stratification and for guiding treatment in future reirradiation trials. Level Of Evidence: 4. Laryngoscope, 128:2345-2350, 2018. abstract_id: PUBMED:21892279 Integrated FDG PET/CT: Utility and Applications in Clinical Oncology. Accurate diagnosis and staging are essential for optimal management of cancer patients. Positron emission tomography with 2-deoxy-2-fluorine-18-fluoro-D-glucose ((18)FDG-PET) and, more recently, (18)FDG-PET/computed tomography ((18)FDG-PET/CT) have emerged as powerful imaging tools in oncology, because of the valuable functional information they provide. The combined acquisition of PET and CT has synergistic advantages over its isolated constituents and minimizes their limitations. It decreases examination times by 25%-40%, leads to a higher patient throughput and unifies two imaging procedures in a single session. There is evidence that (18)FDG-PET/CT is a more accurate test than either of its components for the evaluation of various tumors. It is a particularly valuable tool for detection of recurrence, especially in asymptomatic patients with rising tumor markers and those with negative or equivocal findings on conventional imaging tests. Yet, there are some limitations and areas of uncertainty, mainly regarding the lack of specificity of the (18)FDG uptake and the variable (18)FDG avidity of some cancers. This article reviews the advantages, limitations and main applications of (18)FDG-PET/CT in oncology, with special emphasis on lung cancer, colorectal cancer, lymphomas, melanoma and head and neck cancers. abstract_id: PUBMED:36300564 Identifying the location of locoregional recurrences after definitive radiotherapy for head and neck cancer using metabolic parameters of baseline and mid-treatment 18F-FDG-PET scans. Introduction: Tumour recurrences after treatment of head and neck squamous cell carcinoma (HNSCC) are more likely to originate from regions of high-baseline FDG-PET uptake. Mid-treatment functional imaging can potentially predict a higher risk of tumour recurrence. The aim of this study is to correlate the location of locoregional tumour recurrence with baseline FDG-PET metabolic volumes and mid-treatment FDG-PET metabolic volumes in patients with HNSCC following definitive radiotherapy. Methods: A total of 23 patients with 26 local and/or regional recurrences underwent baseline (W0-PET) and mid-treatment (W3-PET) 18F-FDG PET scans as part of their radiotherapy. FDG-PET-based metabolic volumes (MTV20%, MTV40%, MTV60%, MTV80%, SUV2.5, SUVpeak and PET_EDGE) were delineated onto the FDG-PET scans. The recurrence nidus was identified on FDG-PET at the time of recurrence (REC-PET). DIR-based fusion was performed for REC-PET to W0-PET, and REC-PET to W3-PET. The location of the recurrence nidus was correlated with the FDG-PET volumes. Further analysis included a comparison of the recurrence density to FDG-PET metabolic volumes. Results: Most recurrences occurred within the MTV20%, MTV40% and SUV2.5 volumes. Sixty-nine per cent of recurrences (18 of 26) occurred within both the W0 MTV40% and W3 MTV40% volumes. A higher recurrence density was seen for iso-SUV contours closer to the maximum SUV for both W0 and W3.
For a number of the FDG-PET volumes, including MTV20%, MTV40% and SUV2.5, the recurrence density was improved for W3 compared to W0; however, the improvement was small in magnitude. The average volume of MTV40% contours was considerably smaller than that of MTV20% and SUV2.5 contours. Conclusion: The metabolic parameters of SUV2.5, MTV20% and MTV40% delineated on the baseline and mid-treatment FDG-PET scans encompassed the majority of recurrences. The MTV40% is significantly smaller; hence, we prefer this volume for future dose escalation studies. abstract_id: PUBMED:31047367 Interobserver Variability in the Delineation of Gross Tumour Volume and Specified Organs-at-risk During IMRT for Head and Neck Cancers and the Impact of FDG-PET/CT on Such Variability at the Primary Site. Purpose: To assess interobserver variability (IOV) in the delineation of structures during intensity-modulated radiation therapy for head and neck squamous cell carcinoma and the impact of fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) on such variability at the primary site. Methods: Three experienced head-neck radiation oncologists, blinded to each other, delineated the primary site gross tumour volume (GTV) and specified organs-at-risk (OARs), viz. the parotid gland, spinal cord, and cochlea, in 10 patients with pharyngolaryngeal cancers suited for definitive intensity-modulated radiation therapy. IOV was assessed by concordance index (CI), Dice similarity coefficient (DSC), and Fleiss' kappa. Subsequently, the observers refined their corresponding GTVs incorporating information from FDG-PET/CT. The impact of FDG-PET/CT on variability of GTV was assessed by comparing the overlap indices with and without FDG-PET/CT using a paired t-test. Results: There was moderate IOV in the delineation of GTV as evidenced by mean CI, DSC, and Fleiss' kappa of 0.41, 0.57, and 0.56, respectively. The use of FDG-PET/CT improved consistency of target volume delineation with resultant improvement in the overlap indices (mean CI, DSC, and Fleiss' kappa of 0.54, 0.69, and 0.69, respectively) that was statistically significant (P < .001). There was good agreement between the three observers for delineation of spinal cord and parotid glands. Concordance was worst for the cochlea. Conclusion: This study demonstrates the presence of moderate IOV between three experienced head and neck radiation oncologists in an academic institutional setting for the delineation of GTV. The use of FDG-PET/CT for target volume delineation results in significant reduction of such variability. abstract_id: PUBMED:32594401 FDG PET Hybrid Imaging. Molecular imaging with positron emission tomography (PET) using tumour-seeking radiopharmaceuticals has gained wide acceptance in oncology with many clinical applications. The hybrid imaging modality PET/CT (computed tomography) allows assessing molecular as well as morphologic information at the same time. Therefore, PET/CT represents an efficient tool for whole-body staging and re-staging within one imaging modality. In oncology, the glucose analogue 18-F-fluorodeoxyglucose (FDG) is the most widely used PET/CT radiopharmaceutical in clinical routine. FDG PET and FDG PET/CT have been used for staging and re-staging of tumour patients in numerous studies. This chapter will discuss the use and the main indications of FDG PET/CT in oncology with special emphasis on lung cancer, lymphoma, head and neck cancer, melanoma and breast cancer (among other tumour entities).
A review of the current literature is given with respect to primary diagnosis, staging and diagnosis of recurrent disease. Besides its integral role in diagnosis, staging and re-staging of disease in oncology, there is increasing evidence that FDG PET/CT can be used for therapy response assessment (possibly influencing therapeutic management and treatment planning) by evaluating tumour control, which will also be discussed in this chapter. abstract_id: PUBMED:30157883 Effect of radiochemotherapy on T2* MRI in HNSCC and its relation to FMISO PET-derived hypoxia and FDG PET. Background: To assess the effect of radiochemotherapy (RCT) on the proposed tumour hypoxia marker transverse relaxation time (T2*) and to analyse the relation between T2* and 18F-misonidazole PET/CT (FMISO-PET) and 18F-fluorodeoxyglucose PET/CT (FDG-PET). Methods: Ten patients undergoing definitive RCT for squamous cell head-and-neck cancer (HNSCC) received repeat FMISO- and 3 Tesla T2*-weighted MRI at weeks 0, 2 and 5 during treatment and FDG-PET at baseline. Gross tumour volumes (GTV) of tumour (T), lymph nodes (LN) and hypoxic subvolumes (HSV, based on FMISO-PET) and complementary non-hypoxic subvolumes (nonHSV) were generated. Mean values for T2* and SUVmean FDG were determined. Results: During RCT, marked reduction of tumour hypoxia on FMISO-PET was observed (T, LN), while mean T2* did not change significantly. At baseline, mean T2* values within HSV-T (15 ± 5 ms) were smaller compared to nonHSV-T (18 ± 3 ms; p = 0.051), whereas FDG SUVmean was significantly higher for HSV-T (12 ± 6) than for nonHSV-T (6 ± 3; p = 0.026) and higher for HSV-LN (10 ± 4) than for nonHSV-LN (5 ± 2; p ≤ 0.011). Correlation between FMISO PET and FDG PET was higher than between FMISO PET and T2* (R2 for GTV-T (FMISO/FDG) = 0.81, R2 for GTV-T (FMISO/T2*) = 0.32).
However, the study concluded that there is no definitive role for pretreatment FDG PET as a predictor of (chemo)radiotherapy outcome in HNC in daily routine, although further exploration is needed, focusing on FDG PET-based primary tumour volume, integrated SUV, and SUVmax of the primary tumour. Another study by PUBMED:29573279 evaluated the prognostic value of metabolic tumor parameters measured on FDG-PET in patients treated by reirradiation for recurrent head and neck cancer. The study found that the PET-derived parameter metabolic tumor volume (MTV) may serve as a potentially valuable factor for risk stratification and for guiding treatment in future reirradiation trials. PUBMED:36300564 also investigated the correlation between the location of locoregional tumour recurrence and baseline FDG-PET metabolic volumes and mid-treatment FDG-PET metabolic volumes in patients with HNSCC following definitive radiotherapy. The study found that the metabolic parameters of SUV2.5, MTV20%, and MTV40% delineated on the baseline and mid-treatment FDG-PET scans encompassed the majority of recurrences, suggesting that these parameters could be useful for future dose escalation studies. In summary, while FDG PET has shown potential in predicting radiation treatment outcomes in head and neck cancer, its role is not yet fully established, and more research is needed to determine its predictive value and how it can be integrated into routine clinical practice for guiding treatment decisions (PUBMED:21461734, PUBMED:29573279, PUBMED:36300564).
Instruction: Instrumental variables in influenza vaccination studies: mission impossible? Abstracts: abstract_id: PUBMED:19695007 Instrumental variables in influenza vaccination studies: mission impossible?! Objectives: Unobserved confounding has been suggested to explain the effect of influenza vaccination on mortality reported in several observational studies. An instrumental variable (IV) is strongly related to the exposure under study, but not directly or indirectly (through other variables) with the outcome. Theoretically, analyses using IVs to control for both observed and unobserved confounding may provide unbiased estimates of influenza vaccine effects. We assessed the usefulness of IV analysis in influenza vaccination studies. Methods: Information on patients aged 65 years and older from the computerized Utrecht General Practitioner (GP) research database over seven influenza epidemic periods was pooled to estimate the association between influenza vaccination and all-cause mortality among community-dwelling elderly. Potential IVs included in the analysis were a history of gout, a history of orthopaedic morbidity, a history of antacid medication use, and GP-specific vaccination rates. Results: Using linear regression analyses, all possible IVs were associated with vaccination status: risk difference (RD) 7.8% (95% confidence interval [CI] 3.6%; 12.0%), RD 2.8% (95% CI 1.7%; 3.9%), RD 8.1% (95% CI 6.1%; 10.1%), and RD 100.0% (95% CI 89.0%; 111.0%) for gout, orthopaedic morbidity, antacid medication use, and GP-specific vaccination rates, respectively. Each potential IV, however, also appeared to be related to mortality through other observed confounding variables (notably age, sex, and comorbidity). Conclusions: The potential IVs studied did not meet the necessary criteria, because they were (indirectly) associated with the outcome. These variables may, therefore, not be suited to assess unconfounded influenza vaccine effects through IV analysis. abstract_id: PUBMED:35473660 General vaccination willingness and current vaccination status in relation to clinical and psychological variables in patients with multiple sclerosis. Background: Infections can have a significant impact on morbidity and mortality in multiple sclerosis (MS) patients. Therefore, vaccinations are of immense importance. If vaccination willingness is to be increased, possible influencing factors should be identified. The aim of the present study was to investigate the status of active immunisation in MS patients in association with sociodemographic, clinical-neurological, psychopathological and personality variables using the NEO-Five Factor Inventory, the Temperament and Character Inventory-Revised and the Hospital Anxiety and Depression Scale. Method: Four hundred and four MS patients from two German neurological hospitals were examined for their vaccination attitudes, in detail, the general willingness to vaccinate and the current vaccination status of mumps, measles and rubella (MMR) as well as tetanus and influenza. We also looked at the current level of disability in relation to the current vaccination status, as well as possible associated personality and psychopathological variables. Results: Patients with a complete MMR vaccination status were significantly younger and those with a complete influenza vaccination status were significantly older than those with related incomplete vaccination status. 
Tetanus vaccination status completeness did not differ depending on age and did not show substantial association with personality scores. However, influenza vaccination completeness was associated with differences in personality and psychopathological variables; extraversion, openness, novelty seeking, harm avoidance and anxiety. A reported general vaccination willingness was significantly correlated with the current completeness of tetanus and influenza vaccinations. Novelty seeking, persistence, extraversion, agreeableness, conscientiousness and neuroticism were found associated with an increased vaccination willingness. Anxiety and depression were not related to general vaccination willingness. Conclusions: No specific personality trait could be defined on its own in relation to general vaccination willingness or complete vaccination status. Younger patients should be made more aware of influenza vaccination. Reasons for rather low vaccination rates need to be further investigated. abstract_id: PUBMED:36925423 Mediation and instrumental variable analyses for vaccine-induced antibody titer against influenza B. Objective: Immune correlate analyses for vaccine trials have been applied to investigate associations of vaccine efficacy and surrogate markers such as vaccine-induced antibodies. However, the role of antibody as a surrogate marker in predicting the outcome can vary by time, and surrogate-outcome confounding may have resulted in bias even in randomized trials. We provide a framework for surrogate marker assessment to address the aforementioned issues. Study Design And Setting: We reanalyzed the vaccine randomized trial for influenza B. We conducted a mediation analysis that enables estimation of vaccine efficacy, mediation effects and proportion of mediation on disease probabilities at various follow-up times. We proposed instrumental variable (IV) analyses with randomized vaccination as an IV accounting for potential unmeasured confounding. Results: The mediation effect of vaccine efficacy by hemagglutination inhibition (HAI) titer was significantly protective at 181 days after vaccination: 63.2% [95% confidence interval, (CI) = (39.9%, 82.0%)], and HAI titer explained 61.1% [95% CI = (36.7%, 96.2%)] of the protective effect of vaccination. Conclusions: Most of vaccine efficacy is mediated by HAI titer, particularly in children 10 years and older. Our contribution is to provide causal analytics for the role of surrogate marker with weaker assumptions regarding surrogate-disease causation. abstract_id: PUBMED:35632479 Medics as a Positive Deviant in Influenza Vaccination: The Role of Vaccine Beliefs, Self-Efficacy and Contextual Variables. The influenza vaccination rate remains unsatisfactorily low, especially in the healthy adult population. The positive deviant approach was used to identify key psychosocial factors explaining the intention of influenza vaccination in medics and compare them with those in non-medics. Methods: There were 709 participants, as follows: 301 medics and 408 non-medics. We conducted a cross-sectional study in which a multi-module self-administered questionnaire examining vaccination beliefs, risk perception, outcome expectations (gains or losses), facilitators' relevance, vaccination self-efficacy and vaccination intention was adopted. We also gathered information on access to vaccination, the strength of the vaccination habit and sociodemographic variables. 
Results: We used structural equation modeling (SEM) and were able to explain 78% of the variance in intention in medics and 56% in non-medics. We identified both direct and indirect effects between the studied variables. In both groups, the intention was related to vaccination self-efficacy, stronger habits and previous-season vaccination, but access to vaccines was significant only in non-medics. Conclusions: Applying the positive deviance approach and considering medics as positive deviants in vaccination performance extended the perspective on what factors to focus on in the non-medical population. Vaccination promotion shortly before the flu season should target non- or low-intenders and also intenders by the delivery of balanced information affecting key vaccination cognitions. General pro-vaccine beliefs, which may act as implicit attitudes, should be created in advance to build proper grounds for specific outcome expectations and facilitators' recognition. It should not be limited only to risk perception. Some level of evidence-based critical beliefs about vaccination can be beneficial. abstract_id: PUBMED:15516048 Vaccination of general practitioners The vaccination of general practitioners represents significant problems for two reasons: they can be a reservoir of infectious diseases for the patients they are in contact with, and their attitudes towards vaccination can influence their preventative acts and consequently the vaccination coverage of the population. The anti-influenza vaccination of care providers is associated with a decrease in mortality amongst patients. Hepatitis B is one of the most frequent occupational infectious diseases, and vaccination is necessary for all professionals at risk. Concerning whooping cough, an increase in the number of cases can currently be observed. The "Conseil Supérieur d'Hygiène" advises proposing vaccination to people in contact with infants and to people who have not had booster vaccinations in childhood. The rubella vaccination for unprotected women is also recommended for care providers, to avoid the risk of infection and the possible implications for the foetus. The existing data in Belgium on the vaccination coverage of general practitioners are poor but lead us to believe that vaccination coverage against these diseases is insufficient. A study will shortly be undertaken to analyze the vaccination status of Belgian French-speaking doctors and to further analyze their reasoning for any possible non-vaccination. abstract_id: PUBMED:21489733 Current events in vaccination The annual meeting of the Infectious Disease Society of America (IDSA), which brought together nearly 5000 participants from over 80 countries in Vancouver, Canada, October 21 to 24, 2010, provided a review of the influenza (H1N1) 2009 pandemic, evaluated vaccination programmes and presented new vaccines under development. With 12,500 deaths in the United States in 2009-2010, the influenza (H1N1) 2009 pandemic was actually less deadly than the seasonal flu. But it essentially hit the young, and the toll calculated in years of life lost is high. The monovalent vaccines, whether live attenuated or inactivated with or without adjuvants, were well tolerated in toddlers, children, adults and pregnant women. In order to protect infants against pertussis, family members are urged to get their booster shots.
The introduction of the 13-valent Pneumococcal conjugated vaccine in the beginning of 2010 may solve--but for how long?--the problem of serotype replacement, responsible for the re-increasing incidence of invasive Pneumococcal infections observed in countries that had introduced the 7-valent vaccine. The efficacy of a rotavirus vaccine has been confirmed, with a reduction in hospitalization in the United States and a reduction in gastroenteritis-related deaths in Mexico. In the United States, vaccination of pre-adolescents against human papillomavirus (HPV) has not resulted in any specific undesirable effects. Routine vaccination against chicken pox, recommended since 1995, has not had an impact on the evolution of the incidence of shingles. Vaccination against shingles, recommended in the United States for subjects 60 years and over, shows an effectiveness of 55%, according to a cohort study (Kaiser Permanente, Southern California). Although some propose the development of personalized vaccines according to individual genetic characteristics, the priority remains with increasing vaccine coverage, not only in infants but also in adults and the elderly. Vaccine calendars that cover a whole lifetime should be promoted, since the vaccination of adults and seniors is a determining factor of good health at all ages. abstract_id: PUBMED:25884906 Medical students' attitude towards influenza vaccination. Background: Influenza vaccination is recommended for all healthcare personnel (HCP) and most institutions offer vaccination for free and on site. However, medical students do not always have such easy access, and the predictors that might guide the motivation of medical students to get vaccinated are largely unknown. Methods: We conducted a cross-sectional survey study among pre-clinical medical students in a German University hospital to assess the social cognitive predictors of influenza vaccination, as well as reasons for refusal and acceptance of the vaccine. Results: Findings show that pre-clinical medical students have comparable knowledge gaps and negative attitudes towards influenza vaccination that have previously been reported among HCP. Lower injunctive norms and higher feelings of autonomy contribute to no intention to get vaccinated against influenza, while a positive instrumental attitude and higher feelings of autonomy contribute to a high intention to get vaccinated. The variables in the regression model explained 20% of the variance in intention to get vaccinated. Conclusions: The identified factors should be addressed early in medical education, and hospitals might benefit from a more inclusive vaccination program and accessibility of free vaccines for their medical students. abstract_id: PUBMED:32576459 Comparing the impact of high-dose versus standard dose influenza vaccines on hospitalization cost for cardiovascular and respiratory diseases: Economic assessment in the US Veteran population during 5 respiratory seasons using an instrumental variable method. Objectives: Cost savings associated with high-dose (HD) as compared to standard-dose (SD) influenza vaccination in the United States (US) Veteran's Health Administration (VHA) population have been attributed to better protection against hospitalization for cardiac and respiratory diseases. The relative contribution of each of these disease categories to the reported savings remains to be explored. 
Methods: During a recently completed study of HD versus SD vaccine effectiveness (conducted in the VHA over five respiratory seasons from 2010/11 through 2014/15), we collected cost data for all healthcare services provided at both VHA and Medicare-funded facilities. In that analysis, we compared the costs of vaccination and hospital care for patients admitted with either cardiovascular or respiratory disease. Treatment selection bias and other confounding factors were adjusted using an instrumental variable (IV) method. In this brief report we use the same study cohort and methods to stratify the results by patients admitted for cardiovascular disease (CVD) and those admitted for respiratory disease. Results: We analyzed 3.5 million SD and 0.16 million HD person-seasons. The IV-adjusted rVEs were 14% (7-20%) against hospitalizations for CVD and 15% (5-25%) against respiratory hospitalizations. Net cost savings per HD recipient were $138 ($66-$200) for CVD-related hospitalizations and $62 ($10-$107) for respiratory disease-related hospitalizations. Conclusions: In the US VHA population, the reduction in hospitalizations for CVD over five respiratory seasons contributed twice the cost savings (per HD recipient) of the reduction in hospitalizations for respiratory disease. abstract_id: PUBMED:22682490 Vaccination of cystic fibrosis patients Vaccination guidelines in patients with cystic fibrosis associate the usual vaccination schedule with specific recommendations concerning influenza, Pneumococcus, viral hepatitis A and B and varicella. Live attenuated vaccines are not contra-indicated but must be administered before pulmonary transplantation. Clinical studies evaluating vaccine efficacy in this population are still necessary. Development of new vaccines against respiratory infections (Staphylococcus aureus, Pseudomonas aeruginosa and respiratory syncytial virus in particular) is an important challenge for the medical management of these patients. abstract_id: PUBMED:22078037 Pityriasis rubra pilaris after vaccination Background: Pityriasis rubra pilaris (PRP) following vaccination is rarely described in the literature. We report a case of PRP occurring two weeks after measles-mumps-rubella (MMR) vaccination. Case Report: A 17-month-old infant was referred for a rash appearing two weeks previously. The child presented with a diffuse erythematous scaly exanthema with follicular papules and orange palmoplantar keratoderma. The clinical features were highly evocative of PRP. The histology was non-specific, displaying epidermal acanthosis with a regular and thick parakeratosis, and without any impairment of the follicular infundibulum. An MMR vaccination had been given two weeks before onset of the rash. Treatment with topical corticosteroids and emollients proved effective. Discussion: Post-vaccinal PRP is rarely described in the literature. We report only the third case. The first case concerned a 32-year-old woman presenting two episodes of PRP 10 days after diphtheria-tetanus-polio vaccination. The second case concerned a 47-year-old woman presenting PRP 18 days after anti-influenza vaccination and requiring treatment with acitretin. No cases have been described with MMR. These three vaccines (DTP, Tetragrip and ROR) have no shared pharmacological constituents, and the trigger mechanism could be immunological or parainfectious.
Conclusion: Questioning about recent vaccination during history taking appears necessary to assess the importance of this trigger factor as well as the mechanism responsible for the onset of PRP. Answer: The use of instrumental variables (IVs) in influenza vaccination studies has been explored as a method to control for both observed and unobserved confounding, potentially providing unbiased estimates of influenza vaccine effects. However, a study assessing the usefulness of IV analysis in influenza vaccination studies found that potential IVs, such as a history of gout, orthopaedic morbidity, antacid medication use, and GP-specific vaccination rates, were all associated with vaccination status. Unfortunately, each potential IV was also related to mortality through other observed confounding variables, notably age, sex, and comorbidity. This association with the outcome means that these variables did not meet the necessary criteria to be used as IVs, as they were indirectly associated with the outcome. Therefore, these variables may not be suited to assess unconfounded influenza vaccine effects through IV analysis, suggesting that finding suitable IVs in influenza vaccination studies might indeed be a mission impossible (PUBMED:19695007).
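Note: The IV criteria discussed in the answer above (relevance to the exposure; no direct or indirect path to the outcome) can be checked empirically before any IV estimate is trusted. The following minimal Python sketch uses simulated data with invented variable names and coefficients, not the Utrecht GP data; it mirrors the study's checks by regressing the exposure on a candidate instrument (relevance) and regressing observed confounders on the instrument (a red flag for the exclusion restriction), then forming the simple Wald-type IV estimate.

```python
# Hypothetical sketch of the instrument checks described above, on
# simulated data (all names and coefficients are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulate an elderly cohort: age and comorbidity confound both
# vaccination (exposure) and mortality (outcome).
age = rng.normal(75, 6, n)
comorbidity = rng.binomial(1, 0.3, n)

# Candidate instrument, e.g. a GP-specific vaccination rate. A valid IV
# must predict vaccination but be independent of the confounders; here
# we deliberately leak age into it to reproduce the paper's failure mode.
instrument = 0.5 + 0.004 * (age - 75) + rng.normal(0, 0.1, n)

vaccinated = (0.3 * instrument + 0.02 * (age - 75)
              + 0.2 * comorbidity + rng.normal(0, 1, n)) > 0
mortality = (0.03 * (age - 75) + 0.5 * comorbidity
             - 0.1 * vaccinated + rng.normal(0, 1, n)) > 1

def ols(y, X):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y.astype(float), rcond=None)
    return beta

# Check 1 (relevance): the instrument should predict the exposure.
print("exposure ~ instrument slope:", ols(vaccinated, instrument)[1])

# Check 2 (exclusion red flag): it should NOT track observed confounders.
print("age ~ instrument slope:", ols(age, instrument)[1])
print("comorbidity ~ instrument slope:", ols(comorbidity, instrument)[1])

# Wald-type IV estimate (reduced form / first stage); biased here
# precisely because check 2 fails.
iv_effect = ols(mortality, instrument)[1] / ols(vaccinated, instrument)[1]
print("IV estimate of vaccination effect:", iv_effect)
```

In this simulation the instrument deliberately tracks age, so the confounder regressions come back clearly non-zero and the IV estimate is biased, reproducing the failure mode reported in PUBMED:19695007.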
Instruction: Is the prognosis and course of acral melanoma related to site-specific clinicopathological features? Abstracts: abstract_id: PUBMED:27010139 Is the prognosis and course of acral melanoma related to site-specific clinicopathological features? Objective: Acral melanoma is an uncommon type of melanoma in Caucasian patients. However, acral melanoma is the most common type of melanoma in African and Asian patients. Comparison analyses between hand-acral melanoma and foot-acral melanoma have rarely been reported in the literature. Acral melanoma is an uncommon melanocytic tumor characterized by an intrinsic aggressiveness, with specific histological and clinicopathological features. Acral melanoma involves the palms, soles and subungual sites. Patients And Methods: A total of 244 patients with acral melanoma were included in our analysis. The current study was performed in three different medical centers: Sapienza University of Rome, San Gallicano Institute of Rome and University of Magna Graecia (Italy). The Kaplan-Meier product-limit method was used to estimate survival curves for disease-free survival and overall survival. The log-rank test was used to evaluate differences between the survival curves. Assuming that the effects of the predictor variables are constant over time, the independent predictive factors were assessed by Spearman's test, and the data were subsequently analyzed by Cox proportional-hazards regression. Results: In both univariate and multivariate analyses, Breslow thickness (p < 0.0001) and ulceration (p = 0.003) remained the main predictors. General BRAF mutation was detected in 13.8% of cases. We found that the median Breslow value and the percentage of recurrences were similar in hand-acral melanoma and foot-acral melanoma, and there were no differences in either the short or the long term. Conclusions: The absence of differences in survival between hand-acral melanoma and foot-acral melanoma shows that the aggressiveness of the disease is related to a distinct mutational rate, as well as to anatomical site-specific features, rather than to the visibility of the primary lesion. abstract_id: PUBMED:32424819 Clinicopathological characteristics and prognosis of amelanotic acral melanoma: A comparative study with pigmented acral melanoma. We compared the clinicopathological features and prognosis of 16 amelanotic acral melanomas versus 56 pigmented acral melanomas. Amelanotic acral melanomas showed a greater Breslow thickness (all had a Breslow thickness > 1 mm), more frequent ulceration (15/16, 93.7%) and a lower HMB-45 positivity rate than pigmented acral melanomas. However, a significant difference in survival was not observed. abstract_id: PUBMED:37404434 Clinicopathological Profile of a Cohort of Patients With Malignant Melanoma in the United Kingdom. Introduction Malignant melanoma (MM) is a potentially fatal type of skin cancer and a major health concern for the Caucasian population. It is a heterogeneous disease with a wide spectrum of manifestations. Therefore, in this study, we evaluated the clinicopathological characteristics of MM. Methods We retrospectively studied the clinicopathological characteristics of MM in 167 biopsy-proven cases of MM reported between January 2020 and December 2021 at Kings Mill Hospital, Sutton-in-Ashfield, United Kingdom. Clinical data such as the age, sex, and anatomical site of the lesion were obtained from the clinical referral forms.
Biopsies of the lesions were performed, and the specimens collected were sent to the laboratory for histopathological study and v-raf murine sarcoma viral oncogene homolog B1 (BRAF) mutation evaluation. Formalin-fixed paraffin-embedded blocks (FFPE) were prepared, sectioned, and stained with hematoxylin and eosin for histological examination. Results A total of 167 cases of MM were included in the study. The age range was 23-96 years, with the median age at diagnosis found to be 66 years; males were more commonly affected (52.1%). The median Breslow thickness was 1.20 mm. The median mitotic activity was 1.0/mm2. The lower limb was the most common site of involvement (27.5%), followed by the thorax (25.1%). The most common histological subtype was superficial spreading melanoma (SSM) (77.8%), followed by nodular melanoma (14.4%). The in situ component was present in 95.8% of cases; a majority (92.2%) of the cases showed vertical growth phase, 71.9% of cases were at Clark's level IV of invasion, regression was noted in 70.7% of cases, ulceration was present in 21.6% of cases, and microsatellites were present in 3% of cases. Perineural invasion was present in 3% of cases, and lymphovascular invasion (LVI) was present in 4.2% of cases. BRAF mutation testing was performed on 36 cases, out of which 20 cases (55.6%) showed BRAF mutation. Acral lentiginous melanoma and nodular melanoma were most likely to show ulceration (66.7% and 37.5%, respectively). SSM and lentigo maligna melanoma were more likely to be associated with regression. Conclusion The study demonstrated that MM is prevalent among the elderly population with male predominance; SSM was found to be the most common subtype. The study further demonstrated various clinicopathological features of MM and its association with histological subtypes. abstract_id: PUBMED:25660506 Melanoma of the lower extremities: foot site is an independent risk factor for clinical outcome. Background: Despite the better prognosis of melanomas localized on lower extremities, some studies have suggested that melanomas on the foot are related to a poorer survival and should be considered separately. Objective: To review our case series of cutaneous melanomas on the lower extremities and to analyze the clinicopathological association, time course, types of progression, and survival differences. Methods: We included 1671 patients (stage 0-II) with a cutaneous melanoma on the lower extremities (subungual melanomas were excluded). Of these, 327 were localized on the foot. Multivariate analyses were performed to evaluate disease-specific survival and disease-free interval. Results: Distribution of known prognostic factors and patterns of progression of foot and leg melanoma differ across genders. The foot site was confirmed as a negative independent prognostic factor on disease-specific survival and disease-free interval. Conclusion: Foot melanoma could represent a particular subgroup, which could require specific management in the future. abstract_id: PUBMED:26437541 Clinicopathological Features and Prognostic Factors of Malignant Melanoma: A Retrospective Analysis of Thai Patients in Ramathibodi Hospital. Objective: To present the clinicopathological characteristics of cutaneous melanoma in Thai patients and to define the possible clinicopathological prognostic factors. Material And Method: A retrospective study of primary cutaneous melanoma in Thai patients in Ramathibodi Hospital between January 2007 and December 2012 was conducted. 
All medical records and skin biopsies were reviewed for demographic data and histopathological features. Univariate and multivariate analyses for overall survival and prognostic factors, according to clinical and histopathological features, were performed. Results: Forty-three patients with pathologically confirmed primary cutaneous melanoma were identified and reviewed. The median age of onset was 58 years, and the male:female ratio was 1:1.05. Acral lentiginous melanoma (ALM) was the most common type (76.7%). The majority of patients had clinical stage II (56.1%). Histologically, the median tumor thickness was 2.9 mm, 88.2% had Clark's level IV and V, 47.1% were ulcerated, and 76.5% had a dermal mitotic rate of ≥ 1 mitosis/mm². The 5-year overall survival rate was 38.3%. Univariate analysis demonstrated that clinical stage IV, Breslow's thickness of > 3 mm, and a dermal mitotic rate of ≥ 3 mitoses/mm² were bad prognostic factors. Multivariate analysis demonstrated that advanced clinical staging (stage III and IV), Breslow's thickness of > 3 mm, ulceration, palmoplantar or subungual site, and the histologic subtype of ALM were the independent risk factors for poor prognosis. Conclusion: Most Thai patients with cutaneous melanoma had the histologic subtype of ALM and were diagnosed with locally advanced disease (stage II). The prognosis depends on clinical staging, Breslow's thickness, ulceration, primary location of the tumor and histologic subtype. abstract_id: PUBMED:37805391 Clinicopathological and prognostic features of subungual melanoma in situ Objective: To investigate the clinicopathological characteristics, immunohistochemical profiles, molecular features, and prognosis of subungual melanoma in situ (SMIS). Methods: Thirty cases of SMIS were collected in Fudan University Shanghai Cancer Center, Shanghai, China from 2018 to 2022. The clinicopathological characteristics and follow-up data were retrospectively analyzed. Histopathologic evaluation and immunohistochemical studies were carried out. FISH analysis was performed using the Vysis melanoma fluorescence in situ hybridization (FISH) probe kit combined with 9p21 (CDKN2A) and 8q24 (MYC) assays. Results: There were 8 males and 22 females. The patients' ages ranged from 22 to 65 years (median 48 years). All patients presented with longitudinal melanonychia involving a single digit. The thumb was the most commonly affected digit (16/30, 53.3%). 56.7% (17/30) of the cases presented with Hutchinson's sign. Microscopically, melanocytes proliferated along the dermo-epithelial junction. Hyperchromatism and nuclear pleomorphism were two of the most common histological features. The melanocyte count ranged from 30 to 185. Most cases showed small to medium nuclear enlargement (29/30, 96.7%). Pagetoid spread was seen in all cases. Intra-epithelial mitoses were identified in 56.7% (17/30) of the cases. Involvement of the nailfold was found in 19 cases, 4 of which were accompanied by cutaneous adnexal extension. The positive rates of SOX10, PNL2, Melan A, HMB45, S-100, and PRAME were 100.0%, 100.0%, 96.0%, 95.0%, 76.9%, and 83.3%, respectively. FISH analysis was positive in 6/9 of the cases. Follow-up data were available in 28 patients, and all of them were alive without disease. Conclusions: SMIS mainly shows small to medium-sized cells. A high melanocyte count, hyperchromatism, nuclear pleomorphism, Pagetoid spreading, intra-epithelial mitosis, nailfold involvement, and cutaneous adnexal extension are important diagnostic hallmarks.
Immunohistochemistry including SOX10 and PRAME, combined with FISH analysis, is valuable for the diagnosis of SMIS. abstract_id: PUBMED:27545198 Acral Melanoma in Chinese: A Clinicopathological and Prognostic Study of 142 cases. Acral melanoma (AM), as a peculiar subgroup of melanoma, is rare in Caucasians but has a higher incidence in Asians. Large series studying the clinicopathological features and prognostic factors of AM are still limited, especially in Asian populations. We retrospectively collected clinical, pathological and follow-up data of 142 AM cases. All patients were Chinese, with ages ranging from 24 to 87 years (mean 62.0; median 62.0). The Breslow thickness of primary lesions ranged from 0.6 to 16.3 mm (mean 4.9; median 3.7). 85.9% of the patients had the acral lentiginous histologic subtype. The plantar region was the most frequently involved site, followed by the heels. Statistically, duration of the lesion before diagnosis (≤2.5 years), Breslow thickness >4.0 mm (T4), high mitotic index (>15/mm²), presence of vascular invasion, regional lymph node metastasis at diagnosis and pathologic stage (II/III/IV) were found to be independent prognostic factors in both univariate and multivariate analyses. The prognosis of AM in Chinese patients is extremely poor. Our 5- and 10-year disease-specific survival (DSS) rates were 53.3% and 27.4%, respectively. Therefore, AM in Asians represents a more biologically aggressive melanoma subtype and is thought to carry a worse prognosis when compared with other races or cutaneous melanomas in other anatomic sites. abstract_id: PUBMED:35878586 Clinicopathologic Characteristics of Trauma-Related Nail Apparatus Melanoma: A Comparative Study according to the Presence of Trauma prior to Melanoma Development. Background: Nail apparatus melanoma (NAM) is a subtype of cutaneous melanoma occurring at nail units and belongs to the acral lentiginous melanoma subgroup. Because of the unique anatomical structure protecting the acral site, mechanical trauma may have a clinicoprognostic impact on NAM. Therefore, we investigated the clinicoprognostic and histopathological characteristics of NAM according to the presence of trauma history prior to melanoma development. Methods: Clinicopathological and follow-up data of patients with NAM according to trauma history were obtained. Results: We included 87 patients with NAM, 21.8% of whom had a previous trauma history. Trauma-related NAMs were more likely to involve the toenail (p = 0.040), include a high proportion of amelanotic melanomas (p = 0.038) as well as nail bed tumors (p = 0.013), and have a longer time interval between the onset of nail change and confirmed diagnosis (p = 0.012). Moreover, survival analysis revealed that trauma-related NAMs more frequently showed progression in general (p = 0.034) and nodal metastasis (p = 0.047) and had a worse prognosis in terms of progression-free survival (p = 0.004). Conclusion: In conclusion, NAMs with previous trauma have unique clinicoprognostic characteristics. The specific clinicopathological features of NAMs according to trauma indicate that trauma may play a role in melanoma development. abstract_id: PUBMED:33955704 Expression of OPN3 in acral lentiginous melanoma and its association with clinicohistopathologic features and prognosis. Background: OPN3 upregulation associated with metastasis was recently described in two subtypes of lung cancers. OPN3 has also been identified in light-independent functions in epidermal melanocytes and has already shown promise.
However, in malignant melanocytic tissues, the expression and role of OPN3 remain uncharacterized. Objectives: We investigated the clinico-histopathologic features in relation to OPN3 expression of acral lentiginous melanoma (ALM), which is a rare cutaneous melanoma subtype not associated with prior sunlight exposure. Methods: In all, 84 samples of junctional melanocytic nevi (JMN, n = 12), primary ALMs (n = 39) and inguinal lymph node metastases (ILNM, n = 23) from ALMs were evaluated for the immunohistochemical expression of OPN3. OPN3 messenger RNA and protein levels were further determined in melanocytic tumors using quantitative real-time PCR, multi-immunofluorescence and Western blot assays. We also estimated the associations of OPN3 expression with clinicopathological features and prognosis. Results: ILNMs, in contrast to JMN and ALMs, had higher OPN3 expression scores (p < .001) by immunohistochemistry analysis. A high OPN3 score was associated with the presence of ulceration, increased Breslow depth and Clark level (p = .025, p = .042, and p = .012, respectively). Furthermore, a remarkable difference (p = .037) in patient overall survival was found when comparing groups with an OPN3 immunohistochemical score of at least 100 versus below 100. Also, Cox regression models showed that high OPN3 scores were associated with worse melanoma survival. Conclusion: High OPN3 expression is significantly associated with ALMs and a metastatic phenotype as well as a poor prognosis. abstract_id: PUBMED:33174932 Clinicopathological analysis of acral melanoma in a single center: a study of 45 cases. Objective: The relationship between the clinicopathological and sociodemographic characteristics of acral melanomas diagnosed at the Hospital do Servidor Público Estadual de São Paulo between 1997 and 2016 was analyzed. Methods: An observational, descriptive, and retrospective study of patients diagnosed with acral melanoma was performed at Hospital do Servidor Público Estadual de São Paulo. Sociodemographic and clinicopathological characteristics were collected and analyzed. Results: Forty-five patients with acral melanomas were found during the evaluation period. Thirty-one were females, and 14 were males (68.89% and 31.3%, respectively). Most of the cases were invasive (88.37%), and the predominant histological subtype was the acral lentiginous (91.11%). The plantar region was the most prevalent (64.44%). The median Breslow index was 3 mm, and there was a tendency towards greater severity in male patients. Conclusions: Acral site melanomas are diagnosed when they reach more advanced stages, which leads to a worse prognosis for patients. Late detection assumes even greater importance in highly mixed and black populations, such as the Brazilian population. Answer: Yes, the prognosis and course of acral melanoma are related to site-specific clinicopathological features.
For instance, amelanotic acral melanomas, which lack pigmentation, have been found to have a greater Breslow thickness and more frequent ulceration compared to pigmented acral melanomas, although no significant difference in survival was observed between the two (PUBMED:32424819). The site of the melanoma also plays a role in the prognosis. Melanomas on the foot have been identified as having a poorer survival compared to other lower extremity melanomas, suggesting that foot site is an independent risk factor for clinical outcome (PUBMED:25660506). Additionally, acral lentiginous melanoma (ALM) and nodular melanoma are more likely to show ulceration, which is a factor associated with a worse prognosis (PUBMED:37404434). In the context of subungual melanoma, which is a subtype of acral melanoma, the presence of trauma history prior to melanoma development has been associated with unique clinicoprognostic characteristics, including a worse prognosis in terms of progression-free survival (PUBMED:35878586). Furthermore, the expression of OPN3 in acral lentiginous melanoma has been linked to a poor prognosis, with high OPN3 expression significantly associated with a metastatic phenotype and worse survival (PUBMED:33955704). Overall, the prognosis and course of acral melanoma are influenced by a combination of factors including Breslow thickness, ulceration, site of the melanoma, presence of trauma, and molecular features such as OPN3 expression. These findings underscore the importance of considering site-specific clinicopathological features when assessing the prognosis and managing acral melanoma (PUBMED:27010139; PUBMED:32424819; PUBMED:25660506; PUBMED:37404434; PUBMED:35878586; PUBMED:33955704).
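Note: Several of the melanoma cohorts above report the same statistical workflow: Kaplan-Meier survival curves, a log-rank comparison between groups, and multivariate Cox proportional-hazards regression for predictors such as Breslow thickness and ulceration. The sketch below illustrates that workflow on simulated follow-up data using the Python lifelines library; all column names, sample sizes, and coefficients are invented for illustration and do not come from the studies.

```python
# Illustrative Kaplan-Meier / log-rank / Cox workflow on simulated data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "breslow_mm": rng.gamma(2.0, 2.0, n),    # tumor thickness in mm
    "ulceration": rng.binomial(1, 0.4, n),   # 0/1 indicator
})
# Simulate survival times where thickness and ulceration raise the hazard.
hazard = 0.02 * np.exp(0.15 * df["breslow_mm"] + 0.5 * df["ulceration"])
df["months"] = rng.exponential(1.0 / hazard)
df["event"] = (df["months"] < 120).astype(int)   # censor at 10 years
df.loc[df["event"] == 0, "months"] = 120.0

# Kaplan-Meier curves by ulceration status, compared with a log-rank test.
groups = [df[df["ulceration"] == g] for g in (0, 1)]
km = KaplanMeierFitter()
for label, sub in zip(("no ulceration", "ulceration"), groups):
    km.fit(sub["months"], event_observed=sub["event"], label=label)
    print(label, "median survival (months):", km.median_survival_time_)
lr = logrank_test(groups[0]["months"], groups[1]["months"],
                  event_observed_A=groups[0]["event"],
                  event_observed_B=groups[1]["event"])
print("log-rank p-value:", lr.p_value)

# Multivariate Cox proportional-hazards model, as in the cohort analyses.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```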
Instruction: Is sensitization to furry animals an independent allergic phenotype in nonoccupationally exposed individuals? Abstracts: abstract_id: PUBMED:21462804 Is sensitization to furry animals an independent allergic phenotype in nonoccupationally exposed individuals? Background: Patients sensitized to common pets (cat, dog) frequently display an immunoglobulin (Ig) E-mediated response to allergens from other animals. Objective: To evaluate whether individuals sensitized to common pets might be at higher risk of developing allergic sensitization to other mammalian allergens. Methods: The study population comprised 900 consecutive patients (300 individuals sensitized to different allergens including those of cat and dog [group A], 300 sensitized to allergens other than those derived from cat and dog [group B], and 300 nonsensitized individuals [group C, controls]). All patients underwent a physical examination, an interview (clinical history, pet ownership, possible exposure data), and skin prick test (SPT) with a standard panel of allergens including cat, dog, horse, rabbit, rat, mouse, guinea pig, hamster, and cow. Results: A significant difference in allergic sensitization to mammalian allergens was observed in group A compared with group B (244 vs. 17, respectively). No sensitization was found in group C. Conclusion: Since sensitization to pet allergens increases the risk of developing allergy to other furry animals, we suggest performing SPTs with several mammalian allergens to identify allergic sensitization and thus prevent future exposure in individuals who are highly sensitized and environmentally exposed to common pets. abstract_id: PUBMED:28294591 Letter to the Editor: Can dog allergen immunotherapy reduce concomitant allergic sensitization to other furry animals? A preliminary experience. Summary: It has been shown that allergen immunotherapy (AIT) is effective in reducing symptoms of allergic asthma and rhinitis. Data on the efficacy are less convincing with regard to AIT for allergens of common pets (cats/dogs). We describe a case of dog allergy in which we explored whether dog AIT (DAI) could reduce a concomitant allergic sensitization to other allergens of furry animals. Our case demonstrates the efficacy of sublingual DAI on SPTs, symptom score, and spirometric responses despite persistent exposure to dog allergens at home in a patient sensitized, but not exposed, to several other furry animals. Moreover, this is the first report suggesting that DAI is able to reduce SPT responses not only to dog, but also to other furry animals such as rabbit, horse, mouse, rat, hamster, and cow. We recommend an accurate anamnesis and diagnosis of dog allergy before prescribing DAI. In particular, the use of ImmunoCAP ISAC is essential to verify the presence of IgE to lipocalins/albumins belonging to other furry animals. Obviously, further studies using different DAI schedules, allergen amounts, re-evaluation times, and laboratory procedures should be performed to confirm our findings. abstract_id: PUBMED:32516776 Sensitization to Furry Animals and Clinical Relevance of House Dust Mite-Induced Allergic Rhinitis in Guangzhou, China. Introduction: The impact of furry animal allergens on house dust mite (HDM)-induced allergic rhinitis (AR) is unclear. Objective: We aimed to investigate the co-sensitization and cross-sensitization of furry animal allergens and assess their clinical relevance with HDM-induced AR.
Methods: We enrolled 268 patients with HDM-induced AR who had positive skin prick tests for dogs and/or cats. Specific immunoglobulin E (sIgE) for dogs (e1) and cats (e2), their components (Can f 1-5 and Fel d 1-2), and other uncommon furry animal extracts were measured. Symptoms and quality of life were assessed with a visual analog scale (VAS). Results: The VAS scores for the AR and asthma (AS; n = 166), moderate-to-severe persistent-AR (n = 132), and e1P (positive)-e2P (n = 89) groups were higher than those for single AR (n = 102), other AR classifications, and other AR sensitization profiles, respectively. The IgE positivity rates for components such as Can f 1-3 and Fel d 2 and those for rats, sheep, mice, cows, and horses were highest in e1P-e2P patients. Can f 1-4, Fel d 1, Fel d 2, or the combined allergens were positively correlated with VAS scores. AR combined with AS and sensitization to Can f 4, Fel d 1, or mice were risk factors for HDM-induced AR with VAS scores ≥5. Conclusions: Extensive cross-sensitization or co-sensitization was found between Can f 1-3, Fel d 2, or rat, sheep, mouse, cow, and horse extracts. Higher sIgE levels for Can f 1-4 and Fel d 1-2 or a higher number of furry animal allergens lead to more severe symptoms and a reduced quality of life. Combined with AS, sensitization to Can f 4, Fel d 1, or mice were risk factors for moderate-to-severe HDM-induced AR. abstract_id: PUBMED:34707374 Serum Albumin as a Cross-Reactive Component in Furry Animals May Be Related to the Allergic Symptoms of Patients with Rhinitis. Background: The prevalence of allergies has increased significantly in the past decade. Further research on allergic diseases caused by furry animals is of great importance for the clinical prevention, diagnosis and treatment of allergies. Objective: To identify the sensitization profile and clinical association of various furry animal crude extracts and components based on component resolved diagnosis (CRD). Methods: A total of 211 patients with allergic rhinitis with sensitivities to cats and/or dogs were recruited, and the specific immunoglobulin E (sIgE) against various furry animals (such as dog/cat extracts and their components, and pigeon, parrot, duck, chicken, sheep, rat, mouse, goose, cow and horse extracts) was measured to analyze the sensitization profiles, cross-reactivity and clinical relevance with regard to allergies. Results: A total of 91.67% of cat-sensitized patients were sensitive to Fel d 1, while only 16.03% of cat-sensitized patients responded to Fel d 2. Can f 1 and Can f 5 were the major components of dogs, and the positive rates were 23.53% and 16.18%, respectively. Twenty percent of patients were sensitized to 10 other furry animals, and the positive rate was between 0% and 19.12%. There was a significant correlation between components (Can f 1-5 and Fel d 2) and 5 furry animals (mouse, sheep, horse, rat, cow), especially between serum albumin (SA) (Can f 3, Fel d 2) and furry animals. Sensitization rates to most animal crude extracts and components were significantly higher in patients who were SA-positive than in patients who were SA-negative. In particular, sensitization rates to mice, sheep, horses, rats and cows were more than 10-fold higher in patients who were SA-positive than in patients who were SA-negative. The VAS scores for symptoms and quality of life (QoL) in the SA-sensitized patients were higher than those in unsensitized patients, and the patients with lipocalin sensitivities had a worse QoL.
Conclusion: Serum albumins Fel d 2 and Can f 3, as minor allergens in cats and dogs, but not lipocalin or prostatic kallikrein, are associated with sensitization to other furry animals, presumably due to serum albumin cross-reactivity. Patients sensitized to serum albumin had a significantly higher risk of sensitization to other animals and had a higher rhinitis VAS score. abstract_id: PUBMED:27326390 Sensitization to Common and Uncommon Pets or Other Furry Animals: Which May Be Common Mechanisms? Exposure to animal allergens constitutes a relevant risk factor for the development of allergic sensitization. Moreover, an increasing number of people become owners of less common animals. In this article we summarize aspects related to sensitization to cat/dog which may also apply to uncommon pets or other furry animals. The data discussed here suggest that several different factors may induce allergic sensitization to furry animals with or without previous contact. Allergic sensitization without animal exposure is a relevant risk for patients because they are not aware of the possibility that even severe respiratory symptoms may develop after an occasional animal contact. This aspect should be taken into account by susceptible individuals before acquiring pets or beginning contact for working/leisure activities with common as well as uncommon animals. As a consequence, skin prick tests and/or evaluation of specific IgE antibodies (by the classic ImmunoCAP or the micro-array technique ImmunoCAP ISAC), including to less common ("new") mammalian allergens, could be recommended in individuals already sensitized to common pets to identify the occurrence of allergic sensitization and consequently to avoid future exposures to uncommon animal allergens. abstract_id: PUBMED:26357003 What could be the role of molecular-based allergy diagnostics in detecting the risk of developing allergic sensitization to furry animals? Although this highly refined diagnostic approach has been used in several fields of allergy diagnosis, we noticed the scarcity of data on the role of CRD in detecting current sensitization to the allergens of common pets (cat/dog) and, especially, its potential usefulness in predicting the risk of sensitization to other furry animals. Reported data suggest that cross-reacting mechanisms might play an important role in a significant proportion of allergic sensitizations to furry animals (common pets and unusual/exotic mammals), especially in the absence of any possible direct/indirect contact. In this context, an evaluation of specific IgE by using the micro-array technique ImmunoCAP ISAC (Thermofisher Scientific - Immuno-Diagnostics, Sweden) for lipocalins (Can f 1, Can f 2, Equ c 1, Fel d 4, Mus m 1) and albumins (Bos d 6, Can f 3, Equ c 3, Fel d 2) might be very useful to evaluate the possibility of cross-reactions between the allergens of different animals. In fact, allergic sensitization without animal exposure is a relevant risk for patients, because they are not aware of the possibility that even severe respiratory symptoms may develop after an occasional animal contact. This aspect should be taken into account by susceptible individuals before acquiring new pets, after removal of common pets, or before beginning contact for working/leisure activities with common as well as uncommon animals.
The prevalence of allergy to furry animals has been increasing, and allergy to cats, dogs, or both is considered a major risk factor for the development of asthma and rhinitis. An important step forward in the diagnosis of allergy to furry animals has been made with the introduction of molecular-based allergy diagnostics. A workshop on furry animals was convened to provide an up-to-date assessment of our understanding of (1) the exposure and immune response to the major mammalian allergens, (2) the relationship of these responses (particularly those to specific proteins or components) to symptoms, and (3) the relevance of these specific antibody responses to current or future investigation of patients presenting with allergic diseases. In this review, research results discussed at the workshop are presented, including the effect of concomitant exposures from other allergens or microorganisms, the significance of the community prevalence of furry animals, molecular-based allergy diagnostics, and a detailed discussion of cat and dog components. abstract_id: PUBMED:32986999 Allergic sensitization pattern of patients in Brazil. Objective: Allergic sensitization is one of the key components for the development of allergies. Polysensitization seems to be related to the persistence and severity of allergic diseases. Furthermore, allergic sensitization has a predictive role in the development of allergies. The aim of this study was to characterize the pattern of sensitization of atopic patients treated at different pediatric allergy referral centers in Brazil. Methods: A nationwide cross-sectional multicenter study collected data on patients attended in Brazil. Peripheral blood samples were collected to determine the serum levels of allergen-specific IgE. If allergen-specific IgE was higher than 0.1 kUA/L, the following specific components were quantified. Results: A total of 470 individuals were enrolled in the study. Mite sensitization was the most frequent kind in all participants. A high frequency of sensitization to furry animals and grasses featured in the respiratory allergies. Regarding components, there was a predominance of sensitization to Der p 1 and Der p 2. It was verified that having a food allergy, atopic dermatitis, or multimorbidity is a risk factor for the development of more severe allergic disease. Conclusion: Studies on the pattern of allergic sensitization in a specific population offer tools for the more effective prevention, diagnosis, and treatment of allergic diseases. Sensitization to house dust mites was the most prevalent in the evaluated sample. High rates of sensitization to furry animals also stand out. Patients with food allergy, atopic dermatitis, or multimorbidity appear to be at greater risk for developing more severe allergic diseases.
Combined with the questionnaires and animal-specific IgE levels, we analyzed the epidemiological characteristics and risk factors of furry animal sensitization in different regions and populations of the Chinese mainland. Results: Among the 2377 patients with suspected allergic diseases, 14.9% were sensitized to cat dander, 9.3% to dog dander and 5.5% to horse dander. Animal allergens mainly caused low-level sensitization (class 1-3). There was a significant correlation between cat, dog and horse dander allergen sIgE, with correlation coefficients (rs) all greater than 0.750. The majority (46.5%) of sIgE-positive patients were sensitized to at least two animal allergens simultaneously. The prevalence of sensitization to these animals first increased and then decreased with age, peaking in young adolescence. As for geographical factors, the sensitization rates for cats and dogs in western China were significantly higher than those in eastern China (cat: 18.4% vs 9.2%, dog: 11.6% vs 5.5%, all P < 0.001); however, no significant difference was found for horses. Age ≤6 years, living in western China, and high animal allergen exposure elevated the risk of cat or dog sensitization. Living on a lower floor (aOR: 0.56) was a protective factor for horse sensitization. Conclusion: In this representative sample of Chinese patients with suspected allergic diseases, there was an apparent geographic variation in sensitization to cats and dogs. Age and living in western China also had a significant impact on animal allergen sensitization rates. Answer: Yes, sensitization to furry animals can be considered an independent allergic phenotype in nonoccupationally exposed individuals.
A study comprising 900 consecutive patients found a significant difference in allergic sensitization to mammalian allergens between individuals sensitized to common pets (cat, dog) and those sensitized to other allergens, suggesting that sensitization to pet allergens increases the risk of developing allergies to other furry animals (PUBMED:21462804). This indicates that individuals who are sensitized to common pets like cats and dogs might be at a higher risk of developing allergic sensitization to other mammalian allergens, even if they are not exposed to these animals in an occupational setting. Furthermore, a case of dog allergy demonstrated that dog allergen immunotherapy (DAI) could reduce skin prick test (SPT) responses not only to dog allergens but also to other furry animals such as rabbit, horse, mouse, rat, hamster, and cow, suggesting a potential cross-reactive mechanism (PUBMED:28294591). This cross-reactivity is supported by findings that extensive cross-sensitization or co-sensitization was observed between certain allergens from dogs and cats and those from rats, sheep, mice, cows, and horses (PUBMED:32516776). Additionally, serum albumin, a minor allergen in cats and dogs, has been associated with cross-reactivity to other furry animals, and patients sensitized with serum albumin had a significantly higher risk of sensitization to other animals and had higher rhinitis visual analog scale (VAS) scores (PUBMED:34707374). This suggests that sensitization to furry animals may not only be an independent allergic phenotype but also involve cross-reactive components that can lead to sensitization to a broad range of furry animals beyond direct exposure. In summary, sensitization to furry animals is an independent allergic phenotype that can occur in nonoccupationally exposed individuals, and it may involve cross-reactive mechanisms that increase the risk of sensitization to a variety of furry animals.
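Note: The headline comparison in PUBMED:21462804 (244 patients in group A vs. 17 in group B showing sensitization to additional mammals) can be summarized as an odds ratio with a confidence interval. The short Python sketch below does this with the standard log-odds normal approximation; treating 244/300 and 17/300 as the group denominators is our reading of the abstract, not an explicitly stated 2x2 table.

```python
# Odds ratio and 95% CI for the group A vs. group B comparison,
# assuming 300 patients per group as described in the abstract.
import math

a, b = 244, 300 - 244   # group A: sensitized / not sensitized to other mammals
c, d = 17, 300 - 17     # group B: sensitized / not sensitized to other mammals

odds_ratio = (a * d) / (b * c)
# Standard error of log(OR) via the Woolf (log-odds) approximation.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```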
Instruction: Should functional sensitivity of a new thyroid stimulating hormone immunoassay be monitored routinely? Abstracts: abstract_id: PUBMED:28551133 Enzyme-free chemiluminescence immunoassay for the determination of thyroid stimulating hormone. Based on the chemiluminescent resonance energy transfer (CRET) from the high-energy intermediate formed in the 1,1'-oxalyldiimidazole chemiluminescence (ODI-CL) reaction to a fluorescent microsphere-conjugated detection antibody, a cost-effective and highly sensitive enzyme-free sandwich immunoassay with chemiluminescence detection was developed for the early diagnosis of thyroid cancer. Thyroid stimulating hormone (TSH) primary antibody immobilized on a paramagnetic bead rapidly captures TSH in human serum within 15 min. Also, a TSH detection antibody-conjugated yellow-green microsphere, capable of emitting green light, rapidly interacted with TSH bound to the TSH primary antibody immobilized on the paramagnetic bead during a 30-min incubation at room temperature. The sandwich complexes immobilized on paramagnetic beads in PBS buffer (pH 7.4) emit bright green light with the addition of ODI-CL reagents (e.g., H2O2, ODI). The dynamic range of the enzyme-free sandwich immunoassay was 0.037-18 μIU ml-1 with a 3-5% coefficient of variation. The limit of detection (LOD = background + 3σ) of the enzyme-free sandwich immunoassay was as low as 0.011 μIU ml-1. We expect that the enzyme-free sandwich immunoassay with ODI-CL detection can be a new assay method for the early diagnosis of thyroid cancer with statistically acceptable accuracy, precision, and reproducibility. abstract_id: PUBMED:29604521 Application of antibody-nanogold-ionic liquid-carbon paste electrode for sensitive electrochemical immunoassay of thyroid-stimulating hormone. A novel electrochemical immunosensor based on a carbon paste electrode (CPE) composed of ionic liquid (IL) and graphite was constructed. It demonstrated good efficiency for quick (each test in 30 s) determination of thyroid stimulating hormone (TSH). The electrode surface was modified by gold nanoparticles in order to immobilize the thyroid stimulating hormone antibody (anti-TSH) on the CPE. The immunoassay structure was established by sandwiching the antigen (TSH) between the thyroid stimulating hormone antibody on the CPE surface modified with gold nanoparticles and the secondary antibody, polyclonal anti-human-TSH labeled with horseradish peroxidase (HRP-labeled anti-TSH). The signal of differential pulse voltammetry (DPV) was used as the basis for the determination of TSH concentration. This signal is generated by the reaction between O-aminophenol (OAP) and H2O2 catalyzed by HRP. The proposed immunosensor is able to measure the concentration of TSH in a linear range between 0.2 and 90.0 ng/mL with a detection limit of 0.1 ± 0.02 ng/mL. In addition, high sensitivity and acceptable stability were achieved by this immunosensor, which is promising for the clinical assay of TSH. abstract_id: PUBMED:37752776 Quantitative Detection of Thyroid-Stimulating Hormone in Patient Samples with a Nanomechanical Single-Antibody Spectro-Immunoassay. Functional disorders of the thyroid remain a global challenge and have profound impacts on human health. Serving as the barometer for thyroid function, thyroid-stimulating hormone (TSH) is considered the single most useful test of thyroid function. However, the prevailing TSH immunoassays rely on two types of antibodies in a sandwich format.
The requirement of repeated incubation and washing further complicates the issue, making it unable to meet the requirements of the shifting public health landscape that demands rapid, sensitive, and low-cost TSH tests. Herein, a systematic study is performed to investigate the clinical translational potential of a single antibody-based biosensing platform for the TSH test. The biosensing platform leverages Raman spectral variations induced by the interaction between a TSH antigen and a Raman molecule-conjugated TSH antibody. In conjunction with machine learning, it allows TSH concentrations in various patient samples to be predicted with high accuracy and precision, which is robust against substrate-to-substrate, intra-substrate, and day-to-day variations. It is envisioned that the simplicity and generalizability of this single-antibody immunoassay coupled with the demonstrated performance in patient samples pave the way for it to be widely applied in clinical settings for low-cost detection of hormones, other molecular biomarkers, DNA, RNA, and pathogens. abstract_id: PUBMED:37278108 Instability of harmonized thyroid-stimulating hormone immunoassays in clinical practice. Objective: Thyroid-stimulating hormone (TSH) harmonization is effective in minimizing differences between the results of immunoassays in healthy subjects. However, the effectiveness of TSH harmonization in clinical practice has not been investigated. The aim of this study was to evaluate the instability of TSH harmonization in clinical practice. Methods: We compared the reactivities of four harmonized TSH immunoassays using combined difference plots of 431 patients. We selected patients with statistically significant deviations in TSH levels and analyzed their thyroid hormone levels and clinical characteristics. Results: The combined difference plots showed that one harmonized TSH immunoassay exhibited markedly different reactivity even after TSH harmonization compared with the other three immunoassays. Among 109 patients with mild-to-moderate elevation of TSH levels, we selected 15 patients with statistically significant deviations in TSH levels according to the difference plots of three harmonized TSH immunoassays, excluding one immunoassay that showed different reactivity. The thyroid hormone levels of three patients were misclassified as hypothyroidism or normal due to deviating TSH levels. In terms of clinical characteristics, these patients were in poor nutritional status and general condition, possibly due to their severe illness (e.g., advanced metastatic cancer). Conclusion: We have confirmed that TSH harmonization in clinical practice is relatively stable. However, some patients showed deviating TSH levels in the harmonized TSH immunoassays, indicating the need for caution, particularly in poorly nourished patients. This finding suggests the presence of factors that contribute to the instability of TSH harmonization in such cases. Further investigation is warranted to validate these results. abstract_id: PUBMED:8857250 Clinical application of chemiluminescent immunoassay for thyroid stimulating hormone, free-T4 and intact-parathyroid hormone. To meet the demands of clinical practice, rapid, sensitive and specific laboratory tests are essential. We evaluated the performance of a newer generation of chemiluminescent immunoassay (CLIA) system (Immulite, Diagnostic Products Corp, Los Angeles, CA, USA) and compared it with a popular immunoradiometric immunoassay (IRMA) system. 
The data of 158 patients whose thyroid function was assessed and that of another 158 patients whose parathyroid function was assessed were analyzed. The thyroid stimulating hormone (TSH) CLIA is an ultrasensitive TSH (us-TSH) assay that provides more detailed, reliable results for low TSH concentrations than TSH IRMA. Coupled with the free-T4 assay, the us-TSH assay enhances the ability to detect and monitor the status of thyroid dysfunction. The intact-parathyroid hormone (PTH) CLIA can detect lower serum intact-PTH levels, is more sensitive, and is as reliable as IRMA. Combined with serum calcium levels and clinical conditions, it proved efficient and discriminated reliably between hypercalcemia associated with malignancy and that of hyperparathyroidism. abstract_id: PUBMED:37627966 Evaluation of the Abbott Alinity i Thyroid-Stimulating Hormone Receptor Antibody (TRAb) Chemiluminescent Microparticle Immunoassay (CMIA). Background: We evaluated the performance of the Abbott thyroid-stimulating hormone receptor antibody chemiluminescent microparticle immunoassay (CMIA) on the Alinity i. Methods: Verification studies for precision, linearity, analytical measuring range, and diagnostic cutoffs for Graves' disease were performed. We compared the Abbott CMIA to an established TRAb assay (Roche electrochemiluminescence immunoassay). Method comparison analysis was performed between serum and plasma samples on the Abbott CMIA. Results: Repeatability (CV%) for TRAb was 4.07, 1.56, and 0.71, and within-laboratory imprecision (CV%) was 4.07, 1.90, and 0.71 at 3.0, 10.0, and 30.0 IU/L of TRAb, respectively. Linearity and the analytical measuring range were verified from 1.07-47.9 IU/L. The limit of the blank was 0 IU/L, the limit of detection was 0.15 IU/L, and the limit of quantification was 0.5 IU/L. Passing-Bablok analysis showed agreement between the two assays; Y-intercept = 0.787, slope = 1.04. Passing-Bablok analysis also showed agreement between the plasma and serum samples run on the Abbott CMIA; Y-intercept = -0.17, slope = 0.97. Conclusions: The Abbott TRAb CMIA on the Alinity i performs within the manufacturer's claims for assay precision, linearity, analytical measuring range, limit of blank, limit of detection, limit of quantitation and diagnostic cutoffs for Graves' disease. Thus, the Abbott TRAb CMIA on the Alinity i is fit for clinical use. abstract_id: PUBMED:37264363 Sensitivity to thyroid hormone indices are associated with papillary thyroid carcinoma in Chinese patients with thyroid nodules. Background: The association between thyroid hormone sensitivity and thyroid cancer is unknown, and we aimed to investigate the association between sensitivity to thyroid hormone indices and papillary thyroid carcinoma (PTC) in Chinese patients with thyroid nodules (TNs). Methods: A total of 1,998 patients undergoing thyroid surgery due to TNs from Nanjing Drum Tower Hospital were included in this study. We evaluated central sensitivity to thyroid hormones using indices such as the thyroid stimulating hormone index (TSHI), TSH T4 resistance index (TT4RI), thyroid feedback quantile-based index (TFQI), and parametric thyroid feedback quantile-based index (PTFQI). Peripheral sensitivity to thyroid hormone was evaluated by the FT3 to FT4 ratio. Multivariate logistic regression analysis was performed to evaluate the association between sensitivity to thyroid hormone indices and PTC risk. Results: The results showed that central indices of thyroid hormone sensitivity, including TSHI, TT4RI, TFQI, and PTFQI, were positively associated with PTC risk.
For each SD increase in TSHI, TT4RI, TFQI, and PTFQI, the odds ratios (OR, 95% CI) of PTC were 1.31 (1.18-1.46), 1.01 (1.01-1.02), 1.94 (1.45-2.60), and 1.82 (1.41-2.34), respectively. On the other hand, the association between peripheral sensitivity to thyroid hormone and PTC was significantly negative. For each SD increase in FT3/FT4 ratio, the OR (95% CI) of PTC was 0.18 (0.03-0.96), and a negative correlation was found between FT3/FT4 ratio and TNM staging of PTC. Conclusions: Sensitivity to thyroid hormone indices could be used as new indicators for predicting PTC in Chinese patients with TNs. Future research is still needed to confirm our findings. abstract_id: PUBMED:37254003 Electrochemical immunoassay for one-pot detection of thyroxin (T4) and thyroid-stimulating hormone (TSH) using magnetic and Janus nanoparticles. Concurrent measurement of thyroid-stimulating hormone (TSH) and thyroxine (T4) hormones profoundly helps clinicians diagnose hyper- and hypothyroidism. This work demonstrates the development of a sandwich-type electrochemical immunoassay using Janus and magnetic nanoparticles for one-pot detection of thyroxine (T4) and thyroid-stimulating hormone (TSH). The signaling probe was developed by preparing Janus cadmium (CdO) and zinc oxide (ZnO) NPs decorated by T4/TSH-specific molecularly imprinted polymers (MIPT4-CdO and MIPTSH-ZnO). The capture probe was obtained by coating magnetic Fe3O4 NPs with 1,3-Bis(3-carboxy propyl) tetramethyl disiloxane and activating using N-hydroxy succinimide (NHS) and 1-ethyl-3-(3-dimethyl aminopropyl) carbodiimide (EDC), and finally conjugating with T4/TSH-specific antibodies. To analyze T4 and TSH in actual samples, MIPT4-CdO and MIPTSH-ZnO were added to the sample solutions, and after incubation, capture probes (Fe3O4-AbTSH and Fe3O4-AbT4) were added. An external magnetic field was used to separate the sandwiched nanosystem, followed by adding a dilute solution of nitric acid (HNO3) to dissolve CdO and ZnO NPs and free Cd(II) and Zn(II) cations. The concentration of these cations was determined using constant-current potentiometric stripping analysis (cc-PSA) on screen-printed electrodes (SPE) modified with multi-walled carbon nanotubes (MWCNT). The obtained signals for Cd(II) and Zn(II) were proportional to T4 and TSH concentrations. Limits of detection (LOD) for T4 and TSH analyses were 0.02 ng/dL and 0.0002 µU/mL, respectively, with linear ranges of 0.05-50 ng/dL and 0.001-100 µU/mL. The proposed nanosystem's main advantage is the simultaneous detection of T4 and TSH in clinical samples with high sensitivity, selectivity, and stability. abstract_id: PUBMED:38446278 Development and basic performance verification of a rapid homogeneous bioassay for agonistic antibodies against the thyroid-stimulating hormone receptor. Graves' disease is a type of autoimmune hyperthyroidism caused by thyroid-stimulating antibodies (TSAb). The combination of a porcine thyroid cell bioassay and cyclic adenosine monophosphate (cAMP) immunoassay (TSAb-enzyme immunoassay; EIA) is a clinically approved TSAb measurement method. Due to the requirement of multiple procedures and a long assay time of 6 h in the TSAb-EIA, a simplified and rapid assay is desired. Herein, we developed a rapid homogeneous TSAb bioassay (rapid-TSAb assay) using the human embryonic kidney cell line (HEK293), engineered to express the human thyroid-stimulating hormone receptor (TSHR), along with a cAMP-dependent luminescence biosensor.
The measurement consists of three steps: thawing frozen cells, blood sample addition, and luminescence detection. The procedures can be conducted within 1 h. The World Health Organization International Standard TSAb (NIBSC 08/204) stimulated the cells co-expressing TSHR and cAMP biosensor. The intra- and inter-assay coefficients of variance were < 10%. Stimulation activity using wild-type TSHR and chimeric TSHR (Mc4) almost completely correlated with the tested Graves' disease and normal samples. In the rapid-TSAb assay, the evaluation of 39 samples, including TSHR antibody-positive sera, yielded a sensitivity of 100.0% and a specificity of 90.9%, compared to the TSAb-EIA control. The rapid-TSAb assay enables simple and rapid measurement of TSAb and is promising for improving the diagnosis of autoimmune thyroid diseases. abstract_id: PUBMED:31934556 The Effect of Exenatide on Thyroid-Stimulating Hormone and Thyroid Volume. Objective: Glucagon-like peptide-1 (GLP-1) analogues are now widely used for the treatment of type 2 diabetes mellitus (DM). Many binding sites for GLP-1 have been demonstrated in the specific tissue compartments of organs including the brain and thyroid. The aim of this study was to investigate the effect of exenatide treatment on thyroid-stimulating hormone (TSH) and thyroid volume in diabetic patients without thyroid disease. Material And Methods: The study included 46 diabetic patients without thyroid disease who were receiving exenatide treatment. Comparisons were made of total thyroid volume and serum concentrations of TSH at baseline and after 6 months of follow-up. Results: Of the 46 patients, 13 were excluded from the study, as they were unable to complete the treatment or left the follow-up process. After 6 months of exenatide treatment, the serum TSH concentration decreased significantly (from 2.3 [0.7-5.4] to 1.8 mIU/L [0.3-4.2], p = 0.007). There were no significant differences in thyroid volume (11.6 ± 9.0 vs. 12.1 ± 8.8 cm3, p = 0.19), free thyroxine (fT4), free tri-iodothyronine (fT3), and calcitonin levels before and after treatment. Thyroid volume was not affected by decreased TSH level (p = 0.141) or a reduction in body mass index (BMI) (p > 0.05), and no correlation was detected between variation in TSH level and change in BMI (p > 0.05). Conclusions: Exenatide treatment for 6 months significantly decreased serum TSH concentration but did not affect thyroid volume in diabetic patients without thyroid disease. Answer: The functional sensitivity of a new thyroid stimulating hormone (TSH) immunoassay should indeed be monitored routinely. This is important for ensuring the accuracy and reliability of the assay in clinical practice. The functional sensitivity refers to the lowest concentration of TSH that can be measured with acceptable precision and accuracy, typically defined by a coefficient of variation (CV) of 20% or less. Monitoring the functional sensitivity is crucial because TSH is a key marker for thyroid function, and its precise measurement is essential for the diagnosis and management of thyroid disorders. For instance, the enzyme-free chemiluminescence immunoassay for TSH determination has shown a limit of detection as low as 0.011 μIU/mL, which is important for the early diagnosis of thyroid cancer (PUBMED:28551133). Similarly, an electrochemical immunosensor for TSH has demonstrated a detection limit of 0.1 ± 0.02 ng/mL, indicating its high sensitivity and potential utility in clinical assays (PUBMED:29604521).
Moreover, the instability of harmonized TSH immunoassays in clinical practice has been observed, with some patients showing deviating TSH levels, which suggests the need for careful monitoring, particularly in patients with poor nutritional status or severe illness (PUBMED:37278108). This highlights the importance of routine functional sensitivity monitoring to ensure that the assay performs consistently across different patient populations and clinical conditions. Additionally, the development of rapid homogeneous bioassays for agonistic antibodies against the TSH receptor, which can be conducted within 1 hour, further emphasizes the need for ongoing performance verification to maintain the high sensitivity and specificity required for the diagnosis of autoimmune thyroid diseases (PUBMED:38447802). In conclusion, routine monitoring of the functional sensitivity of a new TSH immunoassay is essential to ensure that the assay provides reliable and accurate results for the diagnosis and management of thyroid-related conditions.
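To make the 20% CV criterion in the answer above concrete, the following minimal sketch (Python, with purely synthetic replicate values and an illustrative cv_percent helper) shows how a laboratory might estimate functional sensitivity from low-concentration quality-control pools. It illustrates the calculation only; it is not a procedure taken from any of the cited studies.

# Estimate functional sensitivity: the lowest TSH concentration whose
# between-replicate CV is <= 20%. Pool concentrations (mIU/L) and the
# replicate results below are synthetic, for illustration only.
from statistics import mean, stdev

replicates = {
    0.005: [0.003, 0.007, 0.004, 0.008, 0.002],
    0.010: [0.009, 0.012, 0.008, 0.013, 0.010],
    0.020: [0.019, 0.021, 0.018, 0.022, 0.020],
}

def cv_percent(values):
    # coefficient of variation as a percentage
    return 100 * stdev(values) / mean(values)

functional_sensitivity = None
for conc in sorted(replicates):
    cv = cv_percent(replicates[conc])
    print(f"pool {conc:.3f} mIU/L: CV = {cv:.1f}%")
    if cv <= 20 and functional_sensitivity is None:
        functional_sensitivity = conc

print(f"estimated functional sensitivity: {functional_sensitivity} mIU/L")

In routine practice a full precision profile would use more pools measured over several weeks and interpolate the concentration at which the CV profile crosses 20%; the loop above simply reports the lowest tested pool meeting the criterion.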
Instruction: Does a text messaging intervention improve knowledge, attitudes and practice regarding iodine deficiency and iodized salt consumption? Abstracts: abstract_id: PUBMED:22874004 Does a text messaging intervention improve knowledge, attitudes and practice regarding iodine deficiency and iodized salt consumption? Objective: To determine the effectiveness of implementation of a prevention programme via mobile phone text messaging in enhancing knowledge, attitudes and practice concerning iodine deficiency and iodized salt consumption. Design: In a randomized controlled trial, participants were subjected to a brief tele-educational support regarding iodine deficiency and the importance of iodized salt consumption. The intervention group received daily text messages via mobile phone for 6 weeks. Knowledge, attitude and practice scores, urinary iodine concentration and salt iodine content were assessed at baseline and 8 weeks after the intervention. Setting: Participants were recruited from health-care centres in Tehran, the capital city of Iran. Subjects: For the present study, 205 females aged ≥18 years were randomly assigned to the intervention (n = 95) and control (n = 110) groups. Results: A significant difference was found in median knowledge scores between the intervention and control groups at follow-up (P = 0.004). There was also a significant difference in median attitude scores between the intervention and control groups (P = 0.02). The intervention group did not differ significantly in median practice score, urinary iodine concentration and salt iodine content from the control group. Conclusions: Text messaging interventions are effective in improving individuals' knowledge and attitudes regarding preventive health-care topics. abstract_id: PUBMED:30326961 Knowledge and practice of iodized salt utilization among reproductive women in Addis Ababa City. Objective: The objective of this study was to assess knowledge and practice of iodized salt utilization among reproductive women in Addis Ababa city. A cross-sectional study was carried out on 549 households. A sample district was selected using simple random sampling techniques. Data were collected by a face-to-face interview, and household salt was tested to check whether utilization practice was good. Variables with p < 0.2 in the bivariate logistic regression were entered into the multivariable logistic regression, and p < 0.05 was considered statistically significant. Results: Mothers who had good knowledge and practice of iodized salt were 78% (95% CI 74.9, 81.2) and 76.3% (95% CI 72.7, 79.8), respectively. Monthly household income (AOR = 2.97; 95% CI 1.20, 7.37) was associated with knowledge of iodized salt of respondents. Similarly, educational status (AOR = 2.45; 95% CI 2.10, 6.43) of respondents was significantly associated with the practice of iodized salt. This study indicated that the level of knowledge and practice of iodized salt was good. Monthly household income and educational status were associated with knowledge and practices of iodized salt of respondents. Hence, improving mothers' education is a highly recommended strategy for addressing public health problems of iodine deficiency. abstract_id: PUBMED:32528198 Knowledge, attitude, and practice of iodized salt use in Al-Riyadh and Al-Ozozab areas, Khartoum, Sudan.
Iodine deficiency and subsequent iodine deficiency disorders have been a longstanding problem in many areas of the world, including Sudan, and still remain the leading cause of preventable brain damage. According to the Sudan Household Health Survey, in 2006, the consumption of iodized salt in Sudan was 11%, only to deteriorate later on in 2011 to 9.5%. This is a descriptive cross-sectional study which assesses the knowledge, attitude, and practice regarding the use of iodized salt in two socioeconomically different residential areas, Al-Riyadh and Al-Ozozab, Khartoum city, Sudan. From a total of 152 participants selected from Al-Riyadh and Al-Ozozab areas (50% from each), there were 87 (57%) females. Participants from Al-Riyadh had a higher educational and income level than those from Al-Ozozab, and better knowledge regarding the importance of regular iodized salt consumption to treat iodine deficiency (61% and 54%, respectively). However, only 39% of the participants were actually buying iodized salt. There was a significant relationship between residency and buying of iodized salt among the participants from Al-Riyadh (49%) and Al-Ozozab (30%) areas (p-value = 0.02). There was also a significant association between the educational level and buying iodized salt (p-value = 0.014), but not with the income (p-value = 0.23). The consumption of potential goitrogenic foods (pearl millet or peanut butter) was high among the participants (n = 142) from Al-Riyadh and Al-Ozozab, and constituted 76% and 83%, respectively. Compulsory national salt specification needs to be established in Sudan, together with monitoring the market availability of iodized salt. abstract_id: PUBMED:23057393 Consumption of iodized salt among households of Basra city, south Iraq. Progress towards universal salt iodization in Iraq is uncertain. A cross-sectional study with cluster sampling was carried out to investigate the sources and packaging of salt in 900 households in Basra and assess the knowledge and practice of household members regarding iodized salt and iodine deficiency disorders. Adequately iodized salt was found in only 68.3% of households. Salt from local sources was significantly less likely to be iodized than salt from imported sources (15.3% versus 86.9% had ≥15 ppm). Salt in plastic packages was mostly adequately iodized (95.0%) compared with loose packages (70.3%). Most respondents had heard about iodized salt (92.6%), yet only 27.1% knew about the health benefits. Significantly more household members with university education had iodized salt in the household than those who were illiterate or could only read/write (82.0% versus 57.7%). Better monitoring of local salt supplies and more education to improve the population's awareness of iodization are needed. abstract_id: PUBMED:30815492 Knowledge, attitude, and practice regarding household consumption of iodized salt among rural women of Tripura, India: A mixed-methods study. Introduction: Iodine is an essential element for thyroid function; it is necessary in minute amounts for normal growth, development, and well-being of all humans. There is a gap in the utilization of adequately iodized salt in the rural areas due to nonavailability, poverty, poor knowledge of iodine deficiency diseases, and faulty storage practices. Objective: The objective was to find out knowledge, attitude, and practices (KAP) regarding iodized salt consumption and association of sociodemographic factors, if any, among rural women of Tripura.
Materials And Methods: This community-based study was conducted among 270 rural women residing at Madhupur village, Tripura. A self-made pretested schedule (Cronbach's alpha = 0.7) was used as the study tool to collect information on KAP regarding iodized salt consumption. Results: The majority of the participants belonged to the 31-40 years age group (30.4%), with a mean age of 38.6 (±13.8) years; participants were predominantly Hindus (90.4%). Knowledge and attitude regarding iodized salt consumption were significantly associated with age groups ≤36 years, literacy, and general caste (P < 0.05), but practice was not significant (P > 0.05). Good knowledge and attitude regarding use of iodized packed salt were less than half (46.7% and 41.1%, respectively), but a higher level (83.3%) of correct practice was found (P > 0.05). Conclusion: The existing knowledge and attitude of participants toward iodized salt usage were low, but the majority were consuming iodized salt without knowing its benefits. Hence, there is a need to educate rural people through nutrition education or knowledge, with active participation of grassroots-level workers in generating awareness about the health benefits of consuming adequately iodized salt. abstract_id: PUBMED:33898842 Impact of knowledge, attitude, and practice on iodized salt consumption at the household level in selected coastal regions of Bangladesh. Background: Iodine deficiency disorder is one of the major forms of hidden hunger in poverty-stricken coastal rural areas of Bangladesh. Despite various programs, including universal salt iodization, IDD is significantly persistent due to poverty, inadequate knowledge, poor attitude, and practice of iodized salt consumption. The present study aimed to determine iodine concentration in salt at the household level as well as the magnitude of knowledge, attitude, and practice related to iodized salt utilization and iodine deficiency disorders (IDDs). Material And Method: A population-based cross-sectional study was carried out using multistage sampling among 400 households in four coastal districts in Bangladesh. A standard pretested questionnaire was used to determine the knowledge, attitude, and practice towards iodized salt consumption. The iodine level was measured using the gold standard iodometric titration technique in the collected salt samples from the selected households. Result: The study results revealed that almost half of the participants had good knowledge regarding a balanced diet, whereas the positive attitude and practice of balanced diet consumption was relatively low (only 16.2% and 11.3%, respectively). The respondents' education profile and age were strongly correlated with knowledge and attitude; however, the practice was not significantly correlated with age and education. The concentration of iodine in salt at household levels was strongly associated with the practice of using iodized salt. Households using adequately iodized salt were 2.838 times (OR: 2.838, 95% CI: 1.7-4.735) more likely to have a positive attitude and 3.884 times (OR: 3.884, 95% CI: 2.029-7.433) more likely to show good practice towards iodized salt and IDDs, respectively. Conclusion: Proper knowledge about iodized salt and IDDs should be introduced at all levels of public education. Interventions and programs can be formulated to enhance knowledge, attitude, and practice at the household level so that the consumption of adequate iodized salt can be ensured to reduce iodine deficiency problems.
abstract_id: PUBMED:30781393 Poor Iodine Knowledge, Coastal Region, and Non-Iodized Salt Consumption Linked to Low Urinary Iodine Excretion in Zhejiang Pregnant Women. Background: Iodine deficiency in pregnant women, defined as a median urinary iodine concentration (UIC) of less than 150 μg/L, is an important public health issue. To improve their iodine intake, it is important to understand the knowledge and practices regarding iodine. Methods: A cross-sectional investigation was conducted on 2642 pregnant women during 2016-2017 in Zhejiang province, China. A 3-point Likert scale questionnaire was used to record knowledge. The UIC and iodine content in household salt were determined. Results: Coastal participants were iodine deficient (median UIC 127.6 μg/L) while inland participants were iodine sufficient (median UIC 151.0 μg/L). The average knowledge scores were significantly lower for the coastal participants (24.2 points vs. 25 points for the inland participants; p < 0.001). The percentage for iodized salt consumption was significantly lower for the coastal participants (88.9% vs. 96.0% for those inland; p < 0.001). A generalized linear model analysis showed that non-iodized salt consumption, coastal region, and low knowledge scores were independently associated with a low UIC. Conclusions: Comprehensive intervention strategies need to be developed to achieve an optimal iodine status. We recommend that coastal pregnant women should take iodine supplements based on the consumption of iodized salt, and improvement of iodine-related knowledge. abstract_id: PUBMED:22041794 Knowledge of iodoprophylaxis and iodized salt consumption among medical students in Italy. Objectives: Iodine deficiency is still a notable health-care problem in several regions of Europe and can be responsible for maternal and fetal goiter, miscarriages, stillbirths, reduced fetal growth, congenital hypothyroidism, damaged reproduction in adult life, and inadequate development and maturation of the fetal brain, which is the major preventable cause of mental defects. The aim of this study was to assess the medical students' knowledge and practices regarding the iodized salt use and active iodine prophylaxis program launched by the Italian Ministry of Health. Materials And Methods: A self-administered, anonymous, 19-item questionnaire was developed. We collected 294 completed forms: 86 from first-year, 118 from fourth-year and 90 from sixth-year medical students at the First Medical School of 'Sapienza' University of Rome, Italy, in May 2009. Results: Two hundred eighty-four students (96.6%) affirmed that they knew of the existence of iodized salt, even though only 199 (67.7%) reported personal consumption. This level is quite far from the target (consumption by almost 80% of the general population) suggested by the international literature. No statistically relevant difference was observed between the answers of male and female students. Conclusions: Our study proves that clinical students have a more detailed knowledge of iodized salt compared with pre-clinical students, but such knowledge remains overall defective and in some cases absolutely incorrect. Additional education is probably needed to prepare physicians to play a critical role in counseling about iodine prophylaxis. abstract_id: PUBMED:33102305 Knowledge and practice of mothers of school-going children regarding iodized salt and its association with the psychology testing of the students.
Context: Iodized salt is the mainstay intervention to tackle iodine-related disorders. In spite of the government's efforts to make iodized salt available in more than 90% of the households across the country, there are sketchy details on its use. Since the main propagators of this intervention are the mothers cooking in the kitchen, this study was conceived as a corollary to a larger micronutrient assessment study among school-going children. Aims: The aim of this study was to assess the knowledge and practice of mothers of school-going children (6-16 years) regarding use of iodized salt, and to see any association between the simple psychological testing of the children and the iodine content of the salt used in the home kitchen and the contributing factors thereof. Methods And Materials: A cross-sectional study was conducted by interviewing mothers of 240 school-going children of age groups 6-11 years and 12-16 years from six schools of Bhubaneswar using a semi-structured questionnaire regarding their knowledge on iodine and its deficiency and the practice of using iodized salt. An MBI kit was used to assess the iodine content of salt used at home. The psychological adeptness of students was assessed using the standardized and validated coding, standard progressive matrices, and colored progressive matrices tests. Statistical Analysis: Analysis was done using SPSS version 16, wherein the sociodemographic data were shown in proportions, and the iodine content was taken as the dependent continuous variable and means reported. For the sake of associations with intelligence and cognition, a binary logistic regression model was drawn. Results: It reports that 47.5% of mothers knew that iodized salt is good for health and iodine content measured using the MBI kit was 15 ppm and above for 71.7% of the sample. Further, iodine content in salt is seen as protective for cognition and also for IQ. Conclusions: The knowledge of the mothers needs reinforcement regarding iodine and related disorders and the benefits of iodized salt. The aim of the National Iodine Deficiency Disorders Control Program can only be achieved by generating awareness, regular follow-up, and improving the iodization of salt. abstract_id: PUBMED:35966212 Knowledge and practices of pregnant women about iodized salt and its availability in their households in Hawassa city. Community-based cross-sectional study. Objectives: This study aimed to assess the knowledge and practices of pregnant women about iodized salt and its availability in their households in Hawassa city, Ethiopia. Methods: A community-based cross-sectional study design was conducted among pregnant women with a total sample size (N) of 553 in Hawassa city. A multi-stage sampling technique was employed. First, four sub-cities of Hawassa city (out of eight sub-cities) were selected using the lottery method. Households of pregnant women in each selected sub-city were identified and screened. Finally, representative samples were drawn using a systematic random sampling technique. Results: A total of 537 pregnant women were included, a response rate of 97.1%. This was due to incomplete data for nine and participation refusal for seven participants. The mean age of the participants was 26.98 ± 4.75 years. About 52.51% and 52.33% of the respondents had good knowledge and good salt handling practices, respectively.
Regarding availability of iodized salt, 65.55% of pregnant women (31.66% salt with iodine level ≥15 parts per million (ppm) and 33.89% salt with iodine level <15 ppm) had iodized salt, and the remaining 34.45% had non-iodized salt (0 ppm). Conclusion: Availability of iodized salt in households of pregnant women was 65.55% (>0 ppm). More than one-third of pregnant women in Hawassa city used non-iodized salt (0 ppm). Nearly half of the pregnant women in Hawassa city had poor knowledge and poor practice about iodized salt. The Hawassa city Health Department, along with concerned stakeholders, should invest more effort to increase awareness about iodized salt and how to handle it safely in households. Answer: Yes, a text messaging intervention does improve knowledge and attitudes regarding iodine deficiency and iodized salt consumption. A study conducted in Tehran, Iran, found that participants who received daily text messages for 6 weeks showed a significant difference in median knowledge scores and median attitude scores compared to the control group that did not receive such interventions. However, the intervention did not significantly affect the median practice score, urinary iodine concentration, or salt iodine content (PUBMED:22874004).
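Several abstracts in this entry summarize associations as (adjusted) odds ratios, for example AOR = 2.97 for household income and OR = 2.838/3.884 in the Bangladesh study. As a minimal sketch of the underlying arithmetic, the Python snippet below computes a crude odds ratio with a Woolf (log-based) 95% confidence interval from a hypothetical 2x2 table; the counts are invented, and adjusted ORs such as those reported would instead come from a multivariable logistic regression.

# Crude odds ratio and Woolf 95% CI for a 2x2 table relating good
# knowledge of iodized salt to adequately iodized household salt.
# All counts below are hypothetical.
import math

a, b = 180, 60   # good knowledge: adequate / inadequate salt iodine
c, d = 90, 70    # poor knowledge: adequate / inadequate salt iodine

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {low:.2f}-{high:.2f})")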
Instruction: Is high job control a risk factor for poor quality of life in workers with high autism spectrum tendencies? Abstracts: abstract_id: PUBMED:26119207 Is high job control a risk factor for poor quality of life in workers with high autism spectrum tendencies? A cross-sectional survey at a factory in Japan. Objectives: To examine the effect of autism spectrum (AS) tendencies and psychosocial job characteristics on health-related quality of life (HRQOL) among factory workers. Methods: A questionnaire survey was administered to 376 Japanese factory employees from the same company (response rate: 83.6%) in 2010. Psychosocial job characteristics, including job demand, job control, and social support, were evaluated using the Job Content Questionnaire (JCQ). AS tendencies were assessed using the Autism-Spectrum Quotient (AQ), and HRQOL was assessed using the Medical Outcomes Study Short-Form General Health Survey (SF-8). Associations were investigated using multiple logistic regression analysis adjusted for confounders. Results: In the multivariate analysis, AQ was positively (odds ratio [OR]: 3.94; 95% confidence interval [CI]: 1.70-9.73) and social support in the workplace was inversely (OR: 0.25; 95% CI: 0.10-0.57) associated with poor mental HRQOL. No significant interaction was observed between AQ and JCQ subitems. Only social support was inversely associated with poor physical HRQOL (OR and 95% CI for medium social support: 0.45 and 0.21-0.94), and a significant interaction between AQ and job control was observed (p = 0.02), suggesting that high job control was associated with poor physical HRQOL among workers with high AQ, whereas low job control tended to be associated with poor physical HRQOL among others. Conclusions: Our results suggest that AS tendencies have a negative effect on workers' HRQOL and social support is a primary factor in maintaining HRQOL. Moreover, a structured work environment can maintain physical HRQOL in workers with high AS tendencies since higher job control will be stressful. abstract_id: PUBMED:35173134 Depressive symptoms in workers with high autistic trait scores according to job stress type. Individuals with high levels of autistic traits are at a high risk of experiencing depressive symptoms, and are also vulnerable to job stress. This study aimed to identify which combination of autistic traits and type of job stress is related to depressive symptoms. Participants comprised 992 workers from different regions of Japan. Autistic traits, depressive symptoms, and job stress were measured using the Autism-Spectrum Quotient, K6 scale, and Job Content Questionnaire, respectively. Logistic regression was performed to estimate the odds ratio and 95% confidence interval. Workers with high autistic trait scores reported significantly more depressive symptoms for all job stress types, especially high job demand. Depressive symptoms differed according to autistic traits and job stress. In workers with high autistic trait subscale scores, those with active job stress reported more depressive symptoms than those with high strain job stress, except for the "poor imagination" trait. This is contrary to previous reports that the active stress type is generally less associated with depressive symptoms than the high-strain stress type. To prevent depressive symptoms in workers with high autistic trait scores, it is important to understand which combination of autistic traits and type of job stress contributes to depressive symptoms.
abstract_id: PUBMED:27407040 Self-regulation and quality of life in high-functioning young adults with autism. Background: Autism is generally associated with poor functional outcome but little is known about predictors of quality of life, especially during early adulthood. This study was conducted to assess subjective quality of life during early adulthood in high-functioning autism spectrum disorder and its relation with self-regulating abilities. Individuals with high-functioning autism spectrum disorder who progressed into post-secondary higher education (N = 75) were compared to a typical peer control group (N = 28) based on behavioral self-report questionnaires. The results indicated that individuals with high-functioning autism spectrum disorder reported significantly lower subjective quality of life than typical controls (p < 0.001, effect size (d) = 1.84). In addition, individuals with high-functioning autism spectrum disorder reported more problems with emotion processing (p < 0.05, effect size (d) = 0.79) and daily executive functioning (p < 0.001, effect size (d) = 1.29) than controls. A higher level of executive functioning problems was related to lower quality of life in the high-functioning autism spectrum disorder group, but no significant relation between level of emotion processing and subjective quality of life became apparent in the regression analysis. Our findings show that even in high-functioning young adults with autism, executive functioning, emotion processing, and subjective quality of life are low compared to typically developing peers. Furthermore, these results emphasize the importance of targeting executive functioning problems in individuals with autism to improve subjective quality of life. abstract_id: PUBMED:35682245 Quality of Life among Mothers of High Functioning Autism Spectrum Disorder (HFASD) Adolescents. Autism Spectrum Disorder (ASD) has become more prevalent globally. The disorder is predominantly characterised by low social skills noted explicitly in people with High Functioning Autism Spectrum Disorder (HFASD). The individuals usually possess a normal or superior intelligence quotient (IQ) but the disability impedes the achievement of their actual high potential, hence compromising their quality of life (QoL). Managing adversities encountered by children with HFASD often compromises the QoL of the entire family. Thus, this study aimed to identify specific domains of QoL among mothers of high-functioning autistic adolescents. The study assessed seven mothers of adolescents with HFASD using a semi-structured interview format. A thematic qualitative analysis was conducted to analyse the data. The results suggested that mothers perceived their QoL based on physical and emotional well-being, material well-being, interpersonal relationships, and environmental well-being. Intervention for HFASD is multidisciplinary, which targets a broad spectrum of symptoms and skills deficits and customises the programme to meet each individual's different needs. Nonetheless, intervention facilities in Malaysia are seriously limited, particularly in supporting QoL for children with HFASD. Therefore, identifying the domains of QoL would improve the mothers' resilience in raising their children with HFASD. abstract_id: PUBMED:27048354 Quality of life of high-functioning children and youth with autism spectrum disorder and typically developing peers: Self- and proxy-reports.
Studies have shown that parents report lower quality of life for their children with autism spectrum disorder than the children's self-report scores indicate, and the same applies to data on typically developing children. Our objectives were to: (1) explore how high-functioning children with autism spectrum disorder rate their quality of life compared with paired controls without autism spectrum disorder; (2) explore how parents of high-functioning children with autism spectrum disorder rate their children's quality of life compared with parents of paired controls; and (3) compare child self-reports of quality of life with their parents' proxy-reports for both groups of children. Data were collected with the Icelandic self- and proxy-reported versions of the KIDSCREEN-27. Reports of 96 children with autism spectrum disorder, 211 controls and their parents were included in the analyses. Compared with controls, children with autism spectrum disorder had lower means on all quality of life dimensions. Parents of children with autism spectrum disorder evaluated their children's quality of life lower on all dimensions than did parents of controls. On four out of five dimensions, children with autism spectrum disorder reported better quality of life than did their parents. Despite differences in ratings, children with autism spectrum disorder and their parents agreed on the most problematic dimensions, namely, social support and peers and physical well-being. Our results highlight the importance of seeking the viewpoints of both children and their parents. abstract_id: PUBMED:31747775 Quality of high school programs for students with autism spectrum disorder. The purpose of the study was to examine the quality of high school programs for students with autism spectrum disorder in the United States. The Autism Program Environment Rating Scale-Middle/High School was used to rate the quality of programs for students with autism spectrum disorder in 60 high schools located in three geographic locations in the United States (CA, NC, and WI). Findings indicated that the total quality rating across schools was slightly above the adequate criterion. Higher quality ratings occurred for program environment, learning climate, family participation, and teaming domains. However, quality ratings for intervention domains related to the characteristics of autism spectrum disorder (e.g. communication, social, independence, functional behavior, and transition) were below the adequate quality rating level. Also, quality ratings for transition were significantly higher for modified (primarily self-contained) programs than for standard diploma (primarily served in general education) programs. School urbanicity was a significant predictor of program quality, with suburban schools having higher quality ratings than urban or rural schools, controlling for race, school enrollment size, and Title 1 eligibility status. Implications for working with teachers and school teams that support high school students with autism spectrum disorder should include a targeted focus on transition programming that includes a breadth of work-based learning experiences and activities that support social-communication domains. abstract_id: PUBMED:35305592 The impact of schizotypy on quality of life among adults with autism spectrum disorder. Background: Autism spectrum disorder (ASD) and schizotypal personality disorder can be difficult to distinguish.
Deficits in social relationships and social interaction, present in both conditions, are known to impair quality of life. The aim of the present study was to investigate if schizotypal symptoms affect quality of life among adults diagnosed with autism spectrum disorder and to study the association between schizotypy and autistic traits among them. Methods: Participants diagnosed with autism spectrum disorder (n = 110) completed questionnaires exploring schizotypy (Schizotypal Personality Questionnaire - Brief Revised (SPQ-BR)), autistic traits (The Ritvo Autism, Asperger Diagnostic Scale-Revised Screen 14 items), anxiety and depression (The Hospital Anxiety and Depression scale) and quality of life (Brunnsviken Brief Quality of Life Scale and the European quality of life index version 5D). Results: Schizotypy was found to be associated with anxiety, depressive and autistic symptoms, and poor quality of life. Although schizotypy was a predictor for impaired quality of life, this relationship was mediated by symptoms of anxiety and depression, plausibly inherent to autism. Autistic traits were positively associated with all higher order constructs of the SPQ-BR, i.e., positive and negative schizotypy, disorganization and social anxiety, as well as with poor quality of life. Conclusions: There is considerable overlap between schizotypy and autism that needs to be considered in research. Prominent schizotypal traits in people with ASD may constitute an endophenotype coinciding with a particularly poor quality of life. Trial Registration: ClinicalTrials.gov identifier: NCT03570372: Internet-based Treatment for Adults with Autism Spectrum Disorder (MILAS). abstract_id: PUBMED:28666391 The combined impact of social support and perceived stress on quality of life in adults with autism spectrum disorder and without intellectual disability. Identifying modifiable correlates of good quality of life in adults with autism spectrum disorder is of paramount importance for intervention development as the population of adults with autism spectrum disorder increases. This study sought to examine social support and perceived stress as potential modifiable correlates of quality of life in adults with autism spectrum disorder. We hypothesized that adults with autism spectrum disorder without co-occurring intellectual disabilities (N = 40; aged 18-44 years) would report lower levels of social support and quality of life than typical community volunteers who were matched for age, sex, and race (N = 25). We additionally hypothesized that social support would buffer the effect of perceived stress on quality of life in adults with autism spectrum disorder. Results indicated that adults with autism spectrum disorder reported significantly lower levels of social support and quality of life than matched typical community volunteers. In addition, findings showed significant direct effects of social support and perceived stress on quality of life in adults with autism spectrum disorder. Social support did not buffer the effect of perceived stress on quality of life. Interventions that teach adults with autism spectrum disorder skills to help them better manage stress and cultivate supportive social relationships have the potential to improve quality of life. abstract_id: PUBMED:28805071 A systematic review of quality of life of adults on the autism spectrum. Autism spectrum disorder is associated with co-existing conditions that may adversely affect an individual's quality of life.
No systematic review of quality of life of adults on the autism spectrum has been conducted. Our objectives were as follows: (1) review the evidence about quality of life for adults on the autism spectrum; (2) critically appraise current practice in assessing quality of life of adults on the autism spectrum. We searched bibliographic databases and other literature to identify studies using a direct measure of quality of life of adults on the autism spectrum. Hand searching of reference lists, citation searching and personal communication with field experts were also undertaken. In total, 827 studies were identified; 14 were included. Only one quality of life measure designed for use with the general autism spectrum population was identified. Quality of life of adults on the autism spectrum is lower than that of typically developing adults, when measured with tools designed for the general population. There are no comprehensive autism spectrum disorder-specific quality of life measurement tools validated for use with representative samples of adults on the autism spectrum. There is a pressing need to develop robust measures of quality of life of autistic adults. abstract_id: PUBMED:30805767 The Relationship Between Child Anxiety and the Quality of Life of Children, and Parents of Children, on the Autism Spectrum. Children on the autism spectrum experience high rates of anxiety but little is known about the impact of anxiety on child or parent quality of life (QoL). This study aimed to investigate the relationship between anxiety, autism characteristics, and QoL in children and their parents. Sixty-four parents of children on the spectrum completed questionnaires on their child's autism characteristics, anxiety symptomatology, and both child (PedsQL) and parent QoL (WHOQoL-BREF). Parents of children with elevated anxiety reported lower child and parent QoL. Regression models highlight specific anxiety subscales as predictive of PedsQL school and emotional functioning but not of parent QoL. Anxiety symptomatology may be a significant factor contributing to specific aspects of QoL for children on the spectrum. Answer: Yes, high job control can be a risk factor for poor quality of life in workers with high autism spectrum tendencies. A study conducted among Japanese factory workers found that while autism spectrum (AS) tendencies negatively affected workers' health-related quality of life (HRQOL), high job control was associated with poor physical HRQOL among workers with high AS tendencies. This suggests that workers with high AS tendencies may find high job control stressful, which in turn can impact their quality of life (PUBMED:26119207).
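The key finding of the factory-worker study cited in the answer above (PUBMED:26119207) rests on an interaction term between AQ and job control in a logistic regression. The sketch below (Python with statsmodels; simulated data and hypothetical column names) illustrates how such an interaction is specified and tested. It is not the authors' analysis, and the simulated effect sizes are arbitrary.

# Illustrative test of an AQ x job-control interaction on poor physical
# HRQOL. Data are simulated so that high job control raises the odds of
# a poor outcome only when the AQ score is high, mimicking the pattern
# described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "aq_high": rng.integers(0, 2, n),      # 1 = high Autism-Spectrum Quotient
    "job_control": rng.integers(0, 2, n),  # 1 = high job control
})
log_odds = (-1.0 + 0.2 * df["aq_high"] - 0.5 * df["job_control"]
            + 1.5 * df["aq_high"] * df["job_control"])
df["poor_phys_hrqol"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

model = smf.logit("poor_phys_hrqol ~ aq_high * job_control", data=df).fit()
print(model.summary())       # the aq_high:job_control row is the interaction
print(np.exp(model.params))  # coefficients expressed as odds ratios

A significant aq_high:job_control coefficient corresponds to the reported p = 0.02 interaction, i.e. the effect of job control on physical HRQOL differs between high- and low-AQ workers.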
Instruction: Does preoperative genitography in congenital adrenal hyperplasia cases affect surgical approach to feminizing genitoplasty? Abstracts: abstract_id: PUBMED:20728173 Does preoperative genitography in congenital adrenal hyperplasia cases affect surgical approach to feminizing genitoplasty? Purpose: Genitography has traditionally been an imperative part of radiographic evaluation in females born with congenital adrenal hyperplasia before surgical reconstruction. We evaluated the role of preoperative genitogram in surgical reconstruction planning and how it correlates with intraoperative findings. Materials And Methods: We retrospectively reviewed the records of 40 patients with congenital adrenal hyperplasia who underwent feminizing genitoplasty at our institution between 2003 and 2009. Preoperative genitogram findings were recorded and correlated with operative findings. Results: A total of 42 preoperative genitograms were available for review in 40 patients with congenital adrenal hyperplasia who underwent feminizing genitoplasty. Genitography revealed complete anatomy of the urogenital sinus in 30 cases (72%) while bladder filling alone was present in 9 (21%) and vaginal filling was noted in 2 (5%). The urogenital sinus could not be catheterized in 1 patient (2%). Vesicoureteral reflux was identified in 6 patients (15%) with a mean grade of 2. Vaginoplasty was done with a flap technique in 37 patients (more than 90%) while the remaining 3 underwent pull-through vaginoplasty. In no case did genitogram reveal anatomy that was not visible via endoscopy or at reconstruction. The vaginoplasty technique was based on endoscopic and intraoperative findings, and not on genitogram. Conclusions: Genitography during preoperative evaluation in females with congenital adrenal hyperplasia undergoing feminizing genitoplasty did not reveal urogenital sinus anatomy completely in 25% of the patients in our series. Preoperative genitogram did not influence the surgical approach. Its value as preoperative imaging in patients with congenital adrenal hyperplasia may be limited. abstract_id: PUBMED:32485822 Early Feminizing Genitoplasty in Girls with Congenital Adrenal Hyperplasia (CAH)-Analysis of Unified Surgical Management. Aim: To analyze a single-centre experience in feminizing genitoplasty in virilized girls with congenital adrenal hyperplasia (CAH). Methods: Review of medical records of all 46,XX CAH patients undergoing single-stage feminizing genitoplasty between 2003 and 2018 was performed. Results: A total of 31 girls aged from 4 months to 10 years were included in the study. The majority (n = 26/31, 84%) were operated on before 2 years of age (median 8 months). External virilization was rated as Prader 3 (n = 7/31), Prader 4 (n = 21/31) and Prader 5 (n = 3/31). The urethrovaginal confluence location was low in 19 and high in 12 girls with a percentage distribution similar in Prader 4 and 5 (p > 0.05) but significantly different in Prader 3 (p = 0.017). The follow-up ranged from 12 months to 15 years. All parents assessed the cosmetic result as satisfactory. Perioperative complications occurred in two patients and included rectal injury (n = 1/31) and prolonged bleeding (n = 1/31). Three patients developed late complications including labial atheromas (n = 2/31) and vaginal stricture requiring surgical dilatation (n = 1/31). Low confluence did not decrease the risk of complications.
Conclusions: Early feminizing genitoplasty in girls with congenital adrenal hyperplasia, irrespective of virilization severity, gives satisfactory cosmetic results and is characterized by low and acceptable surgical risk. Nevertheless, the most important determinant of the effectiveness of such management would be future patients' satisfaction. abstract_id: PUBMED:37386261 Long-term 10-year comparison of girls with congenital adrenal hyperplasia who underwent early and late feminizing genitoplasty. Objectives: The appropriate time to perform feminizing genitoplasty in patients with congenital adrenal hyperplasia (CAH) with the 46,XX genotype is still debatable. The aim of our study was to evaluate the relationship between age at surgery and long-term surgical outcomes of patients who underwent feminizing genitoplasty. Methods: We retrospectively analyzed 14 patients with CAH and 46,XX genotypes with feminizing genitoplasty (clitoroplasty + vaginoplasty) between 2005 and 2022. The patients were divided into two groups. Group 1 consisted of seven girls (n = 7/14) who were operated on before the age of 2 years. Group 2 consisted of seven girls (n = 7/14) who were operated on after the age of 2 years. The two groups were compared regarding anatomical assessments, overall cosmetic results, and need for additional intervention using Creighton's criteria. Additionally, the cosmetic satisfaction of the patients/parents was assessed. Results: The mean age of the girls was 32.42 months (10-96 months) at the time of operation. The mean age of Group 1 patients (n = 7/14) who were operated on before the age of 2 years was 11.71 months (10-19 months). The mean age of Group 2 patients (n = 7/14) who were operated on after the age of 2 years was 53.14 months (36-96 months). The mean follow-up time was 10.57 years (3-18 years). There was no statistically significant difference between those operated on before and after two years in terms of anatomical assessments, overall cosmetic results and patient/parent satisfaction except the need for additional intervention (p = 0.049). In Group 1 (operation age < 2 years), five out of seven (71.42%) patients needed additional major surgery (four urogenital sinus re-mobilization, one redo-clitoroplasty). Those who received additional major surgery were the ones who were not satisfied. In Group 2 (operation age > 2 years), two patients out of seven patients (28.57%) received major surgery (two redo-urethroplasties) and those patients were not satisfied. When patient/parent satisfaction was compared with additional surgical intervention, as expected, patients'/parents' satisfaction increased as major surgical intervention decreased. This was statistically significant (p = 0.007). The main source of dissatisfaction was repeated surgery among the parents. Conclusions: The possibility of additional surgical intervention increases, and patient/parent satisfaction decreases, in patients operated on below the age of 2 years. Corrective surgery can be deferred until the gender identity of the patient matures, allowing the patient autonomy in deciding whether this surgery should occur. abstract_id: PUBMED:32810113 Feminizing Genitoplasty in a young girl with Glanzmann's thrombasthenia-management of haemostasis. We report peri- and post-operative management of haemostasis in an 11-year-old girl with Glanzmann Thrombasthenia (GT) who had feminizing genitoplasty for genital ambiguity due to Congenital Adrenal Hyperplasia (CAH-21 Hydroxylase deficiency).
A combination of Glanzmann Thrombasthenia (GT) and 46,XX DSD due to CAH has not been reported in the literature. Surgery, particularly genitourinary reconstruction, in patients with GT is challenging due to the risk of intra- and post-operative bleeding. Haemostasis can successfully be achieved with platelet transfusions, antifibrinolytics (tranexamic acid) and judicious use of recombinant factor VIIa (rFVIIa) even in a resource-limited setting. abstract_id: PUBMED:11371974 1-Stage feminizing genitoplasty: 8 years of experience with 49 cases. Purpose: We retrospectively evaluated the results of a modified technique for 1-stage feminizing genitoplasty in children with congenital adrenal hyperplasia (CAH) and other rare intersex conditions. Materials And Methods: Between 1991 and 1998, 47 infants and 2 adolescents underwent 1-stage feminizing genitoplasty at our hospital. Of the patients, 44 had CAH, 2 had the partial androgen insensitivity syndrome and 3 were true hermaphrodites. In 3 patients we used the Passerini-Glazel technique, while in 46 our technique of feminizing genitoplasty was used, which consists of maximal mobilization of the urogenital sinus en bloc including the vagina and urethra via the perineal approach. Two lateral plates built from dorsal split phallic and preputial skin together with the dorsal split urogenital sinus are used to construct the vaginal introitus and labia minora, and the scrotal folds are used to construct the labia majora. Reduction clitoroplasty is done according to the technique of Kogan et al. Mean age ± standard deviation was 0.9 ± 0.3 years for the patients with CAH and 13 ± 2.3 years for the remainder. Preoperative information, including status of the internal genitalia, anatomical length of the vagina, and site of communication between the vagina and urogenital sinus in relation to the pelvic floor, was obtained by transabdominal pelvic ultrasound. Results: Operating time ranged between 120 and 180 minutes, average hospitalization period was 4 to 5 days and mean follow-up was 4.7 ± 2.6 years. Preoperative ultrasound provided the correct data regarding the vaginal and internal genitalia anatomy in all cases, and the exact communication site between the vagina and urogenital sinus was demonstrated in 93%. Complications consisted of intraoperative rectal injury in 1 case and mild wound infection of the buttocks area in 3. In 1 case total clitoris loss was observed. All patients who underwent modified genitoplasty have had successful cosmetic and early functional results. Two patients presented with recurrent clitoromegaly due to inadequate adrenal suppression. Conclusions: Our adapted technique of 1-stage feminizing genitoplasty provides good cosmetic and functional results. The functional results of this operation should be reevaluated after puberty and in adulthood. abstract_id: PUBMED:19693746 Congenital adrenal hyperplasia: a critical appraisal of the evolution of feminizing genitoplasty and the controversies surrounding gender reassignment. Although congenital adrenal hyperplasia (CAH) is the most common cause of genital ambiguity, its surgical treatment remains controversial and often times difficult, especially in cases of high urethro-vaginal confluence. Many aspects are still uncertain, such as the timing of feminizing genitoplasty and type of surgical technique.
The objective of this study is to provide a thorough review of the surgical management of children with CAH, discussing the evolution of feminizing genitoplasty and its different techniques, and also to critically appraise the available literature on the timing of surgical intervention and gender reassignment. Prospective long-term studies evaluating the results of modern feminizing genitoplasty techniques are needed as current evidence is based on outdated operations that are no longer used. To date, there have been no studies comparing early and delayed feminizing genitoplasty with regard to psychological outcomes. All families should be counseled regarding the controversies and treatment options, including the watchful waiting approach. abstract_id: PUBMED:38446278 Outcomes of one-stage feminizing genitoplasty in children with congenital adrenal hyperplasia and severe virilization. Purpose: To present our surgical experience and outcomes in congenital adrenal hyperplasia (CAH) patients with severe virilization using a combined technique of total urogenital mobilization (TUM) and a modified pull-through vaginoplasty to perform a safe and effective one-stage feminizing genital reconstruction for these children. Methods: Fourteen CAH patients with severe virilization, defined by a Prader IV and V rating of the external genitalia, underwent TUM followed by a limited vaginal pull-through procedure from June 2016 to December 2020. Postoperative anatomical and cosmetic outcomes, and urinary continence, were evaluated. Results: Out of the 14 cases in this study, 8 were classified as Prader IV and 6 as Prader V. The median age at surgery was 11 months (range 6-36 months), and the mean urethral length was 1.4 cm (range 1.2-1.8 cm). The median follow-up period was 4 years. Our cosmetic outcomes were good in 11 (78.5%), satisfactory in 2, and poor in one case. All patients achieved age-appropriate toilet training without urinary incontinence. Conclusion: Adopting our surgical approach of TUM with modified pull-through vaginoplasty has simplified feminizing surgical reconstruction in CAH cases with severe genital atypia and a very high vaginal confluence with short urethral length, yielding an adequate introitus with good anatomical and cosmetic appearance and adequate urinary continence outcomes. abstract_id: PUBMED:35260360 Sexual function in adult females after feminizing genitoplasty for congenital adrenal hyperplasia. Objective: To investigate sexual function and subjective attitudes towards surgery in adult women after feminizing genitoplasty (FG) for congenital adrenal hyperplasia (CAH). Materials And Methods: We retrospectively reviewed the medical files of patients who underwent FG from 1996 to 2018. Of those, patients older than 16 years of age were asked to answer non-standardized questions assessing awareness of genital surgery and a standardized questionnaire evaluating sexual function (GRISS). The anonymized answers were compared to a control group of 50 healthy females. Student's t-test, Pearson's χ2 test, Fisher's exact test and Spearman's rank correlation coefficient were performed. A P-value less than 0.05 was considered significant. Results: Out of 106 patients who underwent FG, 64 were included, 55 successfully contacted and 32, aged 17-40 years (median 25.5), answered the questionnaires (58% response rate). In all, 11 patients underwent historical clitorectomy in early childhood by a pediatric gynecologist before 1996 and pull-through vaginoplasty at puberty.
A total of 21 patients underwent modern one- or two-staged FG. There were statistically significant differences between the CAH group and controls as to the proportion living in committed partnerships (28% and 84%, respectively; p = 0.0000), having sexual intercourse (41% and 92%, respectively; p = 0.0000) and having their own children (13% and 36%, respectively; p = 0.0369). The median overall GRISS score was 4 (range 1-7) in the CAH group compared to 1 (range 1-9) in the control group, with infrequency and vaginismus representing the least favorable subscales (median 5). In all, 81% of the women operated on for CAH would recommend such an operation to any other individual with CAH and 66% of them suggested the surgery should be performed in early childhood. Discussion: Through this study we have contributed to the debate over suggestions that FG may cause adverse effects on the sexual life of patients in adulthood, besides the already published effects on their mental well-being and LUTS. Despite possible selection bias due to the 58% response rate, the comparison with the control group shows only moderate sexual dysfunction and a more significant impact on long-term partnership, which might be caused by other factors related to CAH besides FG itself. Conclusions: Partnership, offspring, sexual frequency and function were impaired in adult females with CAH after FG. However, the median GRISS score was below the pathological level of 5. abstract_id: PUBMED:10679698 Transabdominal pelvic sonography in the preoperative evaluation of patients with congenital adrenal hyperplasia. Purpose: We assessed the clinical value of transabdominal pelvic sonography in the preoperative evaluation of patients with congenital adrenal hyperplasia (CAH) who required feminizing genitoplasty. Methods: From 1987 to 1998, 31 patients with female pseudohermaphroditism due to CAH underwent feminizing genitoplasty. The median age of the patients was 9 months (range, 1-18 years). Radiologic evaluation performed before surgical reconstruction included retrograde genitography in the first 10 patients and sonographic examination in all 31 patients. Imaging was used to evaluate the anatomic positions and the length of the vagina, whether the junction of the vagina and the urogenital sinus occurred distal or proximal to the pelvic floor, and the presence of internal genitalia. Results: Abdominal sonography identified internal female genitalia in all 31 patients, identified the anatomic shape and position of the vagina in 30 patients (97%), and confirmed the site of communication between the vagina and the urogenital sinus relative to the pelvic floor in 28 patients (90%). Sonographic findings were confirmed by intraoperative panendoscopy. Genitography was less useful than sonography, identifying the site of communication between the vagina and urogenital sinus in only 6 (60%) of 10 patients. Conclusions: In patients with CAH undergoing vaginal reconstruction, sonography provides adequate information about the anatomy of the vagina and urogenital sinus for surgical decision-making. abstract_id: PUBMED:17408848 Passerini-Glazel feminizing genitoplasty: modifications in 17 years of experience with 82 cases. Objectives: To describe modifications of Passerini-Glazel feminizing genitoplasty and report on long-term functional outcome.
Methods: Modifications include vaginal dissection and disconnection from the urethrovaginal sinus as the initial stage of the procedure; large dissection of the neurovascular bundle on both dorsal and lateral faces of the clitoris; plication of the skin around the reduced clitoris; and suturing the lateral edge of the proximal portion of the mucocutaneous plate with the labia majora's medial edge to a plane deeper than the subcutaneous tissue. These modifications reduce bleeding and operating time, better preserve clitoral sensitivity, form the clitoral prepuce, and create labia minora. Results: Eighty-two patients underwent modified Passerini-Glazel feminizing genitoplasty. Mean operating time was 120min (range: 100-180). Forty-six patients (46 of 82, 56%) were assessed at a mean follow-up of 5 yr (range: 2-9). There were no cases of clitoral vascularization defect or urethrovaginal fistula. The urethral meatus was never hypospadic. The vaginal introitus was large and elastic in all cases. Vaginal caliber at the internal suture line was as large as the vaginal introitus and the distal native vagina in 20 (43.5%) of the 46 girls. All mothers and patients reported satisfaction with external genital appearance. Conclusions: These long-term results suggest that our modifications of one-stage Passerini-Glazel feminizing genitoplasty facilitate the procedure and produce good cosmetic results. Answer: Preoperative genitography has traditionally been an important part of the radiographic evaluation in females born with congenital adrenal hyperplasia (CAH) before surgical reconstruction. However, a study by PUBMED:20728173 found that preoperative genitography did not reveal urogenital sinus anatomy completely in 25% of the patients and did not influence the surgical approach to feminizing genitoplasty. The surgical technique was instead based on endoscopic and intraoperative findings, suggesting that the value of preoperative genitography in patients with CAH may be limited. In the context of this study, genitography did not provide additional information that altered the surgical plan, which was determined by what was visible during endoscopy or at the time of reconstruction. Therefore, according to this specific research, preoperative genitography does not seem to affect the surgical approach to feminizing genitoplasty in CAH cases.
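Note: the detection rates cited above from PUBMED:10679698 (sonography identifying the vagina-urogenital sinus communication site in 28/31 patients vs. genitography in 6/10) can be sanity-checked with a quick proportion calculation. The sketch below is illustrative only; it assumes the statsmodels library is available and simply recomputes Wilson 95% confidence intervals for the two reported proportions, which makes the small genitography denominator and its wide uncertainty visible.

from statsmodels.stats.proportion import proportion_confint

# Counts reported in PUBMED:10679698 for identifying the site of communication
# between the vagina and the urogenital sinus.
for name, hits, n in [("sonography", 28, 31), ("genitography", 6, 10)]:
    lo, hi = proportion_confint(hits, n, alpha=0.05, method="wilson")
    print(f"{name}: {hits}/{n} = {hits / n:.0%} (95% CI {lo:.0%}-{hi:.0%})")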
Instruction: Can she make it? Abstracts: abstract_id: PUBMED:23590385 Investigation on the use of expired make-up and microbiological contamination of mascaras. Objectives: This study investigated the habits of female students regarding make-up use, and quantified the microbiological contamination of mascaras worn by this population. Methods: To this end, 44 students answered a structured questionnaire to evaluate the use of expired make-up, shared usage and reports of adverse effects. Subsequently, make-up samples were collected to check the manufacturing registration and the expiration date and its visibility on the label. The total counts of microorganisms and identification of Pseudomonas aeruginosa and Staphylococcus aureus in mascara samples collected were performed as described in the Brazilian Pharmacopoeia (4th edition). Results: According to the results obtained, 97.9% (43/44) of participants reported that they use or have previously used make-up after the expiration date, with mascara being the most frequently mentioned product. It was observed that on the sample collection date, 70.5% (31/44) of the students had some type of expired make-up. The microbiological analysis of 40 mascara samples revealed 2.54 ± 1.76 × 10⁴ CFU mL⁻¹ bacteria and 2.55 ± 1.54 × 10⁴ CFU mL⁻¹ fungi. Analysis revealed the presence of S. aureus in 79% of samples and of P. aeruginosa in 13%. Conclusion: The results are interesting because they show that women tend to continue to use make-up beyond the expiry date. Frequently, these products have a high level of contamination with pathogenic microorganisms. abstract_id: PUBMED:36397838 Scientific counterfactuals as make-believe. Counterfactuals abound in science, especially when reasoning about and with models. This often requires entertaining counterfactual conditionals with nomologically or metaphysically impossible antecedents, namely, counternomics or counterpossibles. In this paper I defend the make-believe view of scientific counterfactuals, a naturalised fiction-based account of counterfactuals in science which provides a means to evaluate their meanings independently of the possibility of the states of affairs their antecedents describe, and under which they have non-trivial truth-values. Fiction is here understood as imagination (in contrast with its most typical association with falsity), characterised as a propositional attitude of pretense or 'make-believe' (Walton 1990). The application of this theory to scientific counterfactuals makes their evaluation a game of make-believe: a counterfactual is (fictionally) true iff its antecedent and the rules of the game prescribe the imagining of its consequent (Kimpton-Nye 2020). The result is a practice-based account of counterfactuals and counterfactual reasoning in science which incorporates insights from theoretical and experimental analytic philosophy as well as cognitive science. This way, the make-believe view of scientific counterfactuals shows that the evaluation of scientific counterfactuals is none other than a question of scientific representation in disguise. abstract_id: PUBMED:34455650 Improving and evaluating the adhesion and stability of make-up by enhancing the affinity between skin/make-up layer. Purpose: Make-up clumps, bumps and collapses are the three factors that determine how well make-up has been performed. The purpose of this study is to reduce the three factors mentioned above by using amphiphilic substances to increase the affinity between the skin and the make-up layer.
In addition, it aims to evaluate the improvement of the make-up layer by developing an objective make-up layer evaluation method. Methods: Experiments were performed in an attempt to increase the affinity between the skin and the make-up layer by minimizing the difference in surface energy between the two. Multiple types of artificial skin (leather and bio-skin) were used and treated to form the liquid foundation layer. Qualitative evaluation of the make-up layer was conducted by analyzing the surface, cross-section, and fracture area of the make-up layer, using the evaluation method proposed in this study. Results: After applying this method and taking measurements by 3D surface analysis, the surface roughness of the make-up layer was reduced by 46%, and the maximum thickness of the make-up layer was reduced by about 50% in comparison with the control group (method not applied). In the case of the make-up layer to which this method was applied, two-dimensional cross-sectional Scanning Electron Microscope (SEM) image analysis confirmed that agglomeration was reduced, and the thickness of the make-up layer was also reduced by an average of 54%. According to this result, the technique of increasing the affinity between the skin and the make-up layer reduces the level of aggregation of make-up and encourages the formation of a uniform and thin make-up layer. Also, the fracture area after motion simulation was reduced by 33%. These results indicate that the method of increasing the affinity between skin/make-up membranes positively affects the formation of a uniform make-up layer. Conclusion: Increasing the affinity by reducing the surface energy between the skin and the make-up layer plays an important role in forming a thin and uniform make-up layer by improving the problems of lifting, agglomeration, and collapse of the make-up. In addition, it has been confirmed that through this method, the quality of consumer experience related to make-up satisfaction can be improved. The results show that objective analyses of make-up help the understanding of the quality of consumer experience on make-up. abstract_id: PUBMED:36765986 Lab-Scale Methodology for New-Make Bourbon Whiskey Production. Whiskey production originated in Scotland in the 15th century and was based on malted barley. As Scotch-Irish settlers came into the Ohio River valley, they began fermenting and distilling the primary grain of North America, maize. These earlier settlers started a heritage; they created American Whiskey. The bourbon industry in Kentucky has seen tremendous growth in the last 20 years, and distilleries are currently pursuing broad increases in product innovation, new raw materials, improved sustainability, efficient processes, and product diversification. Our study presents a new lab-scale method for new-make bourbon whiskey production. It was developed to mimic distilleries' processes; therefore, results can be extrapolated and adopted by commercial distilleries. The method focused on reproducibility with consistency from batch to batch when handled by an operator or small crew in a university lab. The method consisted of a first cooking step to make a "mash", a fermentation phase of 96 h, a first distillation accomplished with a copper pot still to obtain the "low wines" and a second distillation carried out with an air still to collect the "hearts". The method produced a final distillate of 500-700 mL for further sensory analysis and tasting.
This lab-scale method showed consistency between samples in the different parameters quantified and will also be used to train students in fermentation and distillation studies. abstract_id: PUBMED:32075119 Vehicle Make and Model Recognition using Bag of Expressions. Vehicle make and model recognition (VMMR) is a key task for automated vehicular surveillance (AVS) and various intelligent transport system (ITS) applications. In this paper, we propose and study the suitability of the bag of expressions (BoE) approach for VMMR-based applications. The method includes neighborhood information in addition to visual words. BoE improves the existing power of a bag of words (BOW) approach, including occlusion handling, scale invariance and view independence. The proposed approach extracts features using a combination of different keypoint detectors and a Histogram of Oriented Gradients (HOG) descriptor. An optimized dictionary of expressions is formed using visual words acquired through k-means clustering. The histogram of expressions is created by computing the occurrences of each expression in the image. For classification, multiclass linear support vector machines (SVM) are trained over the BoE-based feature representation. The approach has been evaluated by applying cross-validation tests on the publicly available National Taiwan Ocean University-Make and Model Recognition (NTOU-MMR) dataset, and experimental results show that it outperforms recent approaches for VMMR. With multiclass linear SVM classification, promising average accuracy and processing speed are obtained using a combination of keypoint detectors with HOG-based BoE description, making it applicable to real-time VMMR systems. abstract_id: PUBMED:30813512 Real-Time Vehicle Make and Model Recognition with the Residual SqueezeNet Architecture. Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep learning approach for MMR using the SqueezeNet architecture. The frontal views of vehicle images are first extracted and fed into a deep network for training and testing. The SqueezeNet architecture with bypass connections between the Fire modules, a variant of the vanilla SqueezeNet, is employed for this study, which makes our MMR system more efficient. The experimental results on our collected large-scale vehicle datasets indicate that the proposed model achieves 96.3% recognition rate at the rank-1 level with an economical time slice of 108.8 ms. For inference tasks, the deployed deep model requires less than 5 MB of space and thus has a great viability in real-time applications. abstract_id: PUBMED:33670494 The Impact of Terroir on the Flavour of Single Malt Whisk(e)y New Make Spirit. The impact of barley variety and its geographical growth location (environment) on the flavour of new make spirit was investigated to determine if "terroir" can be applied in the production of single malt whisk(e)y. New make spirits were produced at laboratory scale under controlled conditions from two different barley varieties (Olympus and Laureate) grown at two distinct environments (Athy, Co Kildare and Bunclody, Co Wexford) in Ireland over two consecutive seasons (2017 and 2018). The spirit samples were analysed by gas chromatography-mass spectrometry-olfactometry and descriptive sensory analysis. Forty-two volatiles were detected with eight deemed as very influential and fifteen deemed as influential to the aroma of new make spirit.
Sensory attributes were influenced by barley variety, environment, and the interactions thereof over both seasons, with environment and the interaction of variety × environment having a greater impact than variety alone. Chemometric analysis of the olfactometry and sensory data found that both environment and season had a greater impact on the aromatic sensory perception of the new make spirits than variety alone. Therefore, this study clearly demonstrates a "terroir" impact on the flavour of new make spirit and highlights its potential importance especially in relation to single malt whisk(e)y. abstract_id: PUBMED:24876683 A comparison of the reliability of make versus break testing in measuring palmar abduction strength of the thumb. Previous studies have established the role of quantitative measurements of palmar abduction strength of the thumb (PAST). This study compares the reliability of the 'make' versus the 'break' test in measuring PAST in healthy volunteers. In a 'make' test, the body part being tested is positioned at the start of its range of motion and the participant is asked to exert his/her maximal force. In a 'break' test, increasing force is applied to a body part after it has completed its range of motion, until the joint being tested gives way. PAST was measured in both hands in 100 healthy volunteers using a handheld device. Two examiners measured PAST using both the 'make' and 'break' test to determine inter-rater reliability. The tests were repeated in 30 volunteers 6 weeks after the initial testing to determine intra-rater reliability. Our results showed that the 'make' test has better inter- and intra-rater reliability. abstract_id: PUBMED:31307215 Make-Up Therapy for Patients With Facial Nerve Palsy. Objectives: To investigate the effectiveness of make-up therapy for patients with facial nerve palsy. Methods: Seven female patients with facial nerve palsy who received specialist make-up therapy were enrolled. The objective of the make-up therapy was to obtain a symmetrical facial appearance. Results: Overall score for the Facial Clinimetric Evaluation (FaCE) scale was significantly improved after make-up therapy. There was a tendency for symptoms of depression to be improved among patients after make-up therapy. Conclusion: Make-up therapy to improve the symmetry of facial appearance could afford a noninvasive and low-cost treatment for patients with facial nerve palsy, especially in terms of patient quality of life and psychological condition. abstract_id: PUBMED:37359632 Investigating individual intention to make money: can motivation of wealth be a socio-economic leverage? Wealth is an indicator of individual well-being. Increasing wealth is a pathway to achieving socio-economic development. Thus, it is imperative to shed light on factors that induce individual intention to increase wealth. This study investigates the effects of perception of wealth, perception of the rich, and behavioral control on personal intention to make money. A stratified sampling method was employed to obtain a sample of 991 respondents from Northern, Central, and Southern Vietnam who were invited to participate in a structured questionnaire survey in 2021. We utilized confirmatory factor analysis to validate the proposed model and tested the hypotheses using partial least squares SEM (PLS-SEM). Empirical results highlight that individual behavioral control, explicit perception of the rich, and perception of wealth are essential determinants of individual intention to make money.
Interestingly, motivation of wealth positively moderates the relationship between the perception of wealth and personal intention to make money. In addition, post-COVID-19 opportunities positively moderate two further relationships: perception of wealth with individual intention to make money, and explicit perception of the rich with individual intention to make money. Insights from this study imply appropriate policies for the government to enhance the inspiration of people to work harder, leading to sustainable development. Answer: Based on the provided abstracts, the question "Can she make it?" is too vague to be answered directly. The abstracts cover a wide range of topics related to "make" in different contexts, such as the use and contamination of make-up (PUBMED:23590385), the concept of make-believe in scientific counterfactuals (PUBMED:36397838), improving the adhesion and stability of make-up (PUBMED:34455650), lab-scale methodology for making bourbon whiskey (PUBMED:36765986), vehicle make and model recognition (PUBMED:32075119, PUBMED:30813512), the impact of terroir on the flavor of whiskey new make spirit (PUBMED:33670494), the reliability of make versus break testing in measuring strength (PUBMED:24876683), make-up therapy for patients with facial nerve palsy (PUBMED:31307215), and investigating individual intention to make money (PUBMED:37359632). Without additional context or information about what "she" is attempting to "make," it is not possible to provide a relevant answer from these abstracts. If "she" refers to a person using make-up, then the abstracts provide information on the risks of using expired make-up (PUBMED:23590385), methods to improve make-up application (PUBMED:34455650), and the use of make-up therapy for facial nerve palsy (PUBMED:31307215). If "she" refers to a person involved in scientific research or vehicle recognition technology, then other abstracts would be relevant. However, without further details, it is not possible to determine if "she can make it" in any specific endeavor.
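Note: the moderation effects described in PUBMED:37359632 were estimated with PLS-SEM. As a simpler, purely illustrative stand-in, the sketch below fits an ordinary least-squares model with an interaction term on synthetic data; every variable name and numeric value here is invented, and OLS is deliberately swapped in for the authors' PLS-SEM pipeline to show what testing a moderator amounts to.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 991  # sample size reported in the abstract

# Synthetic stand-ins for the survey constructs (names are hypothetical).
df = pd.DataFrame({
    "perception_wealth": rng.normal(size=n),
    "motivation_wealth": rng.normal(size=n),
})
df["intention"] = (
    0.4 * df["perception_wealth"]
    + 0.3 * df["motivation_wealth"]
    + 0.2 * df["perception_wealth"] * df["motivation_wealth"]  # built-in moderation
    + rng.normal(scale=0.5, size=n)
)

# 'a * b' in the formula expands to both main effects plus their interaction;
# a significant interaction coefficient is what "moderation" refers to.
fit = smf.ols("intention ~ perception_wealth * motivation_wealth", data=df).fit()
print(fit.summary().tables[1])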
Instruction: Adjuvant chemotherapy for the "oldest old" ovarian cancer patients: can we anticipate toxicity-related treatment failure in a vulnerable population? Abstracts: abstract_id: PUBMED:19235786 Adjuvant chemotherapy for the "oldest old" ovarian cancer patients: can we anticipate toxicity-related treatment failure in a vulnerable population? Background: Patients aged ≥80 years who are diagnosed with advanced ovarian cancer (OC) have been reported to have a poor prognosis. In the current study, chemotherapy-related toxicity data were evaluated between patients aged ≥80 years and those aged <80 years. Methods: Patients with OC who underwent cytoreductive surgery with chemotherapy were included. Self-reported toxicity data were obtained from National Cancer Institute Common Toxicity Criteria (CTC) forms. Objective indicators of status including albumin level, weight, and creatinine clearance were abstracted both before and after therapy. Data were compared between patients by decade of age. Results: A total of 246 patients were included. A presenting Karnofsky performance status >2 was recorded in 17% of patients aged ≥80 years versus 0% to 4% of patients aged <80 years (P = .002). Platinum-based chemotherapy was used in all patients. For patients aged <80 years, combination chemotherapy was used in >90% versus 69% in those aged ≥80 years (P < .0001). Standard-dose combination therapy was used in 72% to 86% of patients aged <80 years versus 28% of patients aged ≥80 years (P < .0001). Patients aged ≥80 years completed ≥6 cycles of therapy approximately 57% of the time versus 84% to 97% of the time for those aged <80 years (P = .0001). CTC forms identified no self-reported toxicities to be more common among patients aged ≥80 years. Multivariate logistic regression identified creatinine clearance <65 mL/minute (odds ratio [OR] of 4.6), 5% weight loss (OR of 2.5), prechemotherapy albumin level of <2 g/dL (OR of 3.65), and initiation of therapy with a single agent (OR of 3.9) as independent predictors of failure to complete chemotherapy. Conclusions: Despite initial treatment modifications as well as toxicity assessment, only 57% of patients aged ≥80 years completed planned chemotherapy. It was confirmed that further studies into the pharmacokinetics of chemotherapy in the elderly and more sensitive assessment of therapy-related toxicity are required. abstract_id: PUBMED:27159764 Utilization and Toxicity of Alternative Delivery Methods of Adjuvant Chemotherapy for Ovarian Cancer. Objective: Compared with conventional intravenous platinum and taxane-based chemotherapy for ovarian cancer, both intraperitoneal chemotherapy and more frequent dose-dense intravenous chemotherapy have been associated with improved survival in some studies. We examined the utilization and toxicity of these three methods of chemotherapy delivery in women with ovarian cancer. Methods: We performed a population-based study and analyzed data on women with ovarian cancer who underwent primary surgery followed by platinum and taxane-based chemotherapy from 2009 to 2013 who were recorded in the MarketScan database. Adjuvant chemotherapy was classified as: intraperitoneal chemotherapy, dose-dense chemotherapy (weekly administration of chemotherapy), or standard chemotherapy (every 3 weeks).
Hospitalizations and emergency department visits for chemotherapy-associated complications and costs were recorded and compared using χ² tests. Results: A total of 5,892 patients, including 4,135 (70.2%) who received standard chemotherapy, 859 (14.6%) who received intraperitoneal chemotherapy, and 898 (15.2%) treated with dose-dense chemotherapy, were identified. From 2009 to 2013, use of intraperitoneal chemotherapy remained constant (16.3-16.3%), whereas use of dose-dense therapy increased (8.7-18.1%) (P<.001). Hospitalizations for chemotherapy-associated complications occurred in 21.3% of women receiving standard chemotherapy, 34.7% of patients treated with intraperitoneal therapy, and in 25.2% of those receiving dose-dense treatment (P<.001); emergency department visits occurred in 18.3%, 26.3%, and 20.3%, respectively (P<.001). The largest differences in hospitalizations and emergency department visits were seen for gastrointestinal toxicities and electrolyte disorders. The per-patient costs of hospitalization were higher for intraperitoneal chemotherapy than other treatment modalities. Conclusion: Intraperitoneal chemotherapy was used in less than 15% of women with ovarian cancer, whereas use of dose-dense chemotherapy is increasing. Although we did not examine survival, intraperitoneal chemotherapy is significantly more toxic than the other methods of treatment. abstract_id: PUBMED:34598899 Role of adjuvant and post-surgical treatment in gynaecological cancer. Adjuvant and post-operative therapy aimed at reducing the risk of disease recurrence and improving potential for cure can be broadly categorised into systemic and locoregional treatment. For epithelial ovarian cancer, cytoreductive surgery and platinum-based chemotherapy is the mainstay management. Maintenance therapy with PARPi is a state-of-the-art option for women with advanced disease following complete or partial response to first-line platinum-based chemotherapy, particularly those with BRCA mutations. Adjuvant treatment for endometrial cancer depends mostly on FIGO staging and histopathological risk factors. For cervical cancer, adjuvant chemoradiation is indicated after surgery in women with close or positive resection margins and positive nodes. Generally, recommendations for adjuvant therapy should be individualised and reviewed at the multidisciplinary tumour board meeting, and the decision for adjuvant therapy should be balanced with treatment toxicity. The overview of the role of adjuvant and post-surgical treatment in gynaecological cancers will be discussed in this chapter. abstract_id: PUBMED:32971514 Adjuvant Treatment of Early Ovarian Clear Cell Carcinoma: A Population-Based Study of Whole Abdominal Versus Pelvic Nodal Radiotherapy. Background: Adjuvant treatment in early ovarian clear cell carcinoma (OCCC) is not yet standardized. The objective of this population-based study was to compare the outcome of patients with early OCCC treated with adjuvant chemotherapy versus chemoradiotherapy (chemoRT) and evaluate the association of adjuvant radiotherapy regimens (whole abdominal radiotherapy [WART] versus pelvic nodal radiotherapy [PRT]) with outcome. Patients And Methods: Chart review was conducted to identify patients with stage I and II OCCC with complete information on staging. Patients with stage IA, IB, or IC OCCC purely resulting from capsular rupture were excluded because the provincial protocol does not recommend adjuvant treatment.
Results: Overall, 403 patients were identified and 343 received adjuvant treatment, of whom 255 had stage IC or II OCCC and 153 were eligible for final analysis. On Cox multivariable regression, receipt of chemoRT (n=90) was associated with an improvement in failure-free survival (FFS) (hazard ratio [HR], 0.57; 95% CI, 0.34-0.94) compared with chemotherapy alone (n=63). Use of chemoRT also resulted in 54% reduction in the cumulative incidence of cancer-specific mortality (subdistribution HR, 0.46; 95% CI, 0.24-0.89). However, there was no significant difference in the HR for overall survival (OS) between the chemoRT (HR, 0.70; 95% CI, 0.43-1.13) and chemotherapy group. Relative to chemotherapy + WART (chemo-WART), chemotherapy + PRT (chemo-PRT) was not associated with any significant difference in HR for FFS (HR, 1.34; 95% CI, 0.40-4.44) or OS (HR, 1.13; 95% CI, 0.37-3.46). Conclusions: Adjuvant chemoRT was associated with a lower risk of failure compared with chemotherapy alone. However, there was no difference in OS between the adjuvant chemotherapy and chemoRT regimens. Additionally, no significant difference in terms of FFS or OS was found between the chemo-WART and chemo-PRT groups. abstract_id: PUBMED:33435776 Failure of multiple surgical procedures and adjuvant chemotherapy in early-stage steroid-cell ovarian tumor treatment: a case report and literature review. Ovarian steroid-cell tumors (SCTs) are a rare subgroup of sex-cord tumors of the ovary, accounting for less than 0.1% of all ovarian tumors. Not otherwise specified (NOS) tumors are the most common subtype. More than half of patients with SCTs-NOS show hyperandrogenic symptoms. The primary treatment for SCTs is surgery, as most cases are early-staged and benign. Because of the low incidence of metastatic disease, there is insufficient reliable information on the role of adjuvant therapy and the most effective treatment regimen. In this report, a rare case of a recurrent SCT-NOS in a 36-year-old female patient without endocrine symptoms is presented, highlighting the significance of appropriate pathological evaluation and immunohistochemical testing for the accurate diagnosis of this malignancy, particularly in the case of hormonally "silent" tumors. The metastatic tumor described here showed no response to four courses of adjuvant chemotherapy after several debulking surgeries. Based on the clinical findings, the neoplastic etiology should always be considered during the resection of ovarian tumors to prevent possible disease dissemination due to inappropriate surgical techniques. abstract_id: PUBMED:29313170 The Two Faces of Adjuvant Glucocorticoid Treatment in Ovarian Cancer. Adjuvant glucocorticoid treatment is routinely used in the treatment of ovarian cancer to mitigate the undesirable side effects of chemotherapy, thereby enhancing tolerability to higher cytotoxic drug doses and frequency of treatment cycles. However, in vitro and preclinical in vivo and ex vivo studies indicate that glucocorticoids may spare tumor cells from undergoing cell death through enhanced cell adhesion, promotion of anti-inflammatory signaling, and/or inhibition of apoptotic pathways. The implications of laboratory studies showing potential negative impact on the efficacy of chemotherapy have been long overlooked since clinical investigations have found no apparent survival detriment attributable to adjuvant glucocorticoid use. 
Importantly, these clinical studies were not randomized and most did not consider glucocorticoid receptor status, a vital determinant of tumor response to glucocorticoid administration. Additionally, the clinically beneficial elements of increased chemotherapy treatment adherence and dosing afforded by adjuvant glucocorticoids may offset and therefore mask their anti-chemotherapy activities. This review summarizes the current evidence on the impact of glucocorticoids in ovarian cancer and discusses the need for further research and development of alternative strategies to ameliorate untoward side effects of chemotherapy. abstract_id: PUBMED:34885008 Neo-Adjuvant Chemotherapy Reduces, and Surgery Increases Immunosuppression in First-Line Treatment for Ovarian Cancer. In monotherapy, immunotherapy has a poor success rate in ovarian cancer. Upgrading to a successful combinatorial immunotherapy treatment implies knowledge of the immune changes that are induced by chemotherapy and surgery. Methodology: Patients with a newly diagnosed ovarian cancer underwent longitudinal blood sampling at different time points during primary treatment. Results: Ninety patients were included in the study (33% primary debulking surgery (PDS) with adjuvant chemotherapy (ACT), 61% neo-adjuvant chemotherapy (NACT) with interval debulking surgery (IDS), and 6% debulking surgery only). Reductions in immunosuppression were observed after NACT, but surgery reverted this effect. The immune-related proteins showed a pronounced decrease in immune stimulation and immunosuppression when primary treatment was completed. NACT with IDS leads to a transient amelioration of the immune microenvironment compared to PDS with ACT. Conclusion: The implementation of immunotherapy in the primary treatment schedule of ovarian cancer cannot be induced blindly. Carboplatin-paclitaxel seems to ameliorate the hostile immune microenvironment in ovarian cancer, which is less pronounced at the end of primary treatment. This prospective study during primary therapy for ovarian cancer that also looks at the evolution of immune-related proteins provides us with an insight into the temporary windows of opportunity in which to introduce immunotherapy during primary treatment. abstract_id: PUBMED:31104212 Transformation of follicular lymphoma to double-hit lymphoma during adjuvant chemotherapy for concurrent ovarian carcinoma. The frequency of multiple primary malignant neoplasms (MPMN) is increasing due to population aging. Since consensus guidelines for the treatment of MPMN are lacking, treatment strategies are determined by disease status on a per-patient basis. In this report, we describe a case of MPMN with follicular lymphoma (FL) grade 1 that transformed to double-hit lymphoma during adjuvant chemotherapy for concurrent ovarian carcinoma. A 64-year-old woman was diagnosed with MPMN of FL and endometrioid carcinoma by staging laparotomy and lymph node biopsy. She received four cycles of adjuvant chemotherapy (carboplatin and paclitaxel) for endometrioid carcinoma, but during chemotherapy, the FL grade 1 transformed to double-hit lymphoma. We speculate that adjuvant chemotherapy for endometrioid carcinoma may have triggered the transformation of FL in the present case. abstract_id: PUBMED:15152388 Adjuvant chemotherapy in patients operated on for early ovarian carcinoma. The desirability of adjuvant chemotherapy in patients with early ovarian carcinoma, i.e.
a carcinoma limited to the internal genitalia (FIGO stage I-IIA), has long been controversial. Until recently, the few randomised studies that had been performed showed no improvement in survival after adjuvant chemotherapy, but the numbers of patients were too low to yield convincing proof. Complete surgical staging in patients with early ovarian carcinoma is of utmost importance to define the stage of the disease. In case of inadequate staging, there is a 24% risk (range 16-46%) that undetected tumour deposits will remain in the abdominal cavity following surgery, so that the estimated stage will be too low. The combined results of the two largest randomised clinical trials on early ovarian carcinoma to date have recently been published: the 'Adjuvant chemotherapy in ovarian neoplasm' (ACTION) trial, coordinated by the European Organisation for Research and Treatment of Cancer (EORTC), and the 'International collaboration in ovarian neoplasm' (ICON-1) trial; they showed a significant benefit from adjuvant chemotherapy on overall and progression-free survival in early ovarian cancer. The ACTION trial (448 patients), however, also showed that adjuvant chemotherapy only improved the overall and disease-free survival significantly in inadequately staged patients. In the patients who had undergone optimal surgical staging, adjuvant chemotherapy had no effect on the prognosis. In the ICON-1 study (477 patients) there was a favourable effect of adjuvant chemotherapy in the whole population. Analysis of this trial, however, shows that most probably only a very small percentage of these patients had undergone optimal surgical staging. Based on the published results, the best policy for the treatment of patients with early ovarian cancer is felt to be to make every effort to achieve optimal surgical staging and to reserve the burdensome adjuvant chemotherapy for those patients in whom optimal (re)staging is not feasible. abstract_id: PUBMED:24219980 Adjuvant platinum-based chemotherapy for borderline serous ovarian tumors with invasive implants. Background: Most borderline ovarian tumors (BOTs) are cured with surgery. However BOTs with invasive implants have a poor prognosis with a mortality of 20-40%. The benefit of adjuvant chemotherapy (CT) in this setting remains poorly defined. Methods: Retrospective study of serous BOT+invasive implants treated with adjuvant CT. Results: 36 patients were referred with serous BOTs+invasive implants and treated with surgery and platinum-based CT between 06/1982 and 02/2011. 83% were stage III/IV. Tumors demonstrated microinvasion, micropapillary pattern or desmoplastic implants in 53%, 47% and 67% of cases, respectively. 8% had fertility-sparing surgery. Taking into account initial and completion surgeries, R0 was achieved in 84% (27/32) (NA, N=4). The majority (72%) received a combination of platinum+taxane. 11% of patients experienced a G3/G4 toxicity. 13 of 36 (36%) patients relapsed at a median of 27.3 months after diagnosis of invasive implants. Among 12 patients with histologically confirmed relapse, 8 patients progressed with invasive disease in the form of carcinoma or invasive implants. 5 year PFS/OS were 67%/96%. Neither microinvasion, micropapillary pattern, nor desmoplastic implants predicted relapse. In cases with evaluable disease, an objective response to chemotherapy was observed in 4 of 6 patients. Conclusion: This is the largest study of BOT with invasive implants treated with surgery and adjuvant platinum-based CT. 
Treatment was well tolerated and the invasive relapse rate was 22% (8/36). Although numbers are small, the objective responses suggest a possible role for adjuvant CT in BOTs with invasive implants. Answer: Adjuvant chemotherapy in "oldest old" ovarian cancer patients, defined as those aged 80 years and older, presents a significant challenge due to the increased risk of toxicity-related treatment failure in this vulnerable population. In a study comparing chemotherapy-related toxicity data between patients aged ≥80 years and those aged <80 years, it was found that only 57% of the oldest old completed planned chemotherapy, despite initial treatment modifications and toxicity assessment (PUBMED:19235786). This suggests that the oldest old are at a higher risk of not completing chemotherapy due to toxicity. Factors such as creatinine clearance <65 mL/minute, 5% weight loss, prechemotherapy albumin level of <2 g/dL, and initiation of therapy with a single agent were identified as independent predictors of failure to complete chemotherapy (PUBMED:19235786). The study also highlighted the need for further research into the pharmacokinetics of chemotherapy in the elderly and more sensitive assessment of therapy-related toxicity (PUBMED:19235786). This indicates that there is a need to better understand how chemotherapy drugs are processed in the bodies of older patients and to develop more accurate ways to predict and monitor toxicity in this age group. In conclusion, while adjuvant chemotherapy can be beneficial for ovarian cancer patients, the "oldest old" population is particularly susceptible to treatment failure due to toxicity. Clinicians should be aware of the predictors of chemotherapy completion failure and consider them when planning and managing treatment for these patients. More research is needed to optimize chemotherapy regimens and monitoring for the elderly to improve their chances of completing treatment and to minimize the risk of toxicity-related treatment failure.
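Note: the independent predictors from PUBMED:19235786 come from a multivariate logistic regression, so their odds ratios combine additively on the log-odds scale. The sketch below illustrates that arithmetic only; the baseline intercept is not reported in the abstract, and the value used here is an arbitrary assumption, so the printed probability is purely illustrative.

import math

# Adjusted odds ratios reported in PUBMED:19235786 for failure to complete
# chemotherapy. The baseline intercept below is NOT reported in the abstract;
# it is an invented assumption used only to make the arithmetic concrete.
ODDS_RATIOS = {
    "creatinine_clearance_lt_65_ml_min": 4.6,
    "weight_loss_5_percent": 2.5,
    "prechemo_albumin_lt_2_g_dl": 3.65,
    "single_agent_initiation": 3.9,
}

def failure_probability(present_factors, baseline_log_odds=-2.0):
    # Each factor multiplies the odds, i.e. adds log(OR) on the log-odds scale.
    log_odds = baseline_log_odds
    for factor in present_factors:
        log_odds += math.log(ODDS_RATIOS[factor])
    return 1.0 / (1.0 + math.exp(-log_odds))

# Illustrative case: impaired renal function plus single-agent initiation.
p = failure_probability(["creatinine_clearance_lt_65_ml_min", "single_agent_initiation"])
print(f"illustrative probability of failing to complete chemotherapy: {p:.2f}")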
Instruction: Does the ex utero intrapartum treatment to extracorporeal membrane oxygenation procedure change outcomes for high-risk patients with congenital diaphragmatic hernia? Abstracts: abstract_id: PUBMED:27836357 Does the ex utero intrapartum treatment to extracorporeal membrane oxygenation procedure change morbidity outcomes for high-risk congenital diaphragmatic hernia survivors? Purpose: In high-risk congenital diaphragmatic hernia (CDH), significant barotrauma or death can occur before extracorporeal membrane oxygenation (ECMO) can be initiated. We previously examined ex utero intrapartum treatment (EXIT)-to-ECMO in our most severe CDH patients, but demonstrated no survival advantage. We now report morbidity outcomes in survivors of this high-risk cohort to determine whether EXIT-to-ECMO conferred any benefit. Methods: All CDH survivors with <15% predicted lung volume (PPLV) from September 1999 to December 2010 were included. We recorded prenatal imaging, defect size, and pulmonary, nutritional, cardiac, and neurodevelopmental outcomes. Results: Seventeen survivors (8 EXIT-to-ECMO, 9 non-EXIT) had an average PPLV of 11.7%. Eight of 9 non-EXIT received ECMO within 2 days. There were no significant defect size differences between groups, mostly left-sided (13/17) and type D (12/17). Average follow-up was 6.7 years (0-13 years). There were no statistically significant differences in outcomes, including supplemental oxygen, diuretics, gastrostomy, weight-for-age Z scores, fundoplication, pulmonary hypertension, stroke or intracranial hemorrhage rate, CDH recurrence, and reoperation. No survivor in our cohort was neurologically devastated. All had mild motor and/or speech delay, which improved in most. Conclusions: In this pilot series of severe CDH survivors, EXIT-to-ECMO confers neither significant survival nor long-term morbidity benefit. Level Of Evidence: Level III treatment study. abstract_id: PUBMED:22703768 Does the ex utero intrapartum treatment to extracorporeal membrane oxygenation procedure change outcomes for high-risk patients with congenital diaphragmatic hernia? Purpose: In the most severe cases of congenital diaphragmatic hernia (CDH), significant barotrauma or death can occur before advanced therapies such as extracorporeal membrane oxygenation (ECMO) can be initiated. We have previously examined the use of the ex utero intrapartum treatment (EXIT) to ECMO procedure (EXIT with placement on ECMO) in high-risk infants and reported a survival advantage. We report our experience with EXIT to ECMO in a more recent cohort of our patients with the most severe CDH. Methods: Every patient with less than 15% predicted lung volume during January 2005 to December 2010 was included. We obtained data on prenatal imaging, size and location of the defect, and survival. Results: Seventeen high-risk infants were identified. All 17 (100%) received ECMO and required a patch. Six children were delivered by EXIT to ECMO, and only 2 (33%) survived. An additional patient was delivered by EXIT to intubation with ECMO on standby and died. Of the 10 children who did not receive EXIT, 5 (50%) survived. Conclusions: No clear survival benefit with the use of the EXIT to ECMO procedure was demonstrated in this updated report of our high-risk CDH population. The general application of EXIT to ECMO for CDH is not supported by our results. abstract_id: PUBMED:17208548 Ex utero intrapartum treatment with extracorporeal membrane oxygenation for severe congenital diaphragmatic hernia.
Purpose: The purpose of this study was to determine whether ex utero intrapartum treatment with extracorporeal membrane oxygenation (EXIT to ECMO) is a reasonable approach for managing patients antenatally diagnosed with severe congenital diaphragmatic hernia (CDH). Methods: A 6-year retrospective review was performed on fetuses with severe CDH (liver herniation and a lung/head ratio <1.4, percentage of predicted lung volume <15, and/or congenital heart disease). Fourteen of the patients underwent EXIT with a trial of ventilation. Fetuses with poor preductal oxygen saturations despite mechanical ventilation received ECMO before their delivery. Maternal-fetal outcomes were analyzed. Results: There were no reported maternal complications. Three babies passed the ventilation trial and survived, but 2 of them required ECMO within 48 hours. The remaining 11 fetuses received ECMO before their delivery. Overall survival after EXIT-to-ECMO was 64%. At 1-year follow-up, all survivors had weaned off supplemental oxygen, but 57% required diuretics and/or bronchodilators. Conclusion: This is the largest reported experience using EXIT to ECMO in the management of severe CDH. The EXIT-to-ECMO procedure is associated with favorable survival rates and acceptable pulmonary morbidity in fetuses expected to have a poor prognosis under conventional management. abstract_id: PUBMED:23395145 Ex Utero Intrapartum Therapy. The Ex Utero Intrapartum Therapy (EXIT) procedure was initially developed to secure the airway in fetuses at delivery after they had undergone in utero tracheal occlusion for congenital diaphragmatic hernia. Indications for the EXIT procedure have been expanded to include any delivery in which the prenatal diagnosis raises concern for neonatal airway compromise, such as large neck masses and Congenital High Airway Obstruction Syndrome, or when a difficult resuscitation is anticipated such as with large lung lesions. Uteroplacental blood flow and gas exchange are maintained through the use of inhalational anesthetics to allow optimal uterine relaxation with partial delivery of the fetus and amnioinfusion to sustain uterine distension. Using the EXIT procedure, sufficient time is provided on placental bypass to perform life-saving procedures such as bronchoscopy, laryngoscopy, endotracheal intubation, tracheostomy, cannulation for extracorporeal membrane oxygenation, and resection of lung masses or resection of neck masses in a controlled setting, thus avoiding a potential catastrophe. abstract_id: PUBMED:23662685 The ex utero intrapartum treatment (EXIT) procedure: application of a new therapeutic paradigm. The ex utero intrapartum treatment (EXIT) procedure is a term given to a technique that can transform a potentially fatal neonatal emergency into a controlled intervention with an improved outcome. It has revolutionised the care of prenatally diagnosed congenital malformations in which severe upper airway obstruction is anticipated. An extended period of utero-placental circulation can be utilised to avoid profound cardiopulmonary compromise. Its therapeutic applications have been broadened to include fetuses with congenital diaphragmatic hernia after tracheal plugging, high-risk intrathoracic masses, severe cardiac malformations and conjoined twins. It requires the co-ordination of a highly skilled and experienced multidisciplinary team. The recent enthusiasm for the EXIT procedure needs to be balanced against maternal morbidity.
Specific indications and guidelines are likely to be refined as a consequence of ongoing advances in fetal intervention and antenatal imaging. abstract_id: PUBMED:12961113 Ex utero intrapartum therapy. The hallmark of the ex utero intrapartum therapy (EXIT) procedure is the maintenance of uteroplacental blood flow and gas exchange. This goal is achieved with the use of inhalational agents to relax uterine tone, a continuous amnioinfusion to stabilize uterine volume, and partial exposure of the fetus. From March 1996 to December 2002, 43 EXIT procedures were performed at the Children's Hospital of Philadelphia (CHOP). Indications included airway obstruction from fetal neck masses (n = 19), reversal of tracheal occlusion for congenital diaphragmatic hernia (CDH; n = 13), resection of massive congenital cystic adenomatoid malformation of the lung (n = 5), congenital high airway obstruction syndrome (n = 3), EXIT-to-extracorporeal membrane oxygenation for a fetus with CDH and a cardiac defect (n = 1), unilateral pulmonary agenesis (n = 1), and thoracoomphalopagus conjoined twins (n = 1). Eight fetuses required initial tracheotomy at the time of EXIT to secure the airway. One death occurred during the EXIT procedure secondary to inability to secure the airway with parental refusal for tracheotomy. In all cases, the EXIT procedure provided time on uteroplacental gas exchange to perform procedures such as direct laryngoscopy, bronchoscopy, tracheotomy, arterial and venous access, resection of neck or lung masses, and ECMO cannulation, thereby converting an emergent crisis into a controlled situation. abstract_id: PUBMED:30664590 Development of the Pediatric Extracorporeal Membrane Oxygenation Prediction Model for Risk-Adjusting Mortality. Objectives: To develop a prognostic model for predicting mortality at the time of extracorporeal membrane oxygenation initiation for children, which is important for determining center-specific risk-adjusted outcomes. Design: Multivariable logistic regression using a large national cohort of pediatric extracorporeal membrane oxygenation patients. Setting: The ICUs of the eight tertiary care children's hospitals of the Collaborative Pediatric Critical Care Research Network. Patients: Five hundred fourteen children (<19 yr old), enrolled with an initial extracorporeal membrane oxygenation run for any indication between January 2012 and September 2014. Interventions: None. Measurements And Main Results: A total of 514 first extracorporeal membrane oxygenation runs were analyzed with an overall mortality of 45% (n = 232). Weighted logistic regression was used for model selection and internal validation was performed using cross validation. The variables included in the Pediatric Extracorporeal Membrane Oxygenation Prediction model were age (pre-term neonate, full-term neonate, infant, child, and adolescent), indication for extracorporeal membrane oxygenation (extracorporeal cardiopulmonary resuscitation, cardiac, or respiratory), meconium aspiration, congenital diaphragmatic hernia, documented blood stream infection, arterial blood pH, partial thromboplastin time, and international normalized ratio. The highest risk of mortality was associated with the presence of a documented blood stream infection (odds ratio, 5.26; CI, 1.90-14.57) followed by extracorporeal cardiopulmonary resuscitation (odds ratio, 4.36; CI, 2.23-8.51). The C-statistic was 0.75 (95% CI, 0.70-0.80).
Conclusions: The Pediatric Extracorporeal Membrane Oxygenation Prediction model represents a model for predicting in-hospital mortality among children receiving extracorporeal membrane oxygenation support for any indication. Consequently, it holds promise as the first comprehensive pediatric extracorporeal membrane oxygenation risk stratification model, which is important for benchmarking extracorporeal membrane oxygenation outcomes across many centers. abstract_id: PUBMED:2378332 Extracorporeal membrane oxygenation for hydrocarbon aspiration. Extracorporeal membrane oxygenation is a pulmonary bypass procedure that has been employed in adults to provide temporary treatment for reversible acute pulmonary and cardiac insufficiency. The technology of membrane oxygenation has been used since 1977 in neonates with predictably fatal pulmonary failure due to respiratory distress syndrome, persistent fetal circulation or persistent pulmonary hypertension of the newborn, meconium aspiration syndrome, and congenital diaphragmatic hernia. The use of extracorporeal membrane oxygenation in older children with other pulmonary disorders has been limited. We report two cases of hydrocarbon aspiration involving petroleum-based products, both successfully treated with extracorporeal membrane oxygenation. A 15-month-old male infant who aspirated baby oil (light mineral oil) is particularly unusual owing to the generally expected low risk of aspiration with a hydrocarbon of such viscosity (greater than 60 Saybolt Universal Seconds). The second patient is a 16-month-old male infant who aspirated furniture polish (mineral seal oil). In both children severe intractable hypoxemia developed despite intensive ventilatory support, and they became candidates for alternative therapy. Extracorporeal membrane oxygenation provides a potentially life-saving option when a patient fails to respond to conventional therapy for hydrocarbon aspiration. abstract_id: PUBMED:32195608 Infection profile in neonatal patients during extracorporeal membrane oxygenation. Objectives: To describe risk factors for acquired infection during neonatal extracorporeal membrane oxygenation and to examine the predictive value of inflammatory markers in the diagnosis of infection. Methods: A retrospective study was conducted with data for patients under 30 days supported with extracorporeal membrane oxygenation from 2003 to April 2016, in a neonatal intensive care unit. Results: Our study included 160 neonatal patients; the average age at cannulation was 8.5 days and the duration of extracorporeal membrane oxygenation support was 9.7 days. The incidence of confirmed infection was 23%. Patients with confirmed infection more frequently presented with vaginal delivery, lower birth weight, female sex, a diagnosis of congenital diaphragmatic hernia, and a longer duration of extracorporeal membrane oxygenation. When comparing the group of patients with confirmed infection and suspicion of infection, there were no significant differences in the inflammatory markers. When calculating the slope for each one, the difference in white blood cell count slope 72 h before the infection is significant; in patients with confirmed infection, the white blood cell count increases (slope: 0.25), versus the group of patients with suspected infection in whom the count decreases (slope: -0.39). No differences were found in other variables.
Conclusion: Our study found that the factors that increase the risk of infection are lower birth weight, vaginal birth, duration of extracorporeal membrane oxygenation, and a rising white blood cell count 72 h prior to infection/suspicion. Further studies are necessary to include or definitively rule out the use of these factors and the biomarkers as predictors of infection in neonatal patients supported with extracorporeal membrane oxygenation. abstract_id: PUBMED:32903209 CD34 immunostain increases sensitivity of the diagnosis of fetal vascular malperfusion in placentas from ex-utero intrapartum treatment. Objectives: The EXIT (ex-utero intrapartum treatment) procedure is a fetal survival-increasing modification of cesarean section. Previously we found an increased incidence of fetal vascular malperfusion (FVM) in placentas from EXIT procedures, which indicates the underlying stasis of fetal blood flow in such cases. This retrospective analysis analyzes the impact of the recently introduced CD34 immunostain for the FVM diagnosis in placentas from EXIT procedures. Methods: A total of 105 placentas from EXIT procedures (48 to airway, 43 to ECMO and 14 to resection) were studied. In 73 older cases, the placental histological diagnosis of segmental FVM was made on H&E-stained placental sections only (segmental villous avascularity) (Group 1), while in 32 most recent cases, the CD34 component of double E-cadherin/CD34 immunostained slides was also routinely used to detect the early FVM (endothelial fragmentation, villous hypovascularity) (Group 2). Twenty-three clinical and 47 independent placental phenotypes were compared by χ² or ANOVA, where appropriate. Results: There was no statistical significance between the groups in rates of segmental villous avascularity (29 vs. 34%), but performing CD34 immunostain resulted in adding and/or upgrading 12 more cases of segmental FVM in Group 2, thus increasing the sensitivity of placental examination for FVM by 37%. There were no other statistically significant differences in clinical (except for congenital diaphragmatic hernias, which were statistically significantly more common in Group 2: 34 vs. 56%, p=0.03) and placental phenotypes, confirming that the groups were otherwise comparable. Conclusions: The use of CD34 immunostain increases the sensitivity of placental examination for FVM by 1/3, which may improve the neonatal management by revealing the increased likelihood of the potentially life-threatening neonatal complications. Answer: The ex utero intrapartum treatment (EXIT) to extracorporeal membrane oxygenation (ECMO) procedure has been evaluated in several studies to determine its impact on outcomes for high-risk patients with congenital diaphragmatic hernia (CDH). However, the evidence does not consistently demonstrate a clear benefit in terms of survival or long-term morbidity. One study reported that in a cohort of CDH survivors with less than 15% predicted lung volume, there were no statistically significant differences in long-term morbidity outcomes between those who underwent EXIT-to-ECMO and those who did not. This included measures such as supplemental oxygen, diuretics, gastrostomy, weight-for-age Z scores, fundoplication, pulmonary hypertension, stroke or intracranial hemorrhage rate, CDH recurrence, and reoperation. The study concluded that EXIT-to-ECMO neither conferred a significant survival advantage nor long-term morbidity benefit in this severe CDH cohort (PUBMED:27836357).
Another study echoed these findings, stating that no clear survival benefit was demonstrated with the use of EXIT-to-ECMO in their updated report of a high-risk CDH population. The study suggested that the general application of EXIT-to-ECMO for CDH is not supported by their results (PUBMED:22703768). However, a retrospective review of fetuses with severe CDH suggested that the EXIT-to-ECMO procedure is associated with favorable survival rates and acceptable pulmonary morbidity in fetuses expected to have a poor prognosis under conventional management (PUBMED:17208548). The EXIT procedure has been expanded to include deliveries where neonatal airway compromise is anticipated, and it allows for life-saving procedures to be performed in a controlled setting (PUBMED:23395145, PUBMED:23662685, PUBMED:12961113). Despite the potential benefits, the evidence for its use specifically in CDH patients remains mixed, and further research may be needed to refine specific indications and guidelines for its application (PUBMED:23662685). In summary, while the EXIT-to-ECMO procedure has been used in the management of severe CDH, the current evidence does not consistently support a significant change in outcomes for high-risk CDH patients when compared to conventional management strategies.
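Several of the categorical comparisons above (for example, the clinical and placental phenotype contrasts compared by χ2 tests in PUBMED:32903209, or the survival-rate contrasts between treatment groups) reduce to a 2x2 contingency test. Below is a minimal sketch using scipy; the counts are made up for illustration and are not taken from any study cited here.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = Group 1 / Group 2,
# columns = outcome present / outcome absent
table = [[25, 48],
         [18, 14]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")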
Instruction: Is cystatin C a promising parameter to determine postnatal outcome of prenatally diagnosed infravesical obstruction? Abstracts: abstract_id: PUBMED:19683759 Is cystatin C a promising parameter to determine postnatal outcome of prenatally diagnosed infravesical obstruction? Purpose: We investigated the prognostic power of cystatin C to determine renal functional outcome in patients with prenatally diagnosed infravesical obstruction. Materials And Methods: A total of 14 fetuses with ultrasound findings of bladder outlet obstruction were enrolled in the study. Amniotic fluid and fetal urine samples were obtained. Controls consisted of 25 fetuses undergoing amniocentesis for obstetrical reasons. Three consecutive bladder punctures were performed to obtain fetal urine samples. In addition to standard prognostic parameters, cystatin C was measured in urine and amniotic fluid samples. Results: Among the study group, 6 pregnancies with poor prognostic parameters were terminated and 5 fetuses died postnatally. Two of 3 fetuses with favorable urinary indices by standard prognostic parameters and relatively low cystatin C levels survived postnatally. Mean serum creatinine was 0.2 mg/dl at 1-year follow-up after valve ablation. Mean +/- SD fetal urine levels of cystatin C were 1.44 +/- 1.53 mg/l (range 0.05 to 5.62), 1.35 +/- 1.43 mg/l (0.05 to 5.74) and 1.63 +/- 1.46 mg/l (0.05 to 5.89) in consecutive punctures. Mean +/- SD amniotic fluid cystatin C levels were 1.91 +/- 0.46 mg/l (range 1.1 to 2.8) in the study group and 1.12 +/- 0.20 mg/l (0.71 to 1.69) in controls (p = 0.0001). Amniotic fluid cystatin C levels were significantly higher in fetuses with suspected infravesical obstruction (study group and poor prognostic subgroup) compared to controls. There was a significant correlation between fetal urine (second and third punctures) and amniotic fluid in terms of cystatin C level (p = 0.038 and p = 0.04, respectively). Conclusions: In fetuses with prenatal ultrasound signs consistent with infravesical obstruction, urinary levels of cystatin C progressively increase in consecutive samples. In terms of cystatin C level, amniotic fluid may serve as a surrogate for fetal urine in suspected prenatal infravesical obstruction. Amniotic fluid cystatin C level was significantly increased in obstructed fetuses compared to normal controls. The cystatin C level in an amniotic fluid sample may be sufficient to provide prognostic information in prenatally diagnosed infravesical obstruction. abstract_id: PUBMED:22474211 Is Cystatin C a promising marker of renal function, at birth, in neonates prenatally diagnosed with congenital kidney anomalies? Background: Despite the increased prenatal diagnosis of congenital abnormalities of the kidney and urinary tract (CAKUT), no reliable renal marker for glomerular filtration rate (GFR) has yet been validated in neonates. Cystatin C (CysC) is specific to the neonate and is proposed as a sensitive marker for this population. The aims of the study were first to define a reference interval for CysC at birth in normal term babies in our center, and then to assess CysC as a marker of GFR in a group of term neonates prenatally diagnosed with CAKUT compared with controls. Methods: One hundred normal term neonates (control group) and 33 neonates with kidney malformation (KM) had the CysC levels in their cord blood measured. A reference interval for CysC in controls was calculated using non-parametric methods.
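The last sentence above refers to a non-parametric reference interval; a common convention is the central 95% of observed values, i.e., the 2.5th-97.5th percentiles. A minimal sketch under that assumption follows, with simulated control values standing in for the study's cord-blood measurements.

import numpy as np

def reference_interval(values, low=2.5, high=97.5):
    # Central 95% non-parametric reference interval
    return np.percentile(np.asarray(values, dtype=float), [low, high])

# Simulated cord-blood CysC values (mg/L) for 100 term controls
rng = np.random.default_rng(0)
cysc_controls = rng.normal(loc=2.0, scale=0.28, size=100)

lo, hi = reference_interval(cysc_controls)
print(f"reference interval: [{lo:.2f}-{hi:.2f}] mg/L")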
CysC from controls was compared first to the whole group of neonates with KM, then with the KM group divided into infants (n = 20) with unilateral kidney malformation (UKM) and those (n = 13) with bilateral kidney malformation (BKM). A multivariable analysis was performed to assess the difference in CysC between the groups with adjustment for other factors. The ability of CysC to discriminate neonates with BKM from the controls was assessed by a non-parametric receiver-operating characteristic (ROC) curve. Results: In the control group, the CysC reference interval was [1.54-2.64] mg/L with a median (M) CysC of 2.02 IQR [1.86-2.23]. In the neonates with KM, M CysC was 1.98 IQR [1.79-2.34]; 1.88 IQR [1.76-2.01] in the UKM group and 2.52 IQR [2.16-2.71] in the BKM group. In a multivariate regression analysis, CysC was significantly increased (P < 0.001) in BKM compared with controls, with an increment of 24.5%, independent of gender, weight and size. The ROC curve analysis, comparing BKM versus controls with a chosen cut-off for CysC of 2.34, showed a sensitivity of 69% and a specificity of 86%. Conclusions: Using the CysC reference interval validated in our center, we showed a significant increase of CysC in neonates presenting with BKM compared with controls and with those with UKM. abstract_id: PUBMED:26505835 Urinary IP-10, MCP-1, NGAL, Cystatin-C, and KIM-1 Levels in Prenatally Diagnosed Unilateral Hydronephrosis: The Search for an Ideal Biomarker. Objective: To investigate the urinary interferon gamma-induced protein 10 (IP-10), monocyte chemotactic protein 1 (MCP-1), neutrophil gelatinase-associated lipocalin (NGAL), cystatin-C, and kidney injury molecule-1 (KIM-1) levels in the management of children with prenatally diagnosed unilateral hydronephrosis. Materials And Methods: Twenty-seven children with antenatally diagnosed hydronephrosis were enrolled into the study. The controls consisted of 9 healthy children (6 boys, 3 girls; mean age: 41.77 ± 5.30 months). Thirteen children (9 boys, 4 girls; mean age: 48.46 ± 21.11 months) underwent pyeloplasty on follow-up; the remaining 14 (13 boys, 1 girl; mean age: 36.57 ± 14.02 months) were followed up after being diagnosed as having nonobstructive dilatation (NOD). The urinary marker levels were measured in the pyeloplasty, the NOD, and the control groups. Results: The preoperative concentrations of IP-10, MCP-1, NGAL, and KIM-1 were significantly higher in the pyeloplasty group than in the control group (P = .024, P = .002, P = .032, P = .001, respectively). The urinary IP-10 and MCP-1 levels were also significantly higher in the pyeloplasty group than in the NOD group (P = .038, P = .037, respectively). There was no significant difference between the pyeloplasty group and the NOD group regarding urinary NGAL and KIM-1. In the pyeloplasty group, urinary marker levels except cystatin-C were significantly decreased in the postoperative period. Conclusion: A decrease in levels of IP-10, MCP-1, NGAL, and KIM-1 after pyeloplasty may be used as a predictor of surgical outcome. Additionally, IP-10 and MCP-1 were superior to NGAL and KIM-1 in predicting who required surgery. abstract_id: PUBMED:18176906 Feto-amniotic shunting for lower urinary tract obstruction (LUTO)--a case report Posterior urethral valves are the main cause of fetal lower urinary tract obstruction (LUTO), with typical sonographic signs such as urinary tract dilatation and reduction of amniotic fluid.
LUTO is associated with a high rate of perinatal mortality and is the main cause of kidney failure in early childhood. In such cases, vesico-amniotic shunting is a common but risky fetal surgical procedure to prevent anhydramnios and lethal lung hypoplasia. This case report demonstrates that lung hypoplasia can be prevented by vesico-amniotic shunting of the fetal megacystis in the 23rd week of gestation in a fetus with lower urinary tract obstruction and anhydramnios. The prenatally measured concentration of cystatin C in fetal urine correlated with impaired postnatal kidney function. The indication and therapeutic benefit of vesico-amniotic shunting remain controversially discussed in the literature because to date there is no evidence for a reduction in perinatal or long-term morbidity due to early fetal kidney damage. Earlier ultrasound detection of LUTO during the first trimester of pregnancy raises the possibility of earlier intervention and protection of nephrogenesis. The first case studies of first-trimester vesico-amniotic shunting have been published; the influence on postnatal kidney function merits further well-structured investigation. abstract_id: PUBMED:36313872 Predictors of advanced chronic kidney disease in infancy after definitive vesicoamniotic shunting for congenital lower urinary tract obstruction. Background: Severe congenital lower urinary tract obstruction (cLUTO) is associated with poor postnatal outcomes, including chronic and end stage kidney disease, and high mortality. Studies of the impact of fetal intervention through vesicoamniotic shunting are marred by a device malfunction rate of up to 60%. In this study, we delineate the postnatal course and infant kidney function following definitive urinary diversion in utero. Materials And Methods: This is a retrospective, single-center cohort study of 16 male infants who survived the fetal intervention to birth, treated from 2010 to 2014. All had patent shunts in place at birth. Perinatal and biochemical characteristics were collected, with patients followed for one year, or until demise, with serial measures of serum creatinine (SCr) and serum cystatin C (CysC). Results: Of the 16 males, 81% were non-white (38% black, 43% Hispanic). Shunts were placed at a median of 20 weeks (IQR 19,23) gestation, with median fetal bladder volume of 39 cm3 (IQR 9.9,65). All neonates were born preterm [median 34 weeks (IQR 31,35)] and the majority with low birth weight [median 2340 grams (1,895, 2,600)]. 63% required positive pressure ventilation. Advanced chronic kidney disease stage 4-5 at 1 year of age was predicted by neonatal characteristics: peak SCr ≥2 mg/dl, time to peak SCr > 6 days, discharge SCr ≥1.0 mg/dl, CysC ≥2.5 mg/l, urine protein:creatinine ≥4.8 mg/mg, urine microalbumin:creatinine ≥2.2 mg/mg. In infancy, a nadir SCr ≥0.5 mg/dl occurring before 160 days (5.3 months) of age was also predictive of advanced chronic kidney disease stage 4-5 at 1 year. Three patients died in the neonatal period, with 1 receiving kidney replacement therapy (KRT). Three additional patients required KRT before 12 months of age. Conclusions: Even with definitive vesicoamniotic shunting for cLUTO, postnatal morbidity and mortality remain high, emphasizing the role of renal dysplasia, in spite of urinary diversion, in postnatal kidney dysfunction.
Neonatal and infant biochemical parameters exhibit distinct trends that offer families and physicians a better understanding of the prognosis of childhood kidney function. abstract_id: PUBMED:30979613 The role of renal biomarkers to predict the need of surgery in congenital urinary tract obstruction in infants. Introduction: The diagnosis of renal function impairment and deterioration in congenital urinary tract obstruction (UTO) continues to be extremely challenging. The use of new renal biomarkers in this setting may favor early renal injury detection, allowing for a reliable choice of optimal therapeutic options and the prevention or minimization of definitive renal damage. Objective: The aim of the study was to investigate a selection of promising biomarkers of renal injury with the intention of evaluating and comparing their profile with clinically based decisions for surgical intervention in infants with congenital obstructive uropathies. Study Design: The first-year profile of renal biomarkers, serum creatinine (sCr), serum and urine cystatin C (CyC), neutrophil gelatinase-associated lipocalin (NGAL), kidney injury molecule-1 (KIM-1), transforming growth factor beta-1 (TGF-β1), retinol-binding protein (RBP), and microalbuminuria (μALB), was analyzed in a cohort of 37 infants with congenital UTO, divided into three subgroups: 14 cases with grade III unilateral hydro(uretero)nephrosis, 13 cases with grade III bilateral hydro(uretero)nephrosis, and 10 cases with low urinary tract obstruction (LUTO), compared with 24 healthy infants matched by gestational age and birth weight. Serum and urine samples were stored at -70 °C and thereafter analyzed by quantitative enzymatic immunoassay. Results: Compared with the control group, all renal biomarker values were significantly increased in patients (P ≤ 0.02). In the unilateral hydronephrosis and LUTO groups, RBP (P ≤ 0.043), NGAL (P ≤ 0.043), KIM-1 (P ≤ 0.03), and TGF-β1 (P ≤ 0.034) values dropped significantly after surgery. Neutrophil gelatinase-associated lipocalin alone and in combination with urine and serum CyC demonstrated the best performance in determining the need for surgery (area under the curve, 0.801 and 0.881, respectively). Biomarker profile analysis was suggestive of surgical intervention in 55.4% (7/13) of non-operated cases, and most of the biomarker values were above the cutoff levels within at least 3 months before the clinically based surgical decision in 58% (14/24) of all operated patients. Discussion: To the best of the authors' knowledge, this is the first study to present the clinical use of a selected group of serum and urinary biomarkers in the setting of UTO to distinguish between patients who would benefit from surgical intervention. The most promising results were obtained using NGAL, RBP, TGF-β1, and KIM-1, especially in the unilateral hydro(uretero)nephrosis and LUTO subgroups when compared with the control group. Conclusions: Urine biomarkers, alone and in combination, demonstrated high potential as a non-invasive diagnostic tool for identifying infants who may benefit from earlier surgical intervention. abstract_id: PUBMED:29521269 The effect of darbepoetin alfa on renal fibrosis in rats with acute unilateral ureteral obstruction. Objectives: The most important treatment strategy for obstructive nephropathy is to protect renal tissue from the deleterious effects of fibrosis. Therefore, we sought to investigate the renoprotective effects of darbepoetin alfa in unilateral ureteral obstruction.
Methods: We used 12 female and 12 male 3-month-old Wistar rats weighing between 250 and 350 g. The rats were divided equally into sham, darbepoetin and control groups. With the exception of the sham group, left unilateral ureteral obstructions were created in all of the rats. The darbepoetin group received perioperative darbepoetin alfa at a dose of 10 mg/kg. The rats were sacrificed on postoperative day 7, and 3-cc blood samples and bilateral renal specimens were collected from each rat. Results: Renal ectasia was observed significantly less frequently in the darbepoetin group than in the obstruction group (p<0.001). Additionally, the uptake rates of cortical TNF and medullary SMA in the darbepoetin group were comparable to those in the sham group but lower than those in the ureteral obstruction group (p<0.001 and p<0.008, respectively). When biomarkers of renal injury, including cystatin-C, malondialdehyde, and B2 microglobulin, were evaluated in combination, B2 microglobulin was found at higher levels in the ureteral obstruction group (p<0.004). Conclusion: As is known, pelvicalyceal ectasia transmits intrapelvic pressure into the renal tubular system via renal reflux; therefore, pelvicalyceal ectasia can be used as an indicator of renal tubular pressure. Although renal tubular pressure was not quantitatively evaluated, which is a limitation of our study, the parallel levels of renal ectasia detected in the sham and DPO groups suggest that this drug (darbepoetin alfa) can decrease renal tubular pressure in acute ureteral obstruction. Moreover, B2 microglobulin levels in the sham and DPO groups differed from those of the ureteral obstruction group, suggesting that DPO does not impair renal perfusion in addition to its lowering effect on renal tubular pressure. We think that in countries with a high incidence of stone disease, such as ours, DPO may be among the medical treatment alternatives aimed at preserving renal reserve. abstract_id: PUBMED:29522906 Daily therapy with a slow-releasing H2S donor GYY4137 enables early functional recovery and ameliorates renal injury associated with urinary obstruction. Objectives: To assess the effects of the slow-releasing H2S donor GYY4137 on post-obstructive renal function and injury following unilateral ureteral obstruction (UUO) by using the UUO and reimplantation (UUO-R) model in rats and to elucidate potential mechanisms by using an in vitro model of epithelial-mesenchymal transition (EMT). Methods: Male Lewis rats underwent UUO at the left ureterovesical junction. From post-operative day (POD) 1-13, rats received daily intraperitoneal (IP) injection of phosphate buffered saline (PBS, 1 mL) or GYY4137 (200 μmol/kg/day in 1 mL PBS, IP). On POD 14, the ureter was reimplanted back into the bladder, followed by a right nephrectomy. Urine and serum samples were collected to monitor renal function. On POD 30, the left kidney was removed and tissue sections were stained with H&E, TUNEL, CD68, CD206, myeloperoxidase, and Masson's trichrome to determine cortical thickness, apoptosis, inflammation, and fibrosis. In our in vitro model of EMT, NRK52E cells were treated with 10 ng/mL TGF-β1, 10 μM GYY4137 and/or 50 μM GYY4137. Western blot analysis was performed to determine the expression of E-cadherin, vimentin, Smad7 and TGF-β1 receptor II (TβRII). Results: GYY4137 led to a moderate decrease in post-obstructive serum creatinine, cystatin C and FENa.
We also observed a trend towards a decrease in post-obstructive proteinuria following GYY4137 treatment. Histologically, we observed a significant decrease in apoptosis, inflammation, and fibrosis. Furthermore, our in vitro studies demonstrate that in the presence of TGF-β1, GYY4137 significantly decreases vimentin and TβRII and significantly increases E-cadherin and Smad7. Conclusions: H2S may help to accelerate the recovery of renal function post-obstruction and attenuates renal injury associated with UUO. It is possible that H2S mitigates fibrosis by regulating the TGF-β1-mediated EMT pathway. Taken together, our data suggest that H2S may be a potential novel therapy for improving renal function and limiting renal injury associated with obstructive uropathy. abstract_id: PUBMED:38344995 Diagnostic value of multi-parameter ultrasound evaluation in sepsis complicated by acute kidney injury. Background: This study aimed to investigate the diagnostic value of multi-parameter ultrasound evaluation in sepsis complicated by acute kidney injury (AKI). Methods: Patients were divided into an AKI group (n = 50) and a non-injury group (n = 50) based on the presence of AKI. The clinical characteristics were collected, and renal function parameters between the two groups were compared, including 24-h urine volume, serum creatinine, urea, serum cystatin C (CysC), renal parenchymal thickness (RPT), renal artery resistance index (RI), and multi-parameter ultrasound scoring (MPUS). Additionally, logistic regression analysis was conducted to determine factors influencing sepsis complicated by AKI. The predictive value was evaluated using a receiver operating characteristic (ROC) curve. Results: In the AKI group, creatinine, CysC, urea, MPUS score, RPT, and RI values were significantly higher, while the 24-h urine volume was lower, than those in the non-injury group (p < 0.01). Moreover, multivariate logistic analysis indicated that high CysC and RI values were independent risk factors, whereas high 24-h urine volume and low MPUS were independent protective factors, for sepsis-induced AKI. The ROC curve demonstrated that RI (AUC = 0.906) was more effective than 24-h urine volume (AUC = 0.797), CysC (AUC = 0.730), and MPUS (AUC = 0.794) in identifying sepsis-induced AKI. Conclusion: High RI values increase the risk of sepsis-induced AKI, whereas low MPUS may reduce it. RI showed high diagnostic value for sepsis complicated by AKI. abstract_id: PUBMED:38244116 Analysis of risk factors for stenosis after laparoscopic pyeloplasty in the treatment of ureteropelvic junction obstruction. Background: Laparoscopic ureteroplasty is an effective method for managing ureteropelvic junction obstruction. Despite its high success rate, there remains a subset of patients who do not experience improvement in hydronephrosis. Methods: The study retrospectively analyzed the data of 143 patients with ureteropelvic junction obstruction (UPJO) who underwent laparoscopic pyeloplasty (LP) in our hospital from January 2015 to May 2022. Logistic regression was used to analyze the risk factors for recurrent stenosis after LP for UPJO. Results: Out of these patients, 119 had complete clinical data and follow-up records. Among these patients, restenosis occurred in nine cases after the operation. There was a statistically significant difference in blood loss (P < 0.05).
Univariate and multivariate logistic regression analysis revealed that the preoperative separation degree of the renal pelvis, cystatin C, and intraoperative blood loss were potential risk factors for recurrent stenosis after primary LP. When stratified by split renal function (SRF), the odds ratio (OR) was 7.850 (P = 0.044), indicating that it was an independent risk factor for postoperative restenosis. Similarly, the OR for stenotic segment length was 0.025 (P = 0.011), also indicating that it was an independent risk factor for restenosis. The areas under the receiver operating characteristic curve for stenotic segment length and SRF were 0.9056 and 0.7697, respectively. Conclusion: In our study, we identified that preoperative renal pelvis separation, cystatin C, and intraoperative blood loss were potential risk factors for postoperative restenosis. SRF and stenotic segment length were independent risk factors for postoperative restenosis. Answer: Yes, cystatin C appears to be a promising parameter to determine postnatal outcome of prenatally diagnosed infravesical obstruction. Studies have shown that cystatin C levels can be indicative of renal function and may provide prognostic information in cases of prenatal infravesical obstruction. For instance, a study found that amniotic fluid cystatin C levels were significantly higher in fetuses with suspected infravesical obstruction compared to controls, suggesting that the cystatin C level in amniotic fluid may be sufficient to provide prognostic information in these cases (PUBMED:19683759). Another study demonstrated that cystatin C was significantly increased in neonates with bilateral kidney malformation compared to controls and those with unilateral kidney malformation, indicating its potential as a marker of glomerular filtration rate (GFR) in neonates prenatally diagnosed with congenital kidney anomalies (PUBMED:22474211). In children with prenatally diagnosed unilateral hydronephrosis who underwent pyeloplasty, several urinary markers decreased postoperatively, although cystatin-C was the exception, so its role in predicting surgical outcome in that setting is less clear (PUBMED:26505835). Furthermore, the prenatally measured concentration of cystatin C in fetal urine correlated with impaired postnatal kidney function, reinforcing its potential as a prognostic biomarker (PUBMED:18176906). Lastly, in a study of infants who underwent definitive vesicoamniotic shunting for congenital lower urinary tract obstruction, serum cystatin C levels were among the predictors of advanced chronic kidney disease in infancy (PUBMED:36313872). These findings collectively support the notion that cystatin C is a promising parameter for determining postnatal outcomes in prenatally diagnosed infravesical obstruction.
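Several abstracts above evaluate candidate markers with ROC analysis and then report sensitivity and specificity at a chosen cutoff (e.g., the CysC cutoff of 2.34 mg/L in PUBMED:22474211, or the AUC comparisons in PUBMED:38344995 and PUBMED:38244116). Below is a minimal sketch of that evaluation, assuming higher marker values indicate disease; the data are simulated, and the cutoff is reused from the CysC example purely for illustration.

import numpy as np
from sklearn.metrics import roc_auc_score

def sensitivity_specificity(y_true, marker, cutoff):
    # 'marker >= cutoff' calls disease
    y_true = np.asarray(y_true, dtype=bool)
    pred = np.asarray(marker) >= cutoff
    sensitivity = (pred & y_true).sum() / y_true.sum()
    specificity = (~pred & ~y_true).sum() / (~y_true).sum()
    return sensitivity, specificity

rng = np.random.default_rng(1)
controls = rng.normal(2.0, 0.3, 100)   # simulated marker, unaffected group
cases = rng.normal(2.6, 0.4, 30)       # simulated marker, affected group
marker = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(100), np.ones(30)])

sens, spec = sensitivity_specificity(labels, marker, cutoff=2.34)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
print(f"AUC = {roc_auc_score(labels, marker):.3f}")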
Instruction: The influence of body mass index on outcomes in patients undergoing cardiac surgery: does the obesity paradox really exist? Abstracts: abstract_id: PUBMED:25781994 The influence of body mass index on outcomes in patients undergoing cardiac surgery: does the obesity paradox really exist? Purpose: Obesity influences risk stratification in cardiac surgery in everyday practice. However, some studies have reported better outcomes in patients with a high body mass index (BMI): this is known as the obesity paradox. The aim of this study was to quantify the effect of diverse degrees of high BMI on clinical outcomes after cardiac surgery, and to assess the existence of an obesity paradox in our patients. Methods: A total of 2,499 consecutive patients requiring all types of cardiac surgery with cardiopulmonary bypass between January 2004 and February 2009 were prospectively studied at our institution. Patients were divided into four groups based on BMI: normal weight (18.5-24.9 kg∙m-2; n = 523; 21.4%), overweight (25-29.9 kg∙m-2; n = 1150; 47%), obese (30-34.9 kg∙m-2; n = 624; 25.5%) and morbidly obese (≥35 kg∙m-2; n = 152; 6.2%). Follow-up was performed in 2,379 patients during the first year. Results: After adjusting for confounding factors, patients with higher BMI presented worse oxygenation and better nutritional status, reflected by lower PaO2/FiO2 at 24 h and higher albumin levels 48 h after admission, respectively. Obese patients showed a higher risk for perioperative myocardial infarction (OR: 1.768; 95% CI: 1.035-3.022; p = 0.037) and septicaemia (OR: 1.489; 95% CI: 1.282-1.997; p = 0.005). In-hospital mortality was 4.8% (n = 118) and 1-year mortality was 10.1% (n = 252). No differences were found regarding in-hospital mortality between BMI groups. The overweight group showed better 1-year survival than normal weight patients (91.2% vs. 87.6%; log-rank p = 0.029; HR: 1.496; 95% CI: 1.062-2.108; p = 0.021). Conclusions: In our population, obesity increases perioperative myocardial infarction and septicaemia after cardiac surgery, but does not influence in-hospital mortality. Although we found better 1-year survival in overweight patients, our results do not support any protective effect of obesity in patients undergoing cardiac surgery. abstract_id: PUBMED:33181060 Influence of Body Mass Index on Perioperative Outcomes Following Pancreaticoduodenectomy. Background: Overweight and obesity are increasing year by year all over the world, and there is a correlation between overweight and obesity and the risk of pancreatic cancer. However, the relationship between overweight and obesity and perioperative outcomes of pancreaticoduodenectomy (PD) remains controversial. The purpose of this study was to investigate the effect of body mass index (BMI) on the perioperative outcome of PD. Methods: This study retrospectively evaluated 227 patients who underwent PD from 2015 to 2019. The patients were divided into three groups: an underweight group (BMI < 18.5 kg/m2), a normal weight group (18.5 ≤ BMI < 25 kg/m2), and an overweight group (BMI ≥ 25 kg/m2). The association between the BMI groups and the different perioperative results was examined. Finally, the independent risk factors for clinically relevant postoperative pancreatic fistula (CR-POPF) were analyzed by multivariate logistic regression. Results: The level of preoperative albumin was higher in patients of the overweight group (P = .03). The incidence of hypertension increased gradually in the three BMI groups (P = .039).
The preoperative median CA19-9 level was significantly higher in the underweight group than that in the other two groups (P = .001). The median operation time in the high BMI group was significantly longer than that in the other two groups. High BMI was an independent risk factor influencing CR-POPF after PD (P = .022, odds ratio 2.253, 95% confidence interval 1.123-4.518). Conclusions: Operation time of PD was increased in patients with high BMI. High BMI was an independent risk factor for the incidence of CR-POPF after PD. However, PD surgery is safe and feasible for patients with different BMI, and overweight and obese patients should not refuse PD surgery because of their BMI. abstract_id: PUBMED:28040677 Systematic review of prognostic roles of body mass index for patients undergoing lung cancer surgery: does the 'obesity paradox' really exist? Summary: The paradoxical benefit of obesity, the 'obesity paradox', has been recently identified in surgical populations. Our goal was to evaluate by a systematic review with meta-analysis the prognostic role of body mass index (BMI) and to identify whether the 'obesity paradox' exists in lung cancer surgery. Comprehensive literature retrieval was conducted in PubMed to identify the eligible articles. The odds ratios (OR) and hazard ratios (HR) with the corresponding 95% confidence intervals (CI) were used to synthesize in-hospital and long-term survival outcomes, respectively. The heterogeneity level and publication bias between studies were also estimated. Finally, 25 observational studies with 78 143 patients were included in this review. The pooled analyses showed a significantly better long-term survival rate in patients with higher BMI, but no significant benefit of increased BMI was found for in-hospital morbidity. The pooled analyses also showed that overall morbidity (OR: 0.84; 95% CI: 0.73-0.98; P = 0.025) and in-hospital mortality (OR: 0.78; 95% CI: 0.63-0.98; P = 0.031) were significantly decreased in obese patients. Obesity could be a strong predictor of the favourable long-term prognosis of lung cancer patients (HR: 0.69; 95% CI: 0.56-0.86; P = 0.001). The robustness of these pooled estimates was strong. No publication bias was detected. In summary, obesity has favourable effects on in-hospital outcomes and long-term survival of surgical patients with lung cancer. The 'obesity paradox' does have the potential to exist in lung cancer surgery. abstract_id: PUBMED:30076443 The impact of body mass index on metatarsalgia surgical outcomes. Introduction: Several studies have suggested that an increased body mass index (BMI) is a negative factor for forefoot plantar pain, but its influence on the surgical correction of metatarsalgia is unknown. The purpose of the present study is to evaluate the influence of the BMI on the surgical outcomes of metatarsalgia. It has been hypothesized that the higher the BMI, the worse the functional outcomes after metatarsalgia surgical treatment at one year follow-up. Material And Methods: A prospective cohort study that included all patients operated on for third rocker metatarsalgia was conducted. Weil's osteotomy was performed on all the patients operated on. The patients' pre-operative height, weight, and BMI were recorded. The patients were subsequently divided into three groups based on their BMI: group 1, the normal group (18.5 < BMI ≤ 25 kg/m2); group 2, the overweight group (25 < BMI ≤ 30 kg/m2); and group 3, the obese group (BMI > 30 kg/m2).
Pre-operative, post-operative, and differential AOFAS scores were used to evaluate and compare the groups. The post-operative VAS was also measured to assess pain. The correlation between the BMI and those variables was also analyzed. Results: After the exclusion criteria were applied, 107 patients were finally assessed. There were 22 patients (20.6%) in group 1, 52 patients (48.6%) in group 2, and 33 patients (30.8%) in group 3. No correlation was observed between the BMI and AOFAS (p > 0.05). Nor were any differences found when the three groups were compared (p > 0.05). Moreover, no correlation between the BMI and the VAS score was observed (p = 0.690). Conclusion: Obesity does not negatively influence functional outcomes after surgery for metatarsalgia in the short to medium term. Regardless of their BMI, patients with propulsive metatarsalgia improve in functionality after surgical treatment. abstract_id: PUBMED:28928331 Outcomes according to body mass index following laparoscopic surgery in patients with colorectal cancer. Purpose: Body mass index (BMI) may not be appropriate for different populations. Therefore, the World Health Organization (WHO) suggested 25 kg/m2 as a measure of obesity for Asian populations. The purpose of this report was to compare the oncologic outcomes of laparoscopic colorectal resection with BMI classified from the WHO Asia-Pacific perspective. Patients And Methods: All patients underwent laparoscopic colorectal resection from September 2006 to March 2015 at a tertiary referral hospital. A total of 2408 patients were included and classified into four groups: underweight (n = 112, BMI <18.5 kg/m2), normal (n = 886, 18.5-22.9 kg/m2), pre-obese (n = 655, 23-24.9 kg/m2) and obese (n = 755, >25 kg/m2). Perioperative parameters and oncologic outcomes were analysed amongst groups. Results: Conversion rate was the highest in the underweight group (2.7%, P < 0.001), whereas the obese group had the fewest harvested lymph nodes (21.7, P < 0.001). When oncologic outcomes were compared excluding Stage IV disease, the underweight group had the lowest overall (P = 0.007) and cancer-specific survival (P = 0.002). The underweight group had the lowest proportion of national health insurance but the highest rate of medical care (P = 0.012). Conclusion: The obese group had the fewest harvested lymph nodes, whereas the underweight group had the highest estimated blood loss, conversion rate to open approaches and the poorest overall and cancer-specific survivals. abstract_id: PUBMED:36768056 Body Mass Index and Late Adverse Outcomes after a Carotid Endarterectomy. A cohort study was conducted to examine the association of an increased body mass index (BMI) with late adverse outcomes after a carotid endarterectomy (CEA). It comprised 1597 CEAs, performed in 1533 patients at the Vascular Surgery Clinic in Belgrade, from 1 January 2012 to 31 December 2017. The follow-up lasted four years after CEA. Data for late myocardial infarction and stroke were available for 1223 CEAs, data for death for 1305 CEAs, and data for restenosis for 1162 CEAs. Logistic and Cox regressions were used in the analysis. The CEAs in patients who were overweight and obese were separately compared with the CEAs in patients with a normal weight. Out of 1223 CEAs, 413 (33.8%) were performed in patients with a normal weight, 583 (47.7%) in patients who were overweight, and 220 (18.0%) in patients who were obese.
According to the logistic regression analysis, the compared groups did not significantly differ in the frequency of myocardial infarction, stroke, and death, as late major adverse outcomes (MAOs), or in the frequency of restenosis. According to the Cox and logistic regression analyses, BMI was a predictor neither of late MAOs, analyzed separately or all together, nor of restenosis. In conclusion, being overweight and being obese were not related to the occurrence of late adverse outcomes after a carotid endarterectomy. abstract_id: PUBMED:30033742 Impact of Body Mass Index on Perioperative Outcomes of Endoscopic Pituitary Surgery. Background Endoscopic pituitary surgery (EPS) is increasingly being used for the treatment of pituitary lesions. Obesity is a growing epidemic in our nation associated with numerous comorbidities known to impact surgical outcomes. We present a multi-institutional database study evaluating the association between body mass index (BMI) and postsurgical outcomes of EPS. Methods Patients who underwent EPS from 2005 to 2013 were identified from the American College of Surgeons National Surgical Quality Improvement Program database. Preoperative variables, comorbidities, and postoperative outcomes, such as 30-day complications, morbidity, and mortality, were analyzed. Results A total of 789 patients were analyzed, of which 382 were obese (BMI ≥ 30) (48.4%). No difference in reoperation rate (P = .928) or unplanned readmission rate (P = .837) was found between the obese and nonobese groups. A higher overall complication rate was observed in the obese group compared to the nonobese counterparts (P = .005). However, when separated into surgical complications (3.7% vs 1.5%, P = .068) and medical complications (7.6% vs 3.9%, P = .027), only medical complications, specifically pneumonia, remained significantly different. EPS on obese patients was also associated with prolonged operating time (154.8 min vs 141.0 min, P = .011). Conclusions EPS may be a safe treatment option for pituitary lesions in the obese population. Although obese patients undergoing EPS are at increased risk of medical complications and prolonged operating times, this did not influence mortality, reoperation, or readmission rate. abstract_id: PUBMED:32630619 The Association between High Body Mass Index and Early Clinical Outcomes in Patients with Proximal Femur Fractures. Background: Fractures of the proximal femur constitute daily work in orthopedic trauma surgery. With the continuous increase of obesity in the general population, surgeons face several known technical challenges. The aim of this study was to investigate the association of high body mass index (BMI) in patients with proximal femur fractures with intra- and postoperative adverse events, as well as with functional outcomes after successful surgery. Methods: In this retrospective, single-center cohort study, 950 patients who sustained a fracture of the proximal femur (femoral neck fracture or trochanteric fracture) and underwent surgical treatment at our level I trauma center between 2003 and 2015 were included. Patient-specific data were obtained in regard to demographics, comorbidities, and fracture morphology. In-hospital postoperative complications (i.e., need for revision surgery, wound site infection, pneumonia, urinary tract infection, necessary transfusion, and deep-vein thrombosis) were analyzed, along with the length of hospitalization and overall mortality rate.
Functional outcome was assessed using the Barthel index and the patient's ability to walk on crutches. Mortality rate and need for revision surgery were assessed over a two-year time period. Any adverse event was correlated with one of the four WHO BMI groups. Results: The cohort included 80 (8.4%) underweight patients, 570 (60.0%) normal weight patients, 241 (25.4%) overweight patients, and 59 (6.2%) obese patients. We found more femoral neck fractures (506, or 53%) than trochanteric fractures (444, or 47%). In bivariate analysis, no significant difference was found in regard to overall mortality or postoperative complications. Hospitalization time (LOS) differed between the underweight (12.3 ± 4.8 days), normal (13.6 ± 7.8 days), overweight (14.2 ± 11.7 days), and obese patients (16.0 ± 9.7 days) (p = 0.040). Operation time increased stepwise with increasing BMI: underweight = 85.3 ± 42.9 min; normal weight = 90.2 ± 38.2 min; overweight = 99.9 ± 39.9 min; obese = 117.2 ± 61.5 min (p < 0.001). No significant difference was found by analyzing functional outcomes. However, patients with intermediate BMI levels (18.5-30 kg/m2) tended to achieve the best results, as represented by a higher Barthel index score and the patient's ability to walk on crutches. Conclusion: Increased BMI in patients with proximal femur fractures is associated with both longer operation time and length of hospitalization (LOS). Postoperative mobilization and functional outcomes appear to follow a reversed J-curve distribution (with overweight patients showing the best functional results), whereas both obese and underweight patients show poorer function. abstract_id: PUBMED:38402035 Influence of preconception body mass index on perinatal outcomes after metabolic and bariatric surgery. Introduction: Metabolic and bariatric surgery (MBS) is a very effective weight loss intervention, although it does not invariably reverse obesity status. Our aim was to evaluate whether, despite successful weight loss after MBS, persistence of obesity at the time of conception still carries additional risk of adverse perinatal outcomes. Methods: Retrospective study comparing pregnancy outcomes of women previously submitted to MBS with a preconception (PC) body mass index (BMI) < 30 kg/m2 or PC BMI ≥ 30 kg/m2. Results: Eighty pregnancies (n = 80) were included, 49 from women with a PC BMI < 30 kg/m2 and 31 with a PC BMI ≥ 30 kg/m2. Gestational weight gain was significantly lower (9.72 ± 7.10 vs. 13.81 ± 7.16, respectively; p = 0.01) and neonatal intensive care unit admissions were significantly higher (5% vs. 0%, respectively; p = 0.02) in women with PC BMI ≥ 30 kg/m2 as compared to those with PC BMI < 30 kg/m2. There were no statistically significant differences in gestational diabetes, anemia, fetal growth restriction, prematurity rate, mode of delivery or birth weight between groups. Conclusion: Perinatal outcomes of pregnancies after MBS may be significantly influenced by PC BMI. The benefits of MBS-induced weight loss on obesity-associated adverse pregnancy outcomes can be maximized if the obesity status can be reverted before pregnancy. abstract_id: PUBMED:25215473 Body mass index as an influencing factor for outcome of trabectome surgery? Background: The aim of this study was to investigate a possible influence of body mass index (BMI) on the outcome of trabectome surgery.
Methods: 131 eyes with primary open angle glaucoma, myopia-associated glaucoma and pseudoexfoliation glaucoma were included in this retrospective study. The data were extracted from the Freiburg trabectome database from June 2009 to April 2013. We fitted a Cox proportional hazards model in order to assess the influence of the BMI on trabectome outcome. Results: The absolute success after trabectome surgery (20% pressure reduction without anti-glaucomatous medication) was statistically significantly worse in the group with BMI > 25 kg/m2 (p = 0.047). No statistically significant effect was observed for relative success or the rate of re-operation. Conclusion: In our patient cohort of 131 eyes, a high BMI was associated with reduced success when absolute success is required. No difference is seen if additional anti-glaucomatous medication is acceptable (relative success). Answer: The concept of the "obesity paradox" suggests that obesity may confer a protective effect in certain populations, such as those undergoing surgery. However, the evidence is mixed and varies depending on the type of surgery and the outcomes measured. In the context of cardiac surgery, one study found that while obesity increases the risk for perioperative myocardial infarction and septicaemia, it does not influence in-hospital mortality. Overweight patients showed better 1-year survival than normal weight patients, but the study concluded that the results do not support any protective effect of obesity in patients undergoing cardiac surgery (PUBMED:25781994). For other types of surgeries, the influence of BMI on outcomes also varies. In pancreaticoduodenectomy, high BMI was an independent risk factor for clinically relevant postoperative pancreatic fistula, but the surgery was still considered safe and feasible for patients with different BMIs (PUBMED:33181060). In lung cancer surgery, obesity was associated with favorable effects on in-hospital outcomes and long-term survival, suggesting that the 'obesity paradox' might exist in this context (PUBMED:28040677). In metatarsalgia surgery, obesity did not negatively influence functional outcomes after surgery (PUBMED:30076443). In colorectal cancer surgery, the obese group had the fewest harvested lymph nodes, and the underweight group had the poorest overall and cancer-specific survivals (PUBMED:28928331). In carotid endarterectomy, being overweight or obese was not related to the occurrence of late adverse outcomes (PUBMED:36768056). In endoscopic pituitary surgery, obese patients had a higher overall complication rate, specifically medical complications like pneumonia, and prolonged operating times (PUBMED:30033742). In proximal femur fractures, increased BMI was associated with longer operation time and length of hospitalization, but not with postoperative complications or mortality (PUBMED:32630619). In pregnancies after metabolic and bariatric surgery, a preconception BMI ≥ 30 kg/m2 was associated with higher neonatal intensive care unit admissions compared to a preconception BMI < 30 kg/m2 (PUBMED:38402035). Finally, in trabectome surgery, a higher BMI was associated with reduced success when absolute success was required (PUBMED:25215473). In summary, the evidence for an obesity paradox is not consistent across different types of surgeries.
While some studies suggest a protective effect of higher BMI in certain surgical populations, such as lung cancer surgery (PUBMED:28040677), others do not support this concept. For instance, in cardiac surgery, the protective effect of obesity is not supported, as obesity increases the risk for perioperative complications without influencing in-hospital mortality, and the better 1-year survival in overweight patients does not confirm a protective effect (PUBMED:25781994). Similarly, in other surgical contexts such as pancreaticoduodenectomy (PUBMED:33181060), colorectal cancer surgery (PUBMED:28928331), and endoscopic pituitary surgery (PUBMED:30033742), higher BMI is associated with certain negative outcomes or complications. In the case of proximal femur fractures, while higher BMI is associated with longer operation times and hospital stays, it does not impact mortality or postoperative complications (PUBMED:32630619). In the specific context of pregnancy outcomes after metabolic and bariatric surgery, a higher preconception BMI is associated with higher neonatal intensive care unit admissions (PUBMED:38402035). Lastly, in trabectome surgery, a higher BMI was associated with reduced success when absolute success was required (PUBMED:25215473). Overall, the existence of an obesity paradox remains controversial and seems to be surgery-specific. It is clear that BMI can influence surgical outcomes, but whether this influence is protective or detrimental may depend on a variety of factors, including the type of surgery, the patient population, and the specific outcomes being measured.
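Many of the studies summarized above report adjusted odds ratios from multivariate logistic regression (for example, OR 2.253 for high BMI and CR-POPF in PUBMED:33181060). As a reminder of where such numbers come from: the odds ratio for a predictor is exp(beta) of its fitted coefficient. Below is a minimal sketch on simulated data; 'high_bmi' and 'age' are placeholder variables, not the studies' actual covariates.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
high_bmi = rng.integers(0, 2, n)        # binary exposure (placeholder)
age = rng.normal(60, 10, n)             # continuous confounder (placeholder)

# Simulate an outcome with a true log-odds effect of 0.8 for high BMI
logit = -2.0 + 0.8 * high_bmi + 0.02 * (age - 60)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([high_bmi, age])
model = LogisticRegression(C=1e6).fit(X, y)  # large C ~ unpenalized fit
print(f"adjusted OR for high BMI ~ {np.exp(model.coef_[0][0]):.2f}")  # ~ exp(0.8) ~ 2.2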
Instruction: Can a biomarker-based scoring system predict pathologic complete response after preoperative chemoradiotherapy for rectal cancer? Abstracts: abstract_id: PUBMED:27824703 Biomarker-Based Scoring System for Prediction of Tumor Response After Preoperative Chemoradiotherapy in Rectal Cancer by Reverse Transcriptase Polymerase Chain Reaction Analysis. Background: Numerous molecular markers have been investigated to predict tumor response after preoperative chemoradiotherapy for rectal cancer. Objective: This study aimed to evaluate the value of biomarkers for the prediction of tumor response after preoperative chemoradiotherapy. Design & Setting: Tumor specimens were collected prospectively from 80 patients with rectal cancer who underwent curative resection at 8 weeks after completing preoperative chemoradiotherapy. Main Outcome Measures: With the use of reverse transcriptase polymerase chain reaction analysis, mRNA expression levels of 7 candidate biomarkers (p53, p21, Ki-67, VEGF, CD133, CD24, and CD44) were evaluated from fresh tumor samples collected before preoperative chemoradiotherapy. The correlation between biomarker expression levels and the pathologic response was assessed based on histopathological staging (pTNM) and tumor regression grade. Results: The mRNA expression levels of 4 biomarkers (p53, p21, Ki67, and CD133) significantly correlated with tumor regression grade response and pathologic complete response. Patients showing low expression of p53 and/or high expression of p21, Ki67, and CD133 exhibited a significantly greater tumor regression grade response and pathologic complete response rate. A scoring system devised so that 1 point was given for each biomarker whose expression level correlated with pathologic complete response (score range: 0-4) showed that 9 of 62 patients with scores of 0 to 2 achieved pathologic complete response, whereas 15 of 18 patients with scores of 3 to 4 achieved pathologic complete response (14.5% vs 83.3%, p < 0.001). For prediction of pathologic complete response, the scoring system showed 62.5% sensitivity, 94.6% specificity, an 83.3% positive predictive value, and an 85.5% negative predictive value. Limitations: The small number of patients limits reproducibility and the ability to provide quantitative information. In addition, this study lacks test and validation sets. Conclusions: The pretreatment mRNA expression levels of 4 biomarkers correlated with pathologic tumor response after preoperative chemoradiotherapy in rectal cancer. Furthermore, the scoring system combining values of biomarker expression might have predictive power with high positive and negative predictive values. abstract_id: PUBMED:24819099 Can a biomarker-based scoring system predict pathologic complete response after preoperative chemoradiotherapy for rectal cancer? Background: Numerous molecular markers have been investigated as potential predictors of tumor responses to preoperative chemoradiotherapy (preCRT) for rectal cancer. Objective: To develop a system in which biomarkers are used to predict the likelihood of a pathologic complete response (pCR) to preCRT. Design & Setting: This is a retrospective analysis of tumor specimens collected prior to preCRT from 81 patients who underwent curative resection for primary rectal adenocarcinoma between June 2008 and February 2012.
Main Outcome Measures: Using tissue microarrays and immunohistochemistry, expression levels of twelve candidate biomarkers (p53, p21, Bcl2, Bax, EGFR, Cox-2, MLH-1, MSH-2, Ku70, VEGF, TS, Ki-67) were evaluated in paraffin-embedded tumor samples collected before preCRT. The correlation between biomarker expression levels and the pathologic response to preCRT was assessed based on histopathological staging (pTNM) and tumor regression grade (TRG). Results: Expression levels of 4 biomarkers (p53, VEGF, p21, Ki67) correlated with pCR. Patients showing low expression of p53 and/or high expression of VEGF, p21, and Ki67 exhibited a significantly greater pCR rate. A scoring system devised so that one point was given for each biomarker whose expression level correlated with pCR (score range: 0-4) showed that 1 of 26 patients with scores of 0 to 1 achieved pCR, whereas 26 of 55 patients with scores of 2 to 4 achieved pCR (3.8% vs. 47.3%, p < 0.001). For prediction of pCR, the scoring system showed 96.3% sensitivity, 46.3% specificity, a 47.3% positive predictive value, and a 96.2% negative predictive value. Limitations: Immunohistochemistry has limitations related to reproducibility and the ability to provide quantitative information. In addition, this study lacks test and validation sets. Conclusions: Expression levels of 4 biomarkers correlated with pCR after preCRT for rectal cancer. A scoring system based on levels of biomarker expression showed good sensitivity and negative predictive value for pCR. abstract_id: PUBMED:22802748 Preoperative concurrent chemoradiotherapy for locally advanced rectal cancer: treatment outcomes and analysis of prognostic factors. Purpose: This study was designed to investigate the long-term oncologic outcomes for locally advanced rectal cancer patients after treatment with preoperative concurrent chemoradiotherapy followed by total mesorectal excision, and to identify prognostic factors that affect survival and pathologic response. Materials And Methods: From June 1996 to June 2009, 135 patients with locally advanced rectal cancer were treated with preoperative concurrent chemoradiotherapy followed by total mesorectal excision at Kyung Hee University Hospital. Patient data were retrospectively collected and analyzed in order to determine the treatment outcomes and identify prognostic factors for survival. Results: The median follow-up time was 50 months (range, 4.5 to 157.8 months). After preoperative chemoradiotherapy, sphincter preservation surgery was accomplished in 67.4% of all patients. A complete pathologic response was achieved in 16% of patients. The estimated 5- and 8-year overall survival, loco-regional recurrence-free survival, and distant metastasis-free survival rates for all patients were 82.7% and 75.7%, 76.8% and 71.9%, and 67.9% and 63.3%, respectively. The estimated 5- and 8-year overall survival, loco-regional recurrence-free survival, and distant metastasis-free survival rates for pathologic complete responders were 100% and 100%, 100% and 88.9%, and 95.5% and 95.5%, respectively. In the multivariate analysis, pathologic complete response was significantly associated with overall survival. The predictive factor for pathologic complete response was pretreatment clinical stage. Conclusion: Preoperative chemoradiotherapy for locally advanced rectal cancer resulted in a high rate of overall survival, sphincter preservation, down-staging, and pathologic complete response. The patients achieving pathologic complete response had very favorable outcomes.
Pathologic complete response was a significant prognostic factor for overall survival, and the significant predictive factor for pathologic complete response was pretreatment clinical stage. abstract_id: PUBMED:34912724 A Novel Scoring System for Response of Preoperative Chemoradiotherapy in Locally Advanced Rectal Cancer Using Early-Treatment Blood Features Derived From Machine Learning. Background: Preoperative chemoradiotherapy (CRT) is a standard treatment for locally advanced rectal cancer (LARC). However, individual responses to preoperative CRT vary from patient to patient. The aim of this study is to develop a scoring system for the response of preoperative CRT in LARC using blood features derived from machine learning. Methods: Patients who underwent total mesorectal excision after preoperative CRT were included in this study. The performance of machine learning models using blood features before CRT (pre-CRT) and from 1 to 2 weeks after CRT (early-CRT) was evaluated. Based on the best model, important features were selected. The scoring system was developed from the selected model and features. The performance of the new scoring system was compared with those of systemic inflammatory indicators: neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio, lymphocyte-to-monocyte ratio, and the prognostic nutritional index. Results: The models using early-CRT blood features had better performances than those using pre-CRT blood features. Based on the ridge regression model, which showed the best performance among the machine learning models (AUROC 0.6322 and AUPRC 0.5965), a novel scoring system for the response of preoperative CRT, named Response Prediction Score (RPS), was developed. The RPS system showed higher predictive power (AUROC 0.6747) than single blood features and systemic inflammatory indicators and clearly stratified tumor regression grade and overall downstaging. Conclusion: We discovered that we can more accurately predict CRT response by using early-treatment blood data. With larger data, we can develop a more accurate and reliable indicator that can be used in real daily practice. In the future, we urge the collection of early-treatment blood data and pre-treatment blood data. abstract_id: PUBMED:34048192 Predictive Factors for Pathologic Complete Response Following Neoadjuvant Chemoradiotherapy for Rectal Cancer. Background: An accurate assessment of potential pathologic complete response (pCR) following neoadjuvant chemoradiotherapy (NCRT) is important for the appropriate treatment of rectal cancer. However, the factors that predict the response to neoadjuvant chemoradiotherapy have not been well defined. Therefore, this study analyzed predictive factors for the development of pCR after neoadjuvant chemoradiation for rectal cancer. Methods: From January 2008 to January 2018, a total of 432 consecutive patients from a single institution who underwent long-course neoadjuvant chemoradiotherapy were reviewed in this study. The clinicopathological features were analyzed to identify predictive factors for pathologic complete response in rectal cancer after neoadjuvant chemoradiation. Results: The rate of pathologic complete response in rectal cancer after neoadjuvant chemoradiation was 20.8%; patients were divided into the pCR and non-pCR groups. The two groups were well balanced in terms of age, gender, body mass index, ASA score, tumor stage, tumor differentiation, tumor location, surgical procedure, chemotherapy regimen and radiation dose.
The multivariate analysis revealed that a pretreatment carcinoembryonic antigen (CEA) level of ≤5 ng/mL and an interval of ≥8 weeks between the completion of chemoradiation and surgical resection were independent predictive factors for an increased rate of pCR. Conclusions: Pretreatment carcinoembryonic antigen (CEA) level of ≤5 ng/mL and an interval of ≥8 weeks between the completion of chemoradiation and surgical resection are predictive factors for pathologic complete response in rectal cancer after neoadjuvant chemoradiation. Using these predictive factors, we can predict the prognosis of patients and develop adaptive treatment strategies. A wait-and-see policy might be possible in highly selective cases. abstract_id: PUBMED:25779365 Sustaining Blood Lymphocyte Count during Preoperative Chemoradiotherapy as a Predictive Marker for Pathologic Complete Response in Locally Advanced Rectal Cancer. Purpose: The objective of this study was to explore the relationship between the circulating lymphocyte level during preoperative chemoradiotherapy (CRT) and pathologic complete response (pCR) in locally advanced rectal cancer. Materials And Methods: From May 2010 to May 2013, 52 patients treated with preoperative CRT followed by surgery were analysed. Patients received conventional fractionated radiotherapy (50-54 Gy) with fluorouracil-based chemotherapy. Surgical resection was performed at 4 to 8 weeks after the completion of preoperative CRT. Absolute blood lymphocyte counts and their relative percentage in total white blood cell counts were obtained from complete blood count tests performed prior to and after 4, 8, and 12 weeks of CRT. We analysed the association between achieving pCR and change in blood lymphocyte level during CRT, as well as clinical parameters. Results: Among 52 patients, 14 (26.9%) had evidence of pCR. Sustaining the blood lymphocyte count during CRT (lymphocyte count at 4 weeks/baseline lymphocyte count > 0.35; odds ratio, 8.33; p=0.02) and initial carcinoembryonic antigen < 4.4 ng/mL (odds ratio, 6.71; p=0.03) were significantly associated with pCR in multivariate analyses. Conclusion: Sustaining blood lymphocyte count during preoperative CRT was predictive for pCR in rectal cancer. Further studies are warranted to investigate the association between pathologic responses and circulating lymphocyte count with its subpopulation during preoperative CRT. abstract_id: PUBMED:30275218 SUVmax-based Parameters of FDG-PET/CT Reliably Predict Pathologic Complete Response After Preoperative Hyperthermo-chemoradiotherapy in Rectal Cancer. Background/aim: To determine the most reliable predictor for pathologic complete response (pCR) in patients who underwent preoperative chemoradiotherapy and regional hyperthermia (HCRT) for rectal cancer. Patients And Methods: Thirty-six patients were enrolled. The local control status of the patients was assessed using 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT), magnetic resonance imaging (MRI), and colonoscopy before and after HCRT. The relationships between various parameters of these clinical examinations and pCR were analyzed. Results: Ten (28%) patients achieved pCR. The accuracies of predicting pCR using FDG-PET/CT, MRI, and colonoscopy were 78%, 61%, and 75%, respectively. FDG-PET/CT was the only independent predictive modality for pCR (p=0.021).
The maximum standardized uptake value (SUVmax) and SUVmax normalized to liver uptake (SLR) after HCRT showed the highest sensitivity (90%), and the decreasing rate of SUVmax and SLR demonstrated the highest specificity (89%) for pCR. Conclusion: SUVmax-based parameters of FDG-PET/CT after HCRT were the most reliable predictors for pCR. abstract_id: PUBMED:34298695 Improvement in the Assessment of Response to Preoperative Chemoradiotherapy for Rectal Cancer Using Magnetic Resonance Imaging and a Multigene Biomarker. The response to preoperative chemoradiotherapy (PCRT) is correlated with oncologic outcomes in patients with locally advanced rectal cancer. Accurate prediction of PCRT response before surgery can provide crucial information to aid clinicians in further treatment planning. This study aimed to develop an evaluation tool incorporating a genetic biomarker and magnetic resonance imaging (MRI) to improve the assessment of response in post-CRT patients with locally advanced rectal cancer. A total of 198 patients who underwent PCRT followed by surgical resection for locally advanced rectal cancer between 2010 and 2016 were included in this study. Each patient's response prediction index (RPI) score, a multigene biomarker developed in our previous study, and magnetic resonance tumor regression grade (mrTRG) score were added to create a new predictive value for pathologic response after PCRT, called the combined radiation prediction value (cRPV). Based on the new value, 121 and 77 patients were predicted to be good and poor responders, respectively, showing significantly different cRPV values (p = 0.001). With an overall predictive accuracy of 84.8%, cRPV was superior to mrTRG and RPI for the prediction of pathologic chemoradiotherapy response (mrTRG, 69.2%; RPI, 77.3%). In multivariate analysis, cRPV was found to be the sole predictor of tumor response (odds ratio, 32.211; 95% confidence interval, 14.408-72.011; p = 0.001). With its good predictive value for final pathologic regression, cRPV may be a valuable tool for assessing the response to PCRT before surgery. abstract_id: PUBMED:30008877 Aneuploidy of chromosome 8 and mutation of circulating tumor cells predict pathologic complete response in the treatment of locally advanced rectal cancer. Identifying patients who may or may not achieve pathologic complete response (pathCR) allows for treatment with alternative approaches in the preoperative setting. The aim of the current study was to investigate whether aneuploidy of chromosome 8 and mutations of circulating tumor cells (CTCs) could predict the response of patients with rectal cancer to preoperative chemoradiotherapy. A total of 33 patients with locally advanced rectal cancer (cT3-T4 and/or cN+) treated with neoadjuvant chemoradiotherapy between September 2014 and March 2015 were recruited. Blood samples were collected from 33 patients with pre-chemoradiotherapy rectal cancer. It was demonstrated that ≥5 copies of chromosome 8 was associated with pathCR (univariate logistic regression, P=0.042). Of the 6 patients whose CTCs had <5 copies of chromosome 8, 3 achieved pathCR (3/6, 50%), while 3 of the 27 patients whose CTCs had ≥5 copies of chromosome 8 achieved pathCR (3/27, 11.1%; Chi-square test, P=0.0255). Of the 33 patients with mutations assessed, 8 significant nonsynonymous mutations in CTCs were identified as associated with pathCR (Chi-square test, P-values range, 0.0004-0.0298; mutations in ARID1A, HDAC1, APC, ERBB3, TP53, AMER1 and AR).
These results suggest that ≥5 copies of chromosome 8 and 8 nonsynonymous mutations in ARID1A, HDAC1, APC, ERBB3, TP53, AMER1, and AR in CTCs were associated with pathCR. This conclusion should be validated further in larger prospective studies and the long-term follow-up survival data of this study will also be reported in the future. abstract_id: PUBMED:30087709 Analysis of Clinical characteristics to predict pathologic complete response for patients with locally advanced rectal cancer treated with neoadjuvant chemoradiotherapy. To explore clinical characteristics which could be applied to predict pathologic complete response (pCR) for patients with locally advanced rectal cancer treated with neoadjuvant chemoradiotherapy (neo-CRT) and total mesorectal excision (TME). 297 patients with locally advanced rectal cancer (cT3-4 or cN+) who were treated with neo-CRT followed by TME were retrospectively reviewed. Clinical characteristics including age, gender, tumor distance from anus, serum CEA, hemoglobin levels before treatment and clinical TN stage were used to investigate the association with pCR after neo-CRT. Seventy-nine (26.6%) patients achieved pCR after neo-CRT. pCR was achieved in 42 (34.4%) patients in cT1-3 stage and 37 (21.1%) in cT4 stage. pCR rate was 36.4% and 16.4% for patients with pre-treatment serum CEA ≤5.33 ng/ml and >5.33 ng/ml, respectively. Uni- and multi-variate analyses revealed that pre-treatment serum CEA level ≤5.33 ng/ml and clinical T stage (i.e., cT1-3 versus cT4) were highly correlated with pCR (p < 0.05). Clinical T stage and pre-treatment serum CEA level were strongly associated with pCR for patients with locally advanced rectal cancer treated with neo-CRT followed by TME, which could be applied as clinical predictors for pCR. Answer: Yes, a biomarker-based scoring system can predict pathologic complete response (pCR) after preoperative chemoradiotherapy for rectal cancer. Several studies have developed and evaluated such scoring systems using different sets of biomarkers and methodologies. One study developed a scoring system based on the mRNA expression levels of four biomarkers (p53, p21, Ki67, and CD133) evaluated from fresh tumor samples collected before preoperative chemoradiotherapy. This scoring system showed that patients with higher scores had a significantly greater tumor regression grade response and pathologic complete response rate, with high positive and negative predictive values (PUBMED:27824703). Another study used a similar approach with a scoring system based on the expression levels of four biomarkers (p53, VEGF, p21, Ki67) assessed by immunohistochemistry. This system also demonstrated that patients with higher scores had a significantly greater pCR rate, with good sensitivity and negative predictive value (PUBMED:24819099). Additional research has identified other predictive factors for pCR, such as pretreatment clinical stage (PUBMED:22802748), early-treatment blood features derived from machine learning (PUBMED:34912724), pretreatment carcinoembryonic antigen (CEA) level, and the interval between the completion of chemoradiation and surgical resection (PUBMED:34048192). Sustaining blood lymphocyte count during preoperative chemoradiotherapy has also been associated with pCR (PUBMED:25779365). Furthermore, SUVmax-based parameters of FDG-PET/CT after preoperative hyperthermo-chemoradiotherapy were found to be reliable predictors for pCR (PUBMED:30275218).
A combined approach using a multigene biomarker and magnetic resonance imaging (MRI) has been proposed to improve the assessment of response in post-CRT patients (PUBMED:34298695). Additionally, aneuploidy of chromosome 8 and mutations of circulating tumor cells (CTCs) have been suggested as predictors of response to preoperative chemoradiotherapy (PUBMED:30008877). Clinical characteristics such as pre-treatment serum CEA level and clinical T stage have also been associated with pCR (PUBMED:30087709). Three short worked examples follow, illustrating the diagnostic-metric arithmetic, odds-ratio confidence intervals, and model-based scoring reported in these abstracts.
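The diagnostic metrics quoted for the four-biomarker score (PUBMED:24819099) follow directly from the reported counts. The short Python check below re-derives them, assuming "test positive" means a score of 2-4; all counts are taken from the abstract.

```python
# Re-derive the metrics reported for the 4-biomarker scoring system
# (PUBMED:24819099): 1/26 patients with scores 0-1 achieved pCR,
# 26/55 patients with scores 2-4 achieved pCR.

tp = 26            # score 2-4 and pCR
fp = 55 - 26       # score 2-4, no pCR
fn = 1             # score 0-1 but pCR
tn = 26 - 1        # score 0-1, no pCR

sensitivity = tp / (tp + fn)   # 26/27
specificity = tn / (tn + fp)   # 25/54
ppv = tp / (tp + fp)           # 26/55
npv = tn / (tn + fn)           # 25/26

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")
# -> sensitivity 96.3%, specificity 46.3%, PPV 47.3%, NPV 96.2%,
#    matching the abstract exactly.
```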
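Several abstracts above report odds ratios with 95% confidence intervals (e.g. OR 8.33 for a sustained lymphocyte ratio, PUBMED:25779365). A minimal sketch of the standard Woolf (log) interval is given below. The 2x2 cell counts are purely illustrative: only the marginals (14 pCR among 52 patients) match the study, the cell split is invented, and a published adjusted OR comes from a multivariate model rather than a single table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-scale) 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical split of the 52 patients (14 pCR / 38 non-pCR) by whether
# the lymphocyte ratio was sustained -- illustrative numbers only.
or_, lo, hi = odds_ratio_ci(a=11, b=17, c=3, d=21)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```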
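PUBMED:34912724 describes turning a ridge-regression model over early-CRT blood features into a response score evaluated by AUROC. The sketch below is a stand-in under stated assumptions: the features and labels are randomly generated toys, and a ridge-penalised (L2) logistic model from scikit-learn is used in place of the authors' exact pipeline, which is not public here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical early-CRT blood features (counts, ratios, etc.) and toy
# responder labels -- not the study's data.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)

# L2-penalised logistic model; its out-of-fold probability is read as a
# continuous "response score" and summarised by AUROC.
model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", C=1.0))
score = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("AUROC:", roc_auc_score(y, score))   # ~0.5 on random toy data
```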
Instruction: Is histological examination necessary when excising a pilonidal cyst? Abstracts: abstract_id: PUBMED:25794794 Is histological examination necessary when excising a pilonidal cyst? Background: Pilonidal disease is a common inflammatory condition mostly affecting young males. Malignant degeneration of a pilonidal cyst is rare, with incidence estimated at 0.1%. The most common type is squamous cell carcinoma, and the treatment of choice remains en bloc resection of the lesion. Case Report: We present the case of a patient with locally advanced squamous cell carcinoma arising in a pilonidal cyst, due to misdiagnosis of the disease during his first treatment. Conclusions: Detailed histological examination of all excised pilonidal cyst lesions is essential and any histological suspicion should prompt a wider excision. abstract_id: PUBMED:33369840 Pilonidal sinus: is histological examination necessary? Background: Pilonidal sinus is a common pathology occurring in a number of sites including the gluteal cleft between the buttocks. Surgery is the principal mode of treatment with a variety of techniques proposed to cure the disease and prevent recurrence. The rate of malignant degeneration is estimated at 0.1% in patients with chronic pilonidal disease. A case series study of pilonidal sinus found that most patients with malignancy were over the age of 50 years and had had chronic pilonidal disease for several years. This study aimed to review the value of histological analysis of pilonidal sinus specimens. Methods: A retrospective observational study was conducted of all patients who had undergone pilonidal sinus excision by the Department of General Surgery at Counties Manukau District Health Board in the past 10 years. Results: This study reviewed 1324 admissions with pilonidal disease requiring surgical management. There were 325 pilonidal resections with 320 histological specimens analysed. Pilonidal sinus was detected in 312 specimens; eight others were benign, and there were no specimens with malignancy. Conclusion: No cases of malignancy were found in any of the pilonidal specimens that were sent for histological review. Routine histological examination of pilonidal sinus specimens appears to be unnecessary; however, it should be considered in patients over the age of 50 years. abstract_id: PUBMED:33517687 Is It Necessary to Perform a Histopathological Examination of Pilonidal Sinus Excision Material? Background: Although pilonidal sinus disease is common, development of associated malignancy is very rare. After surgical treatment, most surgeons send the excision material for a histopathological examination. The aim of this study was to examine whether it is necessary to routinely send the pilonidal sinus surgical excision material for this examination. Method: The data of 3146 patients were retrospectively screened, and 2486 patients with available histopathological reports of the excision material were included in the study. Results: Of the 2486 patients included in the study, 2165 were men and 321 were women, and 94.7% of the patients were under the age of 50 years while 5.3% were 50 years or above. The rate of patients who underwent surgery due to recurrence was 1.2%. No malignancy was detected in any patient after the histopathological examination. Discussion: In this study, none of the pathology results was reported as malignant.
This raises the question of whether we should routinely send the surgical excision material for a histopathological examination. abstract_id: PUBMED:35310433 Histopathological Examination and Endoscopic Sinusectomy: Is It Possible? Introduction: Although carcinomatous degeneration is a rare occurrence, some authors support the need for a histopathological examination after pilonidal cyst excision. Today, minimally invasive techniques are widely used for the treatment of pilonidal sinus disease but, as opposed to standard procedures, these techniques do not allow a histopathological examination to be performed because of the absence of a specimen. The aim of this two-institution study is to evaluate whether histopathological examination of the pilonidal sinus excision material can be successfully performed after an endoscopic ablation of the cyst. Materials And Methods: We identified all consecutive patients from January 2021 to September 2021 with diagnosis of pilonidal sinus disease who underwent Video Assisted Ablation of Pilonidal Sinus (VAAPS) followed by histopathological examination. Results: A total of 45 patients were included in the study. All patients were Caucasians and aged below 50 years. Nine of them underwent surgery due to recurrence of PSD. No evidence of malignancy was detected in the histopathological examination of the pilonidal sinus sampling material. Discussion: We were able to send pilonidal sinus sampling material for a histopathological examination in all patients who underwent minimally invasive technique for the treatment of pilonidal sinus disease. No evidence of malignancy was found in any of the 45 samples. Our findings prove that minimally invasive ablation of pilonidal sinus does not preclude histopathological examination of the cysts. abstract_id: PUBMED:33999526 Re: Pilonidal sinus: is histological examination necessary? - the age of patients with pilonidal carcinoma is overestimated. N/A abstract_id: PUBMED:33999527 Re: Pilonidal sinus: is histological examination necessary? - pilonidal sinus carcinoma is largely underreported and underpublished. N/A abstract_id: PUBMED:35306783 Radiological, Clinical, and Histological Findings in the Treatment of Pilonidal Sinus with Phenol Injection. Objective: Sacrococcygeal pilonidal disease is a chronic discharging wound that causes pain and loss of quality of life. Phenol application is an outpatient procedure with low complications and low recurrence rates. We evaluated the radiological, histological, and clinical results of phenol application. Methods: A total of 44 consecutive patients with sacrococcygeal pilonidal disease underwent phenol application in Kocaeli University Faculty of Medicine, General Surgery Clinic between December 2015 and March 2017. Demographics, complaints, symptom duration, and the number of sinuses were recorded. Patients were examined using ultrasonography (USG) before surgery and two months after surgery. Four patients who wanted rhomboid excision and Limberg flap procedure underwent preoperative local phenol application and were excluded from the study. The excised tissues were used for pathological evaluation. Results: Purulent discharge was the leading symptom. All of the patients had hair in the preoperative USG evaluation, whereas five patients still had hair in the sinuses in the postoperative USG examination. Four of these cases had recurrences on a 2-month follow-up. After phenol application, a 10% volume decrease was observed.
Pathological examination was performed in the four cases that received phenol application before excision; in the pathological material, the sinuses looked sclerotic and fibrotic. Conclusions: Our study revealed that a single phenol application achieved 90% success. However, no significant difference was observed in the cavity volume after phenol injection. abstract_id: PUBMED:29146394 Is histological analysis of pilonidal sinus useful? Retrospective analysis of 731 resections. The pilonidal sinus (PS) is a common pathology. The treatment is a surgical excision. Many surgeons continue to systematically send the PS specimen for histological analysis. The objective of our retrospective study was to evaluate the interest of this systematic histological analysis. The retrospective analysis of patients undergoing surgery was performed between 1 January 2006 and 31 December 2014. The primary outcome was the presence of malignant disease on the surgical specimen. Secondary outcomes were the wound healing time and the rate of recurrence. Seven hundred and thirty-one patients were analyzed. There was no malignant lesion. For 323 patients, the histological analysis did not describe the resection margins. Two hundred and eighty-five patients had complete resection and 38 had incomplete resection. Twenty-four patients had recurrence (7%). There was no significant difference between those who had complete and incomplete resection. The healing time was 61 days. Our study raises the question about the value of systematic histological analysis of the PS specimen. abstract_id: PUBMED:15570850 A case of subcutaneous myxopapillary ependymoma presenting as a pilonidal sinus. We describe a rare neurogenic tumour presenting as a pilonidal sinus. This case illustrates the pitfalls associated with the management of a commonly occurring surgical condition. Pilonidal sinuses are often the province of trainee surgeons and while ependymomas are rare, this example emphasises the need to send the pilonidal sinus tract for histological examination. abstract_id: PUBMED:23153466 Pilonidal disease is a benign condition. Malignant transformation in pilonidal disease is rare and never reported in Denmark. There are 75 cases in the literature, mostly squamous cell carcinoma. The cost-benefit of routine histological examination of specimens from pilonidal surgery is questionable, as malignancy is extremely rare. Due to increased incidence of cancer, specimens from patients with many years of pilonidal disease, known immunodeficiencies and/or human papillomavirus infection or abnormal clinical presentation, should always be investigated. Answer: The necessity of histological examination when excising a pilonidal cyst is debated in the literature. Some sources argue that detailed histological examination of all excised pilonidal cyst lesions is essential due to the rare but possible occurrence of malignant degeneration, such as squamous cell carcinoma, in a pilonidal cyst. This is particularly important if there is any histological suspicion, prompting a wider excision (PUBMED:25794794). However, other studies suggest that routine histological examination may not be necessary for every case. For instance, a retrospective observational study found no cases of malignancy in 320 histological specimens analyzed from pilonidal resections, leading to the conclusion that routine histological examination appears to be unnecessary, but should be considered in patients over the age of 50 years (PUBMED:33369840).
Another study that screened the data of 3146 patients and included 2486 patients with available histopathological reports found no malignancy in any patient after the histopathological examination. This study questioned the necessity of routinely sending surgical excision material for histopathological examination (PUBMED:33517687). Furthermore, a two-institution study showed that even when using minimally invasive techniques for the treatment of pilonidal sinus disease, which do not typically allow for histopathological examination due to the absence of a specimen, it was still possible to perform histopathological examination, and no evidence of malignancy was found in any of the 45 samples (PUBMED:35310433). On the other hand, a retrospective analysis of 731 resections found no malignant lesions, raising the question about the value of systematic histological analysis of the pilonidal sinus specimen (PUBMED:29146394). Additionally, it is noted that malignant transformation in pilonidal disease is extremely rare and has never been reported in Denmark, suggesting that the cost-benefit of routine histological examination of specimens from pilonidal surgery is questionable (PUBMED:23153466). In summary, while there is a potential risk of malignancy in pilonidal cysts, the rarity of such cases has led some studies to question the necessity of routine histological examination, especially in younger patients without suspicious clinical presentations. However, it is still recommended in certain cases, such as in older patients or those with atypical presentations. The short probability calculation below illustrates why series of this size can plausibly contain zero malignancies even at the quoted 0.1% rate.
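A two-line binomial calculation makes the point concrete. Note the assumption that the quoted 0.1% figure can be read as a per-specimen probability; in the sources it refers to malignant degeneration in chronic disease, so this reading overstates the per-excision risk and the true probabilities of seeing zero cases are even higher.

```python
# If malignant degeneration occurred in ~0.1% of excised specimens
# (an upper-bound reading of the quoted 0.1% figure), the chance of
# observing zero malignancies in a series of n specimens is (1 - p)^n.
p = 0.001
for n in (320, 731, 2486):
    print(n, round((1 - p) ** n, 3))
# 320 -> 0.726, 731 -> 0.481, 2486 -> 0.083
```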
Instruction: Infant and perinatal outcomes of triplet pregnancy in Auckland: better than expected? Abstracts: abstract_id: PUBMED:31829154 Perinatal outcomes of monochorionic diamniotic triplet pregnancies: a case series. Background: Triplet pregnancies are associated with higher fetal morbidity and mortality rates as well as life-threatening maternal complications. Monochorionic diamniotic (MCDA) triplet pregnancies are very rare compared to other types of triplet pregnancies. Case Presentation: We report three cases of MCDA triplet pregnancies between January 2012 and December 2017. Two of these MCDA triplet pregnancies received regular and intensive prenatal care, were diagnosed by ultrasonography during the first trimester or early second trimester, and had good perinatal outcomes. The case with irregular perinatal care had poor outcomes, and the MCDA triplet pregnancy was diagnosed intrapartum. Conclusions: The possibility of continuing an MCDA triplet pregnancy should be recognized. Early diagnosis, regular antenatal care, close prenatal monitoring, and sufficient communication are recommended to obtain better perinatal outcomes in MCDA triplet pregnancies. abstract_id: PUBMED:29807452 Perinatal outcomes according to the mode of delivery in women with a triplet pregnancy in The Netherlands. Objective: In women with a triplet pregnancy, there is debate on the preferred mode of delivery. We performed a nationwide cohort study to assess the impact of mode of delivery on perinatal outcome in women with a triplet pregnancy. Methods: Nationwide cohort study on women with a triplet pregnancy who delivered between 26 + 0 and 40 + 0 weeks of gestation in the years 1999-2008. We compared perinatal outcomes according to the intended mode of delivery and the actual mode of delivery. Outcome measures were perinatal mortality and neonatal morbidity. Perinatal outcomes were analyzed taking into account the dependency between the children of the same triplet pregnancy ("any mortality" and "any morbidity") and were also analyzed separately per child. Results: We identified 386 women with a triplet pregnancy in the study period. Mean gestational age at delivery was 33.1 weeks (SD 2.5 weeks; range 26.0-40.0 weeks). Perinatal mortality was 2.3% for women with a planned caesarean section and 2.4% in women with a planned vaginal delivery (aOR 0.37; 95% confidence interval (CI) 0.09-1.5) and neonatal morbidity was 26.0% versus 36.0% (aOR 0.88; 95% CI 0.51-1.4), respectively. In the subgroup analyses according to gestational age and in the analysis of perinatal outcomes per child separately, there were also no large differences in perinatal outcomes. The same applied for perinatal outcomes according to the actual mode of delivery. Conclusion: In this large cohort study among women with a triplet pregnancy, caesarean delivery is not associated with reduced perinatal mortality and morbidity. abstract_id: PUBMED:35407479 Maternal, Perinatal and Neonatal Outcomes of Triplet Pregnancies According to Chorionicity: A Systematic Review of the Literature and Meta-Analysis. Triplet pregnancies are rare events that affect approximately 93 in 100,000 deliveries in the world, especially due to the increased use of assisted reproductive techniques and older maternal age. Triplet pregnancies are associated with a higher risk of fetal and maternal morbidity and mortality compared to twins and singletons.
Chorionicity has been proposed as a major determinant of perinatal and maternal outcomes in triplet pregnancies, although further evidence is needed to clarify the extent and real influence of this factor. Thus, the aim of this study was to conduct a systematic review of the literature and a meta-analysis of the maternal and perinatal outcomes of triplet pregnancies, evaluating how chorionicity may influence these results. A total of 46 studies with 43,653 triplet pregnancies and 128,145 live births were included. Among the main results of our study, we found a broad spectrum of fetal and maternal complications, especially in the group of monochorionic and dichorionic pregnancies. Risks of admission to NICU, respiratory distress, sepsis, necrotizing enterocolitis, and perinatal and intrauterine mortality were all found to be higher in non-TCTA pregnancies than in TCTA pregnancies. To date, our meta-analysis includes the largest population sample and number of studies conducted in this field, evaluating a wide variety of outcome measures. The heterogeneity and retrospective design of the studies included in our research represent the main limitations of this review. More evidence is needed to fully assess outcome measures that could not be studied in this review due to scarcity of publications or insufficient sample size. abstract_id: PUBMED:26608832 Perinatal and obstetric outcomes of dichorionic vs trichorionic triplet pregnancies. Background: Clinical management and outcome of multiple gestation can be affected by chorionicity. In triplet pregnancies, fetal death has been associated with dichorionic (DC) and monochorionic placentation. Studies evaluating triplet pregnancy outcomes in relation to chorionicity have been few and may not reflect contemporary antenatal and neonatal care. Objective: The objective of this study was to compare obstetric and perinatal outcomes in DC and trichorionic (TC) triplet pregnancies. Study Design: We performed a retrospective cohort study of triplet pregnancies that delivered at ≥20 weeks' gestation at 2 Chicago area hospitals from January 1999 through December 2010. Chorionicity was determined by pathology specimen. Maternal and infant charts were reviewed for obstetric and perinatal outcomes. Results: The study population included 159 pregnancies (477 neonates) of which 108 were TC (67.9%) and 51 were DC (32.1%). Over 94% of mothers in this study had all 3 infants survive to discharge regardless of chorionicity. No difference was found in perinatal mortality rate between DC and TC triplets (3.3% vs 4.6%; P = .3). DC triplets were significantly more likely to be very low birthweight (41.8% vs 22.2%; odds ratio, 2.2; 95% confidence interval, 1.2-4.2; P = .02) and to deliver at <30 weeks (25.5% vs 8.3%; odds ratio, 6.1; 95% confidence interval, 1.9-19.4; P = .002) compared to TC triplets. Criteria for twin-twin transfusion syndrome (TTTS) were present in 3 DC triplet pregnancies (5.9%). Neonates in pregnancies complicated by TTTS were less likely to survive 28 days as compared to neonates from DC pregnancies that were not affected by TTTS (P = .02) or TC neonates (P = .02). Neonatal survival was similar in DC pregnancies not affected by TTTS and TC pregnancies (98.6% and 96.6%; P = .7). Conclusion: Although perinatal mortality did not correlate with chorionicity, DC pregnancies were more likely to deliver <30 weeks' gestational age and have very low birthweight neonates.
Neonatal mortality appears to be mediated by the presence or absence of TTTS as 28-day survival was worse in DC pregnancies complicated by TTTS, but similar between DC pregnancies not affected by TTTS and TC pregnancies. abstract_id: PUBMED:28367358 Perinatal and Neonatal Outcomes of Triplet Gestations Based on Chorionicity. Objective: To compare perinatal and neonatal outcomes of dichorionic (DC) and monochorionic (MC) with trichorionic (TC) triplet gestations. Methods: A retrospective cohort study of DC + MC versus TC triplet gestations delivered at a tertiary care hospital from 2009 to 2015. The results include 42 sets of triplets (TC, n = 26; DC + MC, n = 16). Maternal demographics and pregnancy data were compared. Neonatal outcomes were assessed using composite morbidity and mortality. Results: Maternal baseline characteristics including age, mode of conception, race, parity, body mass index, and previous preterm delivery were statistically comparable. Comparison of prenatal management and complications yielded no significant differences in terms of presence of shortened cervix, cerclage placement, use of tocolytics, intrauterine growth restriction, premature rupture of membranes, pregnancy-induced hypertension, or gestational diabetes. However, evaluation of composite morbidity and mortality (RDS, IVH, NEC, IUGR, and death) illustrated that all infants born from DC + MC triplet gestations suffered some morbidity or mortality compared with TC pregnancies (p < 0.01). Conclusion: DC + MC triplet gestations are at an increased risk of neonatal morbidity and mortality compared with TC triplet gestations. abstract_id: PUBMED:32932859 Economic Crisis Impact and Social Determinants of Perinatal Outcomes and Infant Mortality in Greece. (1) Background: To explore the effects of the 2008 economic crisis on maternal, perinatal and infant mortality in Greece and the socio-economic determinants associated with them; (2) Methods: The annual rates of stillbirth (SBR), perinatal mortality (PMR), infant mortality (IMR), neonatal mortality (NNMR), post-neonatal mortality (PNMR), low birth weight (LBW), and maternal mortality (MMR) were calculated for the years 2000-2016. Average Annual Percent Changes (AAPC) were calculated by the period before and after 2008. The expected rates of 2009-2016 and the observed-to-expected rate ratios (RR) were calculated. Correlation and multiple linear regression analyses were used to test the impact of socio-economic variables on health outcomes; (3) Results: A reversal of the downward trends of PNMR, IMR, and NNMR has been observed since 2009. All observed values of 2009-2016 were found significantly higher than the expected ones by 12-34%. All indicators except SBR were found negatively correlated with GDP and DHI. A positive correlation was found between IMR, NNMR, and LBW and long-term unemployment, and no association with public health expenditure; (4) Conclusions: Economic crisis was associated with remarkable adverse effects on perinatal outcomes and infant mortality, mainly determined by long-term unemployment and income reduction. The findings stress a need for interventions to protect maternity and child health during crises. abstract_id: PUBMED:25926295 Triplet pregnancy: is the mode of conception related to perinatal outcomes? Many triplets are conceived as a consequence of assisted reproductive technology (ART). Concerns have been raised that triplet pregnancies conceived by ART are more complicated than those conceived spontaneously.
The purpose of this study was to evaluate all triplet pregnancies managed over a 12-year period to determine if there were any differences in outcome based on the mode of conception. All triplet pregnancies between 1999 and 2011 that reached at least 20 weeks' gestation and that were managed at the Royal Women's Hospital (RWH), Melbourne, Victoria were identified. Maternal and neonatal outcomes were compared between ART-conceived and spontaneously conceived triplets. In the study period, 53 sets of triplets managed in our institution met the eligibility criteria. Twenty-five triplet sets were conceived by ART and 28 were conceived spontaneously. More ART conceptions resulted in trichorionic triamniotic (TCTA) triplets than did spontaneous conceptions (p = .015). There were no differences between ART and spontaneously conceived triplets for any of the maternal or neonatal complications studied. Trichorionic (TC) triplets delivered at a later gestation than other triplets: 32.1 (SD 2.9) versus 30.4 (SD 3.9) weeks (p = .08). TC triplets were significantly less likely to die than monochorionic (MC) or dichorionic (DC) triplets: 3/93 (3%) versus 13/66 (20%) (p = .025). In conclusion, triplets conceived by ART are more likely to have TCTA placentation and TCTA triplet sets had lower mortality rates than other triplet combinations. Outcomes for triplets conceived by ART were similar to those of triplets conceived spontaneously. abstract_id: PUBMED:26714679 Prediction of perinatal mortality in triplet pregnancies. Purpose: To investigate predictors of perinatal mortality in triplet pregnancies. Methods: Retrospective cohort managed in a tertiary teaching hospital (1998-2012) including all pregnancies with three live fetuses at the first ultrasound examination, performed after 11 weeks of gestation. Primary end-point was defined as the number of children alive at hospital discharge. Ordinal stepwise regression analysis examined the association with maternal age, parity, pregnancy chorionicity, gestational age at our first ultrasound evaluation, presence of maternal clinical, obstetrical and fetal complications and gestational age at delivery. Results: Sixty-seven triplet pregnancies were first seen at 18.5 ± 6.8 weeks, 33 (49.3 %) were trichorionic, obstetric complications occurred in 34 (50.7 %) and fetal complications were diagnosed in 17 (25.4 %). Perinatal mortality rate was 249 ‰ (95 % CI 189-317) and 138 (73 %) children were discharged alive from hospital (11 pregnancies with no survivors; single and double survival in ten cases each; all children alive in 36). Regression analysis showed that presence of fetal complications (OR 0.10, 95 % CI 0.03-0.36) and gestational age at delivery (OR 1.55, 95 % CI 1.31-1.85) are significant predictors of outcome (p < 0.001). Conclusions: Perinatal mortality in non-selected triplet pregnancies is high and is related to the presence of fetal complications and gestational age at delivery.
Methods: MEDLINE, EMBASE and CINAHL databases were searched in December 2017 for literature published in English describing outcomes of DCTA, TCTA and/or MCTA triplet pregnancies. Primary outcomes were intrauterine death (IUD), neonatal death, perinatal death (PND) and gestational age at birth. Secondary outcomes comprised respiratory, neurological and infectious morbidity, as well as a composite score of neonatal morbidity. Data regarding outcomes were extracted from the included studies. Random-effects meta-analysis was used to estimate the risk of mortality and morbidity and to compute the difference in gestational age at birth between TCTA and DCTA triplet pregnancies. Results: Nine studies (1373 triplet pregnancies, of which 1062 were TCTA, 261 DCTA and 50 MCTA) were included in the analysis. The risk of PND was higher in DCTA than in TCTA triplet pregnancies (odds ratio (OR), 3.3 (95% CI, 1.3-8.0)), mainly owing to the higher risk of IUD in DCTA triplet pregnancies (OR, 4.6 (95% CI, 1.8-11.7)). There was no difference in gestational age at birth between TCTA and DCTA triplets (mean difference, 1.1 weeks (95% CI, -0.3 to 2.5 weeks); I² = 85%; P = 0.12). Neurological morbidity occurred in 2.0% (95% CI, 1.1-3.3%) of TCTA and in 11.6% (95% CI, 1.1-40.0%) of DCTA triplets. Respiratory and infectious morbidity affected 28.3% (95% CI, 20.7-36.8%) and 4.2% (95% CI, 2.8-5.9%) of TCTA and 34.0% (95% CI, 21.5-47.7%) and 7.1% (95% CI, 2.7-13.3%) of DCTA triplets, respectively. The incidence of composite morbidity in TCTA and DCTA triplets was 29.6% (95% CI, 21.1-38.9%) and 34.0% (95% CI, 21.5-47.7%), respectively. When translating these figures into a risk analysis, the risk of neurological morbidity (OR, 5.4 (95% CI, 1.6-18.3)) was significantly higher in DCTA than in TCTA triplets, while there was no significant difference in the other morbidities explored. Only one study reported on outcomes of MCTA pregnancies, hence, no formal comparison with the other groups was performed. Conclusion: DCTA triplets are at higher risk of perinatal mortality and morbidity than are TCTA triplets. abstract_id: PUBMED:29883983 The Relationship of Assisted Reproductive Technology on Perinatal Outcomes in Triplet Gestations. Objective: To assess whether assisted reproductive technology (ART) is associated with increased risk of adverse perinatal outcomes in triplet gestations compared with spontaneous conception. Study Design: Secondary analysis of a multicenter randomized trial for the prevention of preterm birth in multiple gestations. Triplets delivered at ≥ 24 weeks were studied. The primary outcome was the rate of composite neonatal morbidity (CNM) that included one or more of the following: bronchopulmonary dysplasia, respiratory distress syndrome, necrotizing enterocolitis, culture proven sepsis, pneumonia, retinopathy of prematurity, intraventricular hemorrhage, periventricular leukomalacia, or perinatal death. Results: There were 381 triplets (127 women) of which 89 patients conceived via ART and 38 patients spontaneously. Women with ART were more likely to be older, Caucasian, married, nulliparous, have higher level of education, and develop pre-eclampsia. Spontaneously conceived triplets were more likely to deliver at an earlier gestation (31.2 ± 3.5 vs 32.8 ± 2.7 weeks) (p = 0.009) with a lower birth weight (p < 0.001).
After adjusting for confounders, no differences were noted in culture proven sepsis, perinatal death, CNM, respiratory distress syndrome, or Apgar score < 7 at 5 minutes. All remaining perinatal outcomes were similar. Conclusion: Triplets conceived by ART had similar perinatal outcomes compared with spontaneously conceived triplets. Answer: Based on the provided abstracts, it appears that the perinatal outcomes of triplet pregnancies can vary significantly depending on several factors, including the mode of conception, chorionicity, and the level of prenatal care received. Monochorionic diamniotic (MCDA) triplet pregnancies are very rare and associated with higher fetal morbidity and mortality rates. However, with early diagnosis, regular antenatal care, close prenatal monitoring, and sufficient communication, better perinatal outcomes can be achieved (PUBMED:31829154). The mode of delivery in triplet pregnancies does not seem to significantly impact perinatal mortality and morbidity. A large cohort study in the Netherlands found that caesarean delivery was not associated with reduced perinatal mortality and morbidity in women with triplet pregnancies (PUBMED:29807452). Chorionicity is a major determinant of perinatal and maternal outcomes in triplet pregnancies. Non-trichorionic triamniotic (non-TCTA) pregnancies, such as monochorionic and dichorionic pregnancies, have a higher risk of neonatal intensive care unit (NICU) admission, respiratory distress, sepsis, necrotizing enterocolitis, perinatal and intrauterine mortality compared to TCTA pregnancies (PUBMED:35407479). Dichorionic (DC) triplet pregnancies are more likely to deliver before 30 weeks' gestational age and have very low birthweight neonates compared to trichorionic (TC) triplet pregnancies. Neonatal mortality appears to be mediated by the presence or absence of twin-twin transfusion syndrome (TTTS) (PUBMED:26608832). Dichorionic and monochorionic (DC + MC) triplet gestations are at an increased risk of neonatal morbidity and mortality compared with TC triplet gestations (PUBMED:28367358). The mode of conception, whether through assisted reproductive technology (ART) or spontaneous, does not seem to significantly affect perinatal outcomes in triplet pregnancies. ART-conceived triplets had similar perinatal outcomes compared with spontaneously conceived triplets (PUBMED:29883983). In conclusion, while triplet pregnancies inherently carry higher risks, the outcomes in Auckland or similar settings could be better than expected if there is careful management of the pregnancy, including consideration of chorionicity, timely and adequate prenatal care, and appropriate delivery planning. The studies do not provide specific data for Auckland, but the general principles for managing triplet pregnancies and improving outcomes are likely applicable. Two short calculations after this answer illustrate the confidence-interval arithmetic behind the reported 249‰ mortality rate and the random-effects pooling used in the meta-analysis.
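PUBMED:26714679 reports a perinatal mortality rate of 249‰ (95% CI 189-317) among 67 triplet pregnancies, i.e. 201 fetuses. Assuming this corresponds to roughly 0.249 × 201 ≈ 50 perinatal deaths (the exact count and interval method are not stated in the abstract), a Wilson score interval reproduces a similar range:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half) / denom, (centre + half) / denom

# Assumed: ~50 perinatal deaths among 201 fetuses.
lo, hi = wilson_ci(50, 201)
print(f"rate {1000 * 50 / 201:.0f} per 1000, 95% CI {lo*1000:.0f}-{hi*1000:.0f}")
# -> rate 249 per 1000, 95% CI ~194-313, close to the reported 189-317.
```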
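PUBMED:30584681 pools study-level odds ratios with a random-effects model and reports I² for heterogeneity. The sketch below implements the standard DerSimonian-Laird estimator on hypothetical log-ORs and within-study variances (the review's study-level data are not reproduced in the abstract), returning a pooled OR, its 95% CI, and I².

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Random-effects pooling of effect sizes yi (e.g. log odds ratios)
    with within-study variances vi, using the DerSimonian-Laird tau^2."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                                   # fixed-effect weights
    y_fe = np.sum(w * yi) / np.sum(w)
    q = np.sum(w * (yi - y_fe) ** 2)               # Cochran's Q
    df = len(yi) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1.0 / (vi + tau2)                       # random-effects weights
    y_re = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 in percent
    return np.exp(y_re), np.exp(y_re - 1.96 * se), np.exp(y_re + 1.96 * se), i2

# Hypothetical per-study log-ORs and variances, for illustration only.
pooled_or, lo, hi, i2 = dersimonian_laird([1.4, 0.9, 1.6], [0.30, 0.25, 0.45])
print(f"pooled OR {pooled_or:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```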
Instruction: Does inducible NOS have a protective role against hypoxia/reoxygenation injury in rat heart? Abstracts: abstract_id: PUBMED:20418118 Does inducible NOS have a protective role against hypoxia/reoxygenation injury in rat heart? Purpose: The present study analyzes the role of nitric oxide (NO) derived from inducible NO synthase (iNOS) under cardiac hypoxia/reoxygenation situations. Methods: For this, we have designed a follow-up study of different parameters of cell and tissue damage in the heart of Wistar rats subjected for 30 min to acute hypobaric hypoxia, with or without prior treatment with the selective iNOS inhibitor N-(3-(aminomethyl)benzyl) acetamidine or 1400W (10 mg/kg). The rats were studied at 0 h, 12 h, and 5 days of reoxygenation, analyzing NO production (NOx), lipid peroxidation, apoptosis, and protein nitration expression and location. This is the first time-course study that analyzes the effects of iNOS inhibition by 1400W during hypoxia/reoxygenation in the adult rat heart. Results: The results show that when 1400W was administered before the hypoxic episode, NOx levels fell, while both the lipid peroxidation level and the percentage of apoptotic cells rose throughout the reoxygenation period. Levels of nitrated protein expression fell only at 12 h post-hypoxia. Conclusions: The inhibition of iNOS raises the peroxidative and apoptotic level in the hypoxic heart, indicating that this isoform may have a protective effect on this organ against hypoxia/reoxygenation injuries, and challenging the conventional wisdom that iNOS is deleterious under these conditions. These findings could help in the design of new treatments based on NO pharmacology against hypoxia/reoxygenation dysfunctions. abstract_id: PUBMED:11834473 L-arginine protects human heart cells from low-volume anoxia and reoxygenation. Protective effects of L-arginine were evaluated in a human ventricular heart cell model of low-volume anoxia and reoxygenation independent of alternate cell types. Cell cultures were subjected to 90 min of low-volume anoxia and 30 min of reoxygenation. L-Arginine (0-5.0 mM) was administered during the preanoxic period or the reoxygenation phase. Nitric oxide (NO) production, NO synthase (NOS) activity, cGMP levels, and cellular injury were assessed. To evaluate the effects of L-arginine on cell signaling, the effects of the NOS antagonist N(G)-nitro-L-arginine methyl ester, NO donor S-nitroso-N-acetyl-penicillamine, guanylate cyclase inhibitor methylene blue, cGMP analog 8-bromo-cGMP, and ATP-sensitive K+ channel antagonist glibenclamide were examined. Our data indicate that low-volume anoxia and reoxygenation increased NOS activity and facilitated the conversion of L-arginine to NO, which provided protection against cellular injury in a dose-dependent fashion. In addition, L-arginine cardioprotection was achieved by the activation of guanylate cyclase, leading to increased cGMP levels in human heart cells. This action involves a glibenclamide-sensitive, NO-cGMP-dependent pathway. abstract_id: PUBMED:10760374 Involvement of peroxynitrite and hydroxyradical generated from nitric oxide in hypoxia/reoxygenation injury in rat cerebrocortical slices. The changes in nitric oxide (NO) formation during hypoxia and reoxygenation were measured in slices of rat cerebral cortex, and the possible involvement of NO and its decomposition products, including peroxynitrite and hydroxyradical, in the hypoxia/reoxygenation injury was subsequently investigated.
NO formation estimated from cGMP accumulation in the extracellular fluids was enhanced during hypoxia and to a lesser extent in the reoxygenation period. The mRNA for inducible NO synthase (NOS) was detected 3-5 h after reoxygenation, although neuronal NOS mRNA decreased after reoxygenation. Several NOS inhibitors such as N(G)-monomethyl-L-arginine and N(G)-nitro-L-arginine blocked not only the NO formation but also the hypoxia/reoxygenation injury as determined by lactate dehydrogenase (LDH) leakage. The hypoxia/reoxygenation injury was prevented by peroxynitrite scavengers including deferoxamine and uric acid, or several hydroxyradical scavengers such as dimethylthiourea, 2-mercaptopropionylglycine and D(-) mannitol. In addition, the hypoxia/reoxygenation injury was attenuated by poly(ADP-ribose)synthetase inhibitors such as benzamide, 3-aminobenzamide and 1,5-isoquinolinediol. On the other hand, both N-morpholinosydnonimine, a peroxynitrite generator, and hydroxyradical-liberating solution containing FeCl(3)-ADP and dihydroxyfumarate caused a marked LDH leakage in normoxic slices. These findings suggest that the enhanced formation of NO causes hypoxia/reoxygenation injury after degradation to peroxynitrite and hydroxyradical and the resultant activation of poly(ADP-ribose)synthetase. abstract_id: PUBMED:11846411 Mixed tocopherol preparation is superior to alpha-tocopherol alone against hypoxia-reoxygenation injury. Hypoxia-reoxygenation (H-R) is associated with alterations in oxidant-antioxidant balance and L-arginine-nitric oxide system. Tocopherols decrease the activity of reactive oxygen species (ROS) and yet are not beneficial in clinical trials. It has been proposed that mixed tocopherols as found in nature may be more tissue protective than alpha-tocopherol alone found in commercial preparations. We compared the effect of a mixed tocopherol preparation with that of alpha-tocopherol alone on superoxide dismutase (SOD) activity and iNOS expression in cultured myocytes exposed to H-R. Myocytes from Sprague-Dawley rat hearts were subjected to hypoxia for 24 h followed by reoxygenation for 3 h (H-R). Parallel groups of myocytes were pretreated with alpha-tocopherol alone or a mixed-tocopherol preparation (containing alpha-, gamma-, and delta-tocopherols) (50 microM) for 30 min. H-R resulted in myocyte injury (determined by LDH release), a decrease in SOD activity and an upregulation of iNOS expression/activity. Both tocopherol preparations attenuated cell injury and markedly decreased the effects of H-R on SOD activity and iNOS expression/activity (all P < 0.05 vs H-R group, n = 5). However, the mixed-tocopherol preparation was much superior to alpha-tocopherol in terms of myocyte protection from the adverse effect of H-R (P < 0.05). Lack of efficacy of commercial tocopherol preparations in clinical trials may reflect absence of gamma- and delta-tocopherols. abstract_id: PUBMED:25614059 Protective effects of Millettia pulchra flavonoids on myocardial ischemia in vitro and in vivo. Background: Previous studies have demonstrated that Millettia pulchra flavonoids (MPF) exhibit protective effects on myocardial ischemia reperfusion injury (MI/RI) in isolated rat hearts and show anti-oxidative, anti-hypoxic and anti-stress properties. Methods: In this study, the cardioprotective effects of MPF on myocardial ischemia and its underlying mechanisms were investigated by a hypoxia/reoxygenation (H/R) injury model in vitro and a rat MI/RI model in vivo.
Results: We found that the lactate dehydrogenase (LDH) and inducible nitric oxide synthase (iNOS) activities were decreased in the MPF pretreatment group, whereas the activities of constitutive nitric oxide synthase (cNOS), total nitric oxide synthase (tNOS), Na(+)-K(+)-ATPase and Ca(2+)-Mg(2+)-ATPase were significantly increased. In addition, the cardiocytes were denser in the MPF groups than in the control group. The mortality rate and apoptosis rate of cardiocytes were significantly decreased. Furthermore, pretreatment with MPF in vivo significantly improved the hemodynamics, decreased malondialdehyde (MDA) abundance, increased the activities of plasma superoxide dismutase (SOD) and glutathione peroxidase (GSH-Px) and decreased the expression of the Bax protein and the Bax/Bcl-2 ratio. Conclusions: These results suggest that MPF is an attractive protective substance in myocardial ischemia due to its negative effects on heart rate and inotropy, reduction of myocardial oxidative damage and modulation of gene expression associated with apoptosis. abstract_id: PUBMED:11514268 TGF-beta(1) modulates NOS expression and phosphorylation of Akt/PKB in rat myocytes exposed to hypoxia-reoxygenation. Myocardial hypoxia-reoxygenation (H-R) is associated with upregulation of inducible nitric oxide synthase (iNOS), decrease in endothelial NOS (eNOS), and increase in protein kinase B (PKB). Previous work also shows that transforming growth factor-beta(1) (TGF-beta(1)) can attenuate myocardial injury induced by H-R. We examined the modulation of NOS and PKB expression in response to H-R by TGF-beta(1). Myocytes from Sprague-Dawley rat hearts were cultured and exposed to hypoxia (95% N(2)-5% CO(2), PO(2) ~30 mmHg) for 24 h and reoxygenation (95% air-5% CO(2)) for 3 h. Myocytes were then examined for lactate dehydrogenase (LDH) release, iNOS activity (conversion of L-[(3)H]arginine to L-[(3)H]citrulline), iNOS and eNOS expression, and PKB phosphorylation. H-R alone resulted in myocyte injury, upregulation of iNOS activity and expression, decrease in eNOS expression, and increase in PKB phosphorylation (all P < 0.05 vs. cells cultured in normoxic conditions). Treatment of myocytes with TGF-beta(1) (1 ng/ml) resulted in a reduction in LDH release, attenuation of the alterations in NOS expression (both iNOS and eNOS), and PKB phosphorylation in response to H-R (all P < 0.05 vs. H-R alone). These observations suggest that TGF-beta(1) decreases H-R injury and attenuates alterations in NOS and PKB phosphorylation in myocytes exposed to H-R. abstract_id: PUBMED:12124433 Aspirin provides cyclin-dependent kinase 5-dependent protection against subsequent hypoxia/reoxygenation damage in culture. Aspirin [acetylsalicylic acid (ASA)] is an anti-inflammatory drug that protects against cellular injury by inhibiting cyclooxygenases (COX), inducible nitric oxide synthase (iNOS) and p44/42 mitogen-activated protein kinase (p44/42 MAPK), or by preventing translocation of nuclear factor kappaB (NF-kappaB). We studied the effect of ASA pre-treatment on neuronal survival after hypoxia/reoxygenation damage in rat spinal cord (SC) cultures. In this injury model, COX, iNOS and NF-kappaB played no role in the early neuronal death. A 20-h treatment with 3 mM ASA prior to hypoxia/reoxygenation blocked the hypoxia/reoxygenation-induced lactate dehydrogenase (LDH) release from neurons.
This neuroprotection was associated with increased phosphorylation of neurofilaments, which are substrates of p44/42 MAPK and cyclin-dependent kinase 5 (Cdk5). PD98059, a p44/42 MAPK inhibitor, had no effect on ASA-induced tolerance, but olomoucine and roscovitine, Cdk5 inhibitors, reduced ASA neuroprotection. Hypoxia/reoxygenation alone reduced both the protein amount and activity of Cdk5, and this reduction was inhibited by pre-treatment with ASA. Moreover, the protein amount of a neuronal Cdk5 activator, p35, recovered after reoxygenation only in ASA-treated samples. The prevention of the loss in Cdk5 activity during reoxygenation was crucial for ASA-induced protection, because co-administration of Cdk5 inhibitors at the onset of reoxygenation abolished the protection. In conclusion, pre-treatment with ASA induces tolerance against hypoxia/reoxygenation damage in spinal cord cultures by restoring Cdk5 and p35 protein expression. abstract_id: PUBMED:21158074 Protective effects of ginsenoside Rb3 on hypoxic/ischemic brain injury and involved mechanisms. Aim: To observe protective effects and involved mechanisms of ginsenoside Rb3 on hypoxic/ischemic brain injury, using cultured hippocampal neurons, rat hippocampal slices and intact animals. Methods: (1) Mice were sealed in 150-ml glass containers, and their survival time was observed. (2) During simulated ischemia and after reoxygenation, changes of orthodromic population spikes (OPS) in the area CA1 of hippocampal slice were investigated. (3) By using histochemistry, the expressions of NOS in CA1 area of rat hippocampus after hypoxic exposure were observed. (4) Using LDH detection, tests of total NOS, iNOS and cNOS activity, the protective effects of ginsenoside Rb3 were investigated on cultured hippocampal neurons treated with hypoxia. Results: (1) Given ginsenoside Rb3 (10 mmol/L), mice survived significantly longer than those in the control group. (2) The occurrence of HIP (hypoxic injury potentials) decreased after administration of ginsenoside Rb3 (60 micromol/L) in many slices, while the recovery rate and amplitude of OPS after reoxygenation were significantly higher than those of the control group. (3) In CA1 area of rat hippocampus, NOS-positive neurons increased at the end of 24 h hypoxia and further 24 h reoxygenation, while the number of NOS-positive neurons decreased after treatment with ginsenoside Rb3. (4) The LDH leakage rate of cultured rat hippocampal neurons increased at the end of hypoxia, while it decreased after treatment with Rb3. Moreover, the total NOS, especially iNOS activity of these neurons also decreased. Conclusion: Ginsenoside Rb3 has a significant protective effect on hypoxic-ischemic injury of neurons, and this involves the stabilization of the cell membrane, the inhibition of the expression and activity of NOS, especially iNOS activity. abstract_id: PUBMED:14698483 Protective effect of growth hormone on neuronal apoptosis after hypoxia-ischemia in the neonatal rat brain. Recent studies have shown that growth hormone (GH) can reduce neuronal loss after hypoxic-ischemic injury (HI) in neonatal and juvenile rat brains. Here, we investigated whether GH exerts its neuroprotective role through an anti-apoptotic effect in neonatal rat brains damaged by severe HI. Gross and histological observations showed that the extent of brain damage was found to be reduced in GH-treated brain at E7 after injury.
In a terminal transferase-mediated dUTP nick-end-labeling (TUNEL) study, TUNEL-positive apoptotic cells were localized only at the damaged region in animals treated with saline, which was confirmed by electron microscopy. In an immunohistochemical study with anti-bcl-2, -bax, -bad, -neuronal nitric oxide synthase (nNOS), -inducible NOS (iNOS) and -endothelial NOS (eNOS) antibodies, we observed that bax, bad, iNOS and eNOS were elevated in the saline-treated group. This study thus suggests that the protective role of GH against HI injury is mediated through an anti-apoptotic effect, which offers the possibility of GH application for the treatment of neonatal HI encephalopathy. abstract_id: PUBMED:12624303 Aspirin inhibits p44/42 mitogen-activated protein kinase and is protective against hypoxia/reoxygenation neuronal damage. Background And Purpose: Acetylsalicylic acid (ASA) is preventive against stroke and protects against focal brain ischemia in rats. We studied the mechanisms by which ASA provides neuroprotection against hypoxia/reoxygenation (H/R) injury. Methods: Spinal cord cultures exposed to 20 hours of hypoxia followed by reoxygenation were treated with a vehicle, ASA or inhibitors of inducible nitric oxide synthase (iNOS), mitogen-activated protein kinases p38 MAPK and ERK1/2, or an N-methyl-d-aspartic acid (NMDA) receptor antagonist. Cell viability was assessed by LDH release measurement and cell counts. Prostaglandin production was measured by enzyme immunoassay, MAPK signaling by immunoblotting, and DNA binding of nuclear factor-kappaB (NF-kappaB) and activating protein-1 (AP-1) by electrophoretic mobility shift assay. Results: One to 3 mmol/L ASA inhibited H/R-induced neuronal death when present during H/R but not when administered only for the reoxygenation period. Prostaglandin E2 production was very low and was not altered by ASA. The AP-1 and NF-kappaB DNA binding activities increased after H/R. ASA increased the H/R-induced AP-1 binding but had no effect on NF-kappaB binding. H/R induced a sustained ERK1/2 activation followed by neuronal death, whereas no changes in p38 or c-Jun N-terminal kinase were detected. ASA strongly inhibited this ERK1/2 activation. PD98059, an ERK1/2 inhibitor, was also neuroprotective, prevented H/R-induced ERK1/2 activation, and had no effect on NF-kappaB binding activity. Inhibition of NMDA receptors, iNOS, or p38 MAPK did not provide neuroprotection. Conclusions: Inhibition of the sustained activation of ERK1/2 may partially contribute to neuroprotection achieved by ASA against H/R injury. Answer: Yes, inducible NOS (iNOS) appears to have a protective role against hypoxia/reoxygenation injury in rat heart. A study analyzing the role of nitric oxide (NO) derived from iNOS under cardiac hypoxia/reoxygenation conditions found that the inhibition of iNOS with a selective inhibitor before a hypoxic episode led to decreased NO production, increased lipid peroxidation levels, and a higher percentage of apoptotic cells during the reoxygenation period. These results suggest that iNOS may have a protective effect on the heart against hypoxia/reoxygenation injuries (PUBMED:20418118). Additionally, other studies have shown that NO, which is produced by NOS, can be protective against cellular injury.
For instance, L-arginine, which is a substrate for NO production, was found to protect human heart cells from low-volume anoxia and reoxygenation by increasing NO production and activating a protective NO-cGMP-dependent pathway (PUBMED:11834473). Furthermore, the involvement of NO and its decomposition products, including peroxynitrite and the hydroxyl radical, in hypoxia/reoxygenation injury has been investigated, suggesting that the enhanced formation of NO can lead to injury after degradation to peroxynitrite and the hydroxyl radical, but also indicating that NO formation is a complex process with potential protective roles depending on the context (PUBMED:10760374). In summary, the evidence suggests that iNOS and the NO it produces can have a protective role against hypoxia/reoxygenation injury in the rat heart, although the exact mechanisms and conditions under which this protection occurs may be complex and context-dependent.
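Several of these abstracts quantify cell injury through an LDH leakage (or release) rate. They do not spell out the exact formula used, but a common convention is to report extracellular LDH as a percentage of total LDH activity. The sketch below illustrates that convention only; the function name and the activity values are invented for illustration.

```python
def ldh_leakage_rate(ldh_medium: float, ldh_intracellular: float) -> float:
    """Percentage LDH leakage: extracellular LDH activity as a share of
    total (extracellular + intracellular) activity. A common convention,
    not necessarily the formula used in the cited studies."""
    total = ldh_medium + ldh_intracellular
    if total <= 0:
        raise ValueError("total LDH activity must be positive")
    return 100.0 * ldh_medium / total

# Hypothetical activities (U/L) for control vs. hypoxia/reoxygenation wells
print(f"control: {ldh_leakage_rate(120.0, 880.0):.1f}%")   # ~12% leakage
print(f"H/R:     {ldh_leakage_rate(350.0, 650.0):.1f}%")   # ~35% leakage
```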
Instruction: Hospital performance: competing or shared values? Abstracts: abstract_id: PUBMED:24560230 Teaching hospital performance: towards a community of shared values? This paper explores the performance dimensions of Italian teaching hospitals (THs) by considering the multiple constituent model approach, using measures that are subjective and based on individual ideals and preferences. Our research replicates a study of a French TH and deepens it by adjusting it to the context of an Italian TH. The purposes of this research were as follows: to identify emerging views on the performance of teaching hospitals and to analyze how these views vary among hospital stakeholders. We conducted an in-depth case study of a TH using a quantitative survey method. The survey uses a questionnaire based on Parsons' social system action theory, which embraces the major models of organizational performance and covers three groups of internal stakeholders: physicians, caregivers and administrative staff. The questionnaires were distributed between April and September 2011. The results confirm that hospital performance is multifaceted and includes the dimensions of efficiency, effectiveness and quality of care, as well as organizational and human features. There is a high degree of consensus among all observed stakeholder groups about these values, and a shared view of performance is emerging. Our research provides useful information for defining management priorities to improve the performance of THs. abstract_id: PUBMED:18035448 Hospital performance: competing or shared values? Objective: To identify the emerging views on hospital performance and to analyze how these views vary among hospital stakeholders. Methods: Study Setting: Three hospital stakeholder groups (physicians, caregivers, and administrative staff) in a large Paris teaching hospital. Study Design: A case study combining a qualitative analysis (interviews of 80 key hospital stakeholders and a survey of hospital staff) and a quantitative analysis (a questionnaire composed of 4 theoretical dimensions, 13 sub-dimensions, 66 items) with triangulation of the results. Results: Hospital stakeholders assign greatest importance to the human relations dimension, i.e., organizational climate (professional and public service values) and quality of work life. These values attract a high degree of consensus among stakeholders (no statistical difference between physicians, caregivers and administrative staff). Conclusions: Our findings challenge the mainstream view that competing values underlie hospital performance. Currently, views are to some extent shared among different stakeholder groups. A reason for this could be the need to form a more united front in the face of recent reforms. This common emphasis on professional and public service values could be the basis for formulating management priorities in teaching hospitals in order to improve performance. abstract_id: PUBMED:31373091 Learning values in shared decision-making in undergraduate paramedic education. Rationale: While professional values are seen as a fundamental element of shared decision-making, there is little research on how they are learnt within the paramedic profession. Aims And Objectives: This study sought to understand how student paramedics developed their professional values within the ambulance practicum. Method: 37 paramedic students and 5 paramedic educators submitted diary entries and took part in focus groups.
Results: There was a general consensus among participants that the study of professional values was viewed as a scholarly activity within the university, while the practicum was viewed as unacademic. Students identified that much of their learning replicated the values of the environment they were in, and educators voiced their concerns that this may not always result in an understanding of the reasoning behind professional values. Conclusion: This study has highlighted that professional values would be worthy of being explored within the paramedic undergraduate curriculum, in an effort to enable students to understand and develop their values during their education and appreciate the importance of values within shared decision-making. abstract_id: PUBMED:28917833 Values-based shared decision-making in the antenatal period. Despite advances in life-saving technology for critically ill neonates, challenges continue to arise for infants delivered with extreme prematurity, congenital anomalies, and genetic conditions that exceed the limits of currently available interventions. In these situations, parents are forced to make cognitively and emotionally difficult decisions, in discussion with a neonatologist, regarding how aggressively to provide supportive measures at the time of delivery and at what point burdens of therapy outweigh benefits. Current guidelines recommend that parents' values should guide these decisions; however, little is known about the values parents hold, and how those values are employed in the context of complexity, uncertainty, and emotionality of these situations. Systematic investigation of how parents derive their values and how clinicians should engage with parents about those values is necessary to guide the development of interventions to enhance shared decision-making processes, ultimately improving satisfaction, coping, and resilience and minimizing the potential for regret. abstract_id: PUBMED:10265225 Shared hospital services: study report. The study of shared hospital services reported in this article was conducted to determine the magnitude of shared administrative and clinical programs, which institutions participate, and which services are most frequently shared. To ensure that hospitals with different characteristics and in various areas were represented, the investigators mailed questionnaires to 1,731 of the nation's 5,987 short-term, acute care general hospitals in nine census regions. Responses indicated the following trends: Not-for-profit hospitals outrank other types of hospitals in using shared services. About 90 percent of hospitals with 200 to 499 beds shared services. For almost all categories of services, a percentage increase occurred in the number of hospitals participating in shared programs. The three most shared services were purchasing, data processing, and insurance programs. Hospitals shared more administrative than clinical services. According to administrators' responses, cost containment was the most common reason for sharing services. As the pressures to control health care costs increase, the investigators predict that hospitals will share more clinical services. Shared services decrease unit costs, however, only when providers have excess capacity. abstract_id: PUBMED:38027782 Green shared vision: A bridge between responsible leadership and green behavior under individual green values. The pressure on businesses to be environmentally conscious and focus on sustainable development is growing due to environmental challenges.
Companies are adopting ecological practices and policies to improve their environmentally friendly performance. To achieve this, organizations must understand and change the behavior of workers to align it with the organization's ecological objectives. The study endeavors to integrate research on the responsible style of leaders and the green behaviors of employees (in-role and extra-role green behaviors) through the mediation of green shared vision, and to analyze the moderating role of individual green values. For collecting the data, a questionnaire-based survey was conducted among MBA executive program students with at least a year of experience in manufacturing. Out of the 450 questionnaires distributed, only 307 useful responses were obtained. The collected data were analyzed using SPSS and AMOS. Ethical standards were followed, and participants were assured that their responses would be confidential. The study found that responsible leadership positively impacts green behaviors among employees. This means that when leaders within an organization demonstrate responsible and environmentally conscious behavior, it tends to encourage employees to engage in green behaviors. The study also discovered that a "green shared vision" partially mediates the relationship between responsible leadership and in-role green behavior. In contrast, green shared vision does not mediate the relation between responsible leadership and extra-role green behavior. Moreover, this study also finds that the relationship between green shared vision and in-role and extra-role green behavior is strengthened by individual green values. The study highlights the importance of responsible leadership and the role of green shared vision and individual green values in promoting environmentally friendly behavior in the workplace. abstract_id: PUBMED:10276724 The concept and reality of shared hospital services. This decentralized model of sharing health care services, in which a small community hospital through its own initiative has continued to successfully provide services not only to its peer hospitals but to larger secondary hospitals, is a useful example of effective co-operation and co-ordination between hospitals. Shared Services has successfully evolved from a concept to its present reality as a provider of quality health care services. These programs supply co-ordinated services which both enhance patient care and benefit all hospital staff members. As the merits of both the Respiratory Therapy and Library Services Programs continue to demonstrate themselves, and as the spirit of cooperation between member hospitals continues to grow, the potential for future development of the Shared Services Program is yet to be realized. abstract_id: PUBMED:36826129 The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making. Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care.
Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to reflect on when considering the use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue about patient values; reviews all clinically appropriate options; and supports a shared decision about which option best meets the patient's values. The guide can be used for diverse clinical applications of AI. abstract_id: PUBMED:10112305 Planning for shared services: lessons from experiences of the Princess Margaret Hospital. The plan to rebuild The Princess Margaret Hospital (PMH) on University Avenue in Toronto has provided several opportunities for institutions in the area to share programs and services. The PMH planning process has raised questions about the factors that influence the development of shared services plans, and the organizational and management challenges posed by such plans. This experience suggests 13 principles to develop effective shared services. abstract_id: PUBMED:30803904 Shared patient and provider values in end-stage renal disease decision making: Identifying the tensions. Objective: To examine concordance and tensions in values among stakeholder groups across the shared decision making process for end-stage renal disease patients treated with hemodialysis. Methods: A thematic analysis of transcripts from three stakeholder groups: hemodialysis patients (2 groups, n = 17), nephrologists (1 group, n = 9), and non-physician providers (dietitians, social workers, and registered nurses) (1 group, n = 8). A framework of decision-making components (communication, information, decision, behavior, and outcome) guided analysis of values within and across groups. Results: Shared values included communication that informed patients and involved family; information about function, trajectory, life context, and patient experience; behavior to manage diet; and outcomes of function, safety, and survival. Identified thematic tensions in patient-provider values were: personalized vs. routinized communication; shared vs. separate information; managed vs. adherent behavior; and participating vs. protected outcome. Conclusion: Emphasizing shared values and bridging values conflicts may ease tensions surrounding self-care behavior by facilitating discussions about diet, medications, and consistent dialyzing. Practice Implications: More individualized information needs to be delivered to patients in a personalized communication approach that can still be achieved within a busy dialysis clinic setting. Answer: The concept of hospital performance can be approached from the perspective of whether the values driving performance are competing or shared among stakeholders. The research on this topic suggests that there is a significant degree of consensus among hospital stakeholders regarding the values that underpin hospital performance, indicating a trend towards shared rather than competing values. In a study of a large Paris teaching hospital, stakeholders placed the highest importance on the human relations dimension, such as organizational climate and quality of work life, with a high degree of consensus among physicians, caregivers, and administrative staff (PUBMED:18035448).
This finding challenges the mainstream view that competing values underlie hospital performance and suggests that professional and public service values are shared among different stakeholder groups. Similarly, an Italian teaching hospital study found that performance is multifaceted, including efficiency, effectiveness, quality of care, organizational, and human features. There was a high degree of consensus among all observed stakeholder groups about these values, indicating a shared view of performance (PUBMED:24560230). In the context of shared decision-making, studies have emphasized the importance of aligning professional values with shared decision-making processes, whether it be in undergraduate paramedic education (PUBMED:31373091) or in the antenatal period when parents face difficult decisions regarding their critically ill neonates (PUBMED:28917833). The literature also discusses the concept of shared services in hospitals, which is a cooperative model that enhances patient care and benefits hospital staff. Shared services are more administrative than clinical, with cost containment being a common reason for sharing services (PUBMED:10265225, PUBMED:10276724). Moreover, the integration of responsible leadership and green behaviors in the workplace is facilitated by a green shared vision, which is influenced by individual green values (PUBMED:38027782). In clinical care, a values-based guide for shared decision-making is proposed to incorporate patient values when using artificial intelligence in healthcare (PUBMED:36826129). Additionally, shared patient and provider values in end-stage renal disease decision-making have been identified, with tensions arising in areas such as communication, information, behavior, and outcomes (PUBMED:30803904). Overall, the evidence suggests that within hospitals, there is a movement towards a community of shared values among stakeholders, which can serve as a foundation for improving hospital performance and decision-making processes.
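Both hospital studies operationalize "shared values" as the absence of a statistically significant difference in questionnaire scores across physicians, caregivers, and administrative staff. As a minimal sketch of how such a between-group check could be run, the snippet below applies a one-way ANOVA to Likert-scale scores; the group sizes and score distributions are invented for illustration and are not data from the cited studies.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical 1-7 Likert scores on a "human relations" dimension,
# one array per stakeholder group (all values invented).
physicians = rng.normal(5.8, 0.8, 60).clip(1, 7)
caregivers = rng.normal(5.9, 0.8, 120).clip(1, 7)
admin_staff = rng.normal(5.7, 0.8, 40).clip(1, 7)

f_stat, p_value = f_oneway(physicians, caregivers, admin_staff)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A large p-value here is consistent with a shared view across groups.
```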
Instruction: Can leisure-time physical activity improve health checkup results? Abstracts: abstract_id: PUBMED:27265532 Can leisure-time physical activity improve health checkup results? Evidence from Japanese occupational panel data. Objectives: We examined the extent to which changes in worker health, as measured by health checkup items, were associated with increased intensity of leisure-time physical activity (LTPA) after controlling for individual time-invariant attributes. Methods: We used panel data from two to four waves of a Japanese occupational cohort survey, focusing on 30,206 observations of 10,106 individuals (7,669 men and 2,437 women) aged 18-76 years. We estimated first-difference and mean-centered fixed effects models to examine how changes in 10 health checkup items were associated with changes in LTPA intensity. We considered four LTPA intensity levels (none, low, moderate, and vigorous), based on self-reported assessments. Results: For men, low-density lipoprotein cholesterol levels, glycated hemoglobin levels, body mass index, and waist circumference improved when LTPA intensity was increased, even to a low level, whereas triglyceride, high-density lipoprotein cholesterol, and fasting blood glucose levels improved when LTPA intensity was increased to moderate or vigorous levels. Blood pressure (both systolic and diastolic) and total cholesterol levels were only modestly responsive to changes in LTPA intensity. For women, blood pressure (both systolic and diastolic) and waist circumference were negatively associated with LTPA intensity, whereas the other variables showed more modest effects. Conclusions: The results suggest that even low- to moderate-intensity LTPA can improve health checkup results; however, the lowest LTPA intensity associated with improvement in health depends on health-risk factors as well as gender. abstract_id: PUBMED:30728969 Association between leisure-time physical activity and sedentary behavior with cardiometabolic health in the ELSA-Brasil participants. Objectives: To assess associations, both individually and in combination, between leisure-time physical activity and sedentary behavior, and cardiometabolic health. Methods: This cross-sectional study included 13,931 civil servants participating in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). Leisure-time physical activity was analyzed using the leisure-time domain of the long-form International Physical Activity Questionnaire, while questions related to cumulative sitting time and leisure-based screen time on a weekday and on one day on the weekend were used to establish sedentary behavior. Data analysis was performed using multivariate logistic regression. Results: Following adjustment for confounding variables, high levels of leisure-time physical activity and low levels of sedentary behavior were both associated with favorable cardiometabolic health markers in both genders. When these two factors were analyzed in conjunction, taking the combination of low levels of leisure-time physical activity and high levels of sedentary behavior as the reference, the inverse associations with cardiometabolic variables became even more significant.
Conclusion: High levels of leisure-time physical activity and low levels of sedentary behavior were both inversely associated with the cardiometabolic variables analyzed; however, the two variables when evaluated in conjunction appear to produce more consistent associations, particularly when sedentary behavior is evaluated according to leisure-based screen time. abstract_id: PUBMED:25999602 Leisure-time physical activity moderates the longitudinal associations between work-family spillover and physical health. Previous research has documented cross-sectional associations between negative and positive work-family spillover and physical health. Using an effort-recovery model, the study tested the hypothesis that engagement in greater leisure-time physical activity would facilitate recovery processes that buffer the negative health effects of increasing work-family spillover. Employed adults (N = 1,354) completed two waves of the National Survey of Midlife Development in the United States (MIDUS). Results indicated that an increase in negative work-family spillover across nine years was associated with decreased physical health and an increased number of chronic conditions at Time 2. Moreover, more time spent on moderate leisure-time physical activity buffered many of the associations between increasing negative spillover and declining health. Implications of the findings are discussed. abstract_id: PUBMED:15508804 Leisure time physical activity and subjective health state in adults. Background: Leisure time physical activity improves health. Aim: The aim of this study was to analyze the association between leisure time physical activity level and subjective health state in adults. Material And Methods: We analyzed the level of recreational physical activity and subjective health state in a representative sample of 1029 adults in Lodz, including 488 men and 541 women. Physical activity assessment was carried out with the interviewer-administered Seven-Day Physical Activity Recall (SDPAR). In addition, physical activity, subjective health state and physical fitness, determined by maximal oxygen uptake (VO(2max)), were assessed in 271 objectively healthy people. Subjective health state was determined by a visual analogue score taken from the standardized questionnaire EuroQol 5D. Results: More than half of the subjects (60.8%) did not participate in leisure time sport activities. The proportion who expended 1 to 999 kcal/week on sport was 23.4%. Only 15.8% of participants spent more than 1000 kcal/week on leisure time physical activity. In both groups (total n = 1029 and healthy people n = 271), subjects with 0.0 kcal/week energy expenditure on leisure time sport activities reported significantly lower subjective health state scores compared with people with higher energy expenditure, especially 1000-1999 kcal/week. In this study a significant correlation was found between energy expenditure on leisure time physical activity and subjective health state (in group n = 1029 r = 0.03, p < 0.0001 and in group n = 271 r = 0.4, p < 0.0001). Conclusions: This study demonstrates that 80.4% of subjects did not achieve the level of physical activity recommended by experts in health promotion and prevention of cardiovascular and other chronic diseases. Level of leisure time physical activity influences subjective health state in adults. abstract_id: PUBMED:37174789 Ageing, Leisure Time Physical Activity and Health in Europe.
The goal of this article is to analyse leisure time physical activity (LTPA) and health-driven motivations to engage in such activity among elderly people in the European Union. We take as a basis the recommendations of the World Health Organisation (WHO) and the theory relating physical activity to individual factors (age, gender, socio-economic status) and contextual factors (habitat, community infrastructure, the welfare-state model of the country of residence). Data are taken from Eurobarometer 88.4. The Generalized Structural Equation Model (GSEM) methodology was used, with the STATA program. The results show that 65.3% of EU citizens over the age of 60 engage in some form of LTPA, that 40.4% do so for health reasons, and that only 32.3% engage in LTPA that meets the minimum guidelines set by the WHO. In addition, there are large differences based on individual and contextual characteristics. Those found to practice the most were men with high socio-economic status, belonging to the middle and upper social classes, living in rural areas with infrastructure for physical activity and, above all, in countries with the Nordic model of social welfare. abstract_id: PUBMED:26557941 Leisure time physical activity and health-related behaviours after liver transplantation: a prospective, single-centre study. Introduction: Leisure time physical activity is of proven significance in surveys of fitness levels in various patient groups. Low physical functioning may affect recovery after liver transplantation (LTx). Aim: To assess patients' leisure time activity and health-related habits after transplantation. Material And Methods: One hundred and seven patients after LTx were included. They were divided into groups depending on the aetiology of the liver disease and the period after LTx. The Minnesota Leisure Time Physical Activity Questionnaire (MILTPAQ) and Health Behaviour Inventory (HBI) were applied. Results: Neither the primary indication for the procedure nor the period after surgery had a significant relationship with physical activity assessed with the MILTPAQ; however, activity was lower in females than males (1804.3 ± 1848.9 vs. 2619.9 ± 2067; p = 0.03). Age at survey/surgery was inversely associated with activity (p = 0.02 and p = 0.03, respectively). Health Behaviour Inventory analysis showed a correlation between all four of its domains and age at transplantation/survey (p < 0.001 for both). There was a negative correlation between positive mental attitude and body mass index (BMI). Conclusions: The primary indications for grafting and, surprisingly, the period after surgery did not seem to be related to the patients' physical activity in leisure time. Younger and leaner patients appeared to understand the standards of healthy behaviour better and implement them in their daily activities. As higher BMI is associated with a negative mental attitude in patients after LTx, particular emphasis should be placed on proper counselling in this subgroup of patients. abstract_id: PUBMED:34639454 The Role of Physical Activity-Related Health Competence and Leisure-Time Physical Activity for Physical Health and Metabolic Syndrome: A Structural Equation Modeling Approach for German Office Workers. Office workers (OWs) are prone to insufficient physical activity (PA), which increases their risk of metabolic syndrome (MetS) and impaired physical health.
The Physical Activity-related Health Competence (PAHCO) model holds the potential to facilitate a healthy, physically active lifestyle. Therefore, in this study, we investigate the interplay between PAHCO, leisure-time PA, physical health, and MetS in OWs in Germany. In a cross-sectional study, OWs (N = 316, 25% female) completed self-report questionnaires along with an occupational health checkup to examine their Metabolic Syndrome Severity Score (MetSSS) values. Structural equation modeling indicated a strong positive association between PAHCO and leisure-time PA and a small positive association with physical health. PAHCO showed a considerable negative association with the MetSSS. Leisure-time PA was a positive mediator for the PAHCO-physical health association but was not a significant mediator for the association between PAHCO and the MetSSS. These findings underscore the importance of PAHCO in the context of leisure-time PA, physical health, and MetS in OWs. Furthermore, our findings highlight the health-enhancing value of the qualitative aspects of PA, such as motivational and volitional components in PA participation, with respect to physical health and MetS. abstract_id: PUBMED:31407961 Reasons and Barriers for University Students' Leisure-Time Physical Activity: Moderating Effect of Health Education. This study aimed to investigate the relationship between health education and the motives and barriers for university students' engagement in leisure-time physical activity (PA). The research sample included 709 students (312 females and 397 males) in different years of study, ranging in age from 18 to 25 years. A questionnaire survey method revealed a significant positive relationship between fitness and health motives and students' leisure-time PA. External barriers were negative predictors of students' leisure-time PA, while more health education-related courses per week were positively associated with students' leisure-time PA. Finally, we found that the number of health education-related courses per week moderated the relationship between fitness and health motives and students' leisure-time PA. These findings suggest that university students' fitness and health motives and external barriers to being physically active outweigh other motives and barriers in determining their leisure-time PA. In addition, health education in university studies can effectively increase students' health-related motivation for PA. abstract_id: PUBMED:36049961 Physical activity in leisure time and at work and risk of dementia: A prospective cohort study of 117,616 individuals. Background And Aims: Up to 40% of all dementia cases may be preventable, primarily by treating or acting on well-established cardiovascular risk factors such as diabetes, hypertension, smoking, and physical inactivity. Whether physical inactivity is associated with risk of non-Alzheimer's dementia - a disease influenced by cardiovascular risk factors - and whether a given association differs for physical activity in leisure time and at work remains unknown. Methods: We conducted a prospective cohort study including 117,616 individuals from the Copenhagen General Population Study and the Copenhagen City Heart Study with up to 43 years of follow-up. Results: Multifactorially adjusted hazard ratios for low versus high physical activity in leisure time were 1.60 (95% confidence interval 1.40-1.83) for non-Alzheimer's dementia and 0.94 (0.80-1.11) for Alzheimer's disease.
Corresponding values for non-Alzheimer's dementia after additional adjustment for physical activity at work or apolipoprotein E (APOE) genotype were 1.60 (1.40-1.83) and 1.82 (1.34-2.15). Multifactorially and APOE-adjusted hazard ratios for high versus low physical activity at work were 1.50 (1.10-2.05) for non-Alzheimer's dementia and 1.62 (1.14-2.31) for Alzheimer's disease. When combining the two types of physical activity, physical activity in leisure time had the strongest relationship with risk of non-Alzheimer's dementia. Conclusions: Physical inactivity in leisure time was associated with increased risk of non-Alzheimer's dementia, independent of modifiable risk factors and physical activity at work. The present study thus provides evidence for public health advice on physical activity in leisure time for the vascular part of dementia. abstract_id: PUBMED:37269629 Short version of the Minnesota Leisure Time Physical Activity Questionnaire for the general population: MLTPAQ 9+2. The aim of this study was to propose a short version of the Minnesota Leisure Time Physical Activity Questionnaire (MLTPAQ) to estimate the prevalence of physical inactivity. The MLTPAQ was included in the Galician Risk Behavior Information System survey. Each activity was associated with an intensity code, in multiples of 1 MET, and a weekly energy expenditure of less than 1000 kcal was defined as physically inactive. The prevalence of physical inactivity was calculated with the complete and short lists (the short list comprises the nine activities performed by at least 10% of the population). The complete and short lists classify physical inactivity with an agreement of 98.8%. Misclassified people perform one or two additional activities, so two open-response items were included. This study proposes a short version (9+2 items), suitable for inclusion in a general adult population health survey. Answer: Yes, leisure time physical activity (LTPA) can improve health checkup results. Evidence from Japanese occupational panel data indicates that increased intensity of LTPA is associated with improvements in various health checkup items such as low-density lipoprotein cholesterol levels, glycated hemoglobin levels, body mass index, waist circumference, triglyceride levels, high-density lipoprotein cholesterol levels, and fasting blood glucose levels in men. For women, improvements were noted in blood pressure and waist circumference with increased LTPA intensity (PUBMED:27265532). Additionally, high levels of LTPA and low levels of sedentary behavior were both inversely associated with cardiometabolic variables, and the combination of both factors produced more consistent associations (PUBMED:30728969). Moreover, leisure-time physical activity has been found to moderate the longitudinal associations between work-family spillover and physical health, buffering the negative health effects of increasing work-family spillover (PUBMED:25999602). It has also been shown that leisure time physical activity level is associated with subjective health state in adults, with those engaging in higher levels of physical activity reporting better subjective health states (PUBMED:15508804). In the context of aging, 65.3% of EU citizens over the age of 60 engage in some form of LTPA, and 40.4% do so for health reasons, although only 32.3% meet the minimum guidelines set by the WHO (PUBMED:37174789).
After liver transplantation, leisure time physical activity and health-related habits are important for recovery, with younger and leaner patients more likely to engage in healthy behaviors (PUBMED:26557941). Furthermore, for German office workers, Physical Activity-related Health Competence (PAHCO) was positively associated with leisure-time PA and physical health, and showed a considerable negative association with the MetSSS (PUBMED:34639454). Health education has also been found to positively influence university students' engagement in leisure-time physical activity (PUBMED:31407961). Lastly, physical inactivity in leisure time was associated with increased risk of non-Alzheimer's dementia, suggesting the importance of LTPA for the vascular part of dementia (PUBMED:36049961).
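The headline evidence here (PUBMED:27265532) comes from mean-centered fixed effects models, which relate within-person changes in LTPA intensity to within-person changes in checkup items while sweeping out time-invariant individual attributes. A minimal sketch of that within transformation on a toy long-format panel is shown below; the variable names and all values are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format panel: one row per person-wave
# (the actual survey has ~30,000 observations of ~10,000 workers).
df = pd.DataFrame({
    "person": [1, 1, 2, 2, 3, 3],
    "ltpa":   [0, 2, 1, 3, 0, 1],           # intensity: 0=none ... 3=vigorous
    "hba1c":  [6.1, 5.8, 5.9, 5.5, 6.3, 6.2],
})

# Mean-centering (within transformation): subtract each person's own mean,
# which removes time-invariant individual attributes from the comparison.
for col in ["ltpa", "hba1c"]:
    df[col + "_dm"] = df[col] - df.groupby("person")[col].transform("mean")

# OLS on the demeaned data yields the fixed-effects slope.
x = df["ltpa_dm"].to_numpy()
y = df["hba1c_dm"].to_numpy()
beta = (x @ y) / (x @ x)
print(f"within-person association of one LTPA level with HbA1c: {beta:.3f}")
```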
Instruction: Does the decline in road traffic injury in London vary by ethnicity? Abstracts: abstract_id: PUBMED:18836052 Does the decline in road traffic injury in London vary by ethnicity? Objective: To examine ethnic variations in trends in road traffic injuries in London. Design: Analysis of STATS19 data comparing trends in road traffic casualty rates by ethnic group. Setting: London, 2001-6. Subjects: Children (≤14 years) and adults (≥15 years). Main Outcome Measures: Annual casualty rates in white, black, and Asian pedestrians, cyclists, and car occupants. Results: Casualty rates in London declined each year between 2001 and 2006 by an average of 8.8% (95% CI 8.5% to 9.0%). After adjustment for area-level deprivation, there was good evidence that the average annual reduction in injury rates in car occupants was significantly less in Asian than in white adults (10.9% vs 14.4%, p<0.001). There was some evidence that average annual reductions in injury rates were lower in black than in white adult pedestrians (7.4% vs 9.3%, p = 0.041) and car occupants (13.2% vs 14.4%, p = 0.03). Conclusions: Casualty rates in London have declined for pedestrians, cyclists, and car occupants in three broad ethnic groups. Asian car drivers appear to have benefited least from these reductions. abstract_id: PUBMED:21081310 The impact of 20 mph traffic speed zones on inequalities in road casualties in London. Background: Road traffic casualties show some of the widest socioeconomic differentials of any cause of morbidity or mortality, and as yet there is little evidence on what works to reduce them. This study quantified the current and potential future impact of the introduction of 20 mph zones on socioeconomic inequalities in road casualties in London. Methods: An observational study based on analysis of geographically coded police road casualty data, 1987-2006. Changes in counts of casualties from road collisions, those killed and seriously injured, and pedestrian injuries by quintile of deprivation were calculated. Results: The effect of 20 mph zones was similar across quintiles of socioeconomic deprivation, being associated with a 41.8% (95% CI 21.0% to 62.6%) decline in casualties in areas in the least deprived quintile versus 38.3% (31.5% to 45.0%) in the most deprived quintile. Because of the greater number of road casualties in deprived areas and the targeting of zones to such areas, the number of casualties prevented by zones was substantially larger in areas of greater socioeconomic deprivation. However, the underlying decline in road casualties on all roads was appreciably greater in less deprived areas (p<0.001 for trend), so that despite the targeting of 20 mph zones, socioeconomic inequalities in road injuries in London have widened over time. Extending 20 mph schemes has only limited potential to reduce differentials further. Conclusions: The implementation of 20 mph zones targeted at deprived areas has mitigated widening socioeconomic differentials in road injury in London and to some degree narrowed them, but there is limited potential for further gain. abstract_id: PUBMED:22109387 Engaging the Somali community in the road safety agenda: a process evaluation from the London borough of Hounslow. In the UK the most disadvantaged in society are more likely than the more affluent to be injured or killed in a road traffic collision, and therefore it is a major cause of health inequality. There is a strong link between ethnicity, deprivation and injury.
Whilst national road traffic injury data do not record ethnic origin, the London accident analysis group does, in terms of broad categories such as 'white', 'black' and 'Asian'. Analysis of these data revealed an over-representation of child pedestrian casualties of 'black' ethnic origin. This information led road safety practitioners in one London borough to map child pedestrian casualties against census data, which identified the Somali community as being particularly at risk of being involved in a road traffic collision. Working with the community, they sought to discuss and address road safety issues and introduced practical evidence-based approaches such as child pedestrian training. The process evaluation of the project used a qualitative approach and showed that engaging with community partners and working across organisational boundaries was a useful strategy to gain an understanding of the Somali community. A bottom-up approach provided the community with a sense of control and involvement, which appears to add value in terms of reducing the sense of powerlessness that marginalised communities often feel. In terms of evaluation, small projects like these lend themselves to a qualitative process evaluation, though it has to be accepted that the strength of this evidence may be regarded as weak. Where possible, routine injury data need to take ethnicity into account, as it is a known risk factor for road casualty involvement that needs to be continually monitored. abstract_id: PUBMED:26890575 Road traffic mortality in the Slovak Republic in 1996-2014. Objective: Road traffic mortality takes an enormous toll in every society. Transport safety interventions play a crucial role in improving the situation. In the period 1996-2014 several road safety measures, including a comprehensive new road traffic law in 2009, were implemented in the Slovak Republic, introducing stricter conditions for road users. The aim of this study is to describe and analyze the trends in road user mortality in the Slovak Republic in individual age groups by sex during the study period 1996-2014. Methods: Data on overall mortality in the Slovak Republic for the period 1996-2014 were obtained from the Statistical Office of the Slovak Republic. Mortality rates were age-adjusted to the European standard population. Joinpoint regression was used to assess the statistical significance of changes in the time trends of the calculated standardized mortality rates. Results: Mortality rates of all types of road users, in all age groups and both sexes, decreased in the Slovak Republic over the period 1996-2014. The male : female ratio decreased from 4:1 in 1996 to 2:1 in 2014. Motor vehicle users (other than motorcyclists) and pedestrians have the highest mortality rates among road user groups. Both of these groups show a significant decline in mortality rates over the study period. Within the age groups, people aged 65 years and over have the highest mortality rates, followed by the age groups 25-64 and 15-24 years old. Joinpoint regression confirmed a steady, significant decline in all mortality rates over the study period. A statistically significant decrease in mortality rates in the last years of the study period was observed in the age group 25-64 and in male motorcycle users.
Assessing the impact of the 2009 road traffic law, a drop was observed in the average standardized mortality rate of all road traffic users from 14.56 per 100,000 person years in the period 1996-2008 to 7.69 per 100,000 person years in the period 2009-2014. A similar drop in the average standardized mortality rate was observed in all individual road user groups. Conclusions: The implementation of the new traffic regulations may have contributed significantly to the observed decrease in mortality rates of road users in the Slovak Republic. A significant decrease in mortality was observed in all population groups and in all groups of road users. The introduction of a new comprehensive road traffic law may have expedited the decrease in road fatalities, especially in the age group 25-64 years old. This type of evidence-based epidemiological data can be used for improved targeting of future public health measures for road traffic injury prevention. abstract_id: PUBMED:33309475 Influence of road types on road traffic accidents in northern Guizhou Province, China. Purpose: The increasing number of deaths due to road traffic accidents (RTAs) has attracted global attention. However, the influence of road types is rarely considered in the study of RTAs. This study evaluates the influence of different road types on RTAs in northern Guizhou to provide a basis for the formulation of evidence-based policies and measures. Methods: We obtained the data from the Zunyi Traffic Management Data Platform for the years 2009-2018. The mortality rates of RTAs were calculated. Descriptive methods and Chi-square tests were used to analyze the characteristics of road traffic collisions on different road types. We also examined the associations between the mortality rate per 10,000 vehicles and the growth of per capita gross domestic product (GDP) with Spearman's rank correlation analysis. According to passing volume and infrastructure, we defined different types of roads, such as administrative roads, functional roads, general urban roads and urban expressways. Results: In 2012, the traffic mortality rate of administrative roads was 8.9 per 100,000 people, and the mortality rate of functional roads was 7.4 per 100,000 people; these decreased in 2018 to 6.1 deaths per 100,000 people and 5.2 deaths per 100,000 people, respectively. The mortality rate per 10,000 vehicles reached its highest level in 2011 (28.8 per 10,000 vehicles and 22.5 per 10,000 vehicles on administrative and functional roads, respectively). The death rate of county roads was the highest among administrative roads (χ2 = 17.389, p < 0.05) and that of fourth-class roads was the highest among functional roads (χ2 = 21.785, p < 0.05). The mortality rate per 10,000 vehicles was negatively correlated with per capita GDP. Conclusion: Although our research shows that RTAs in northern Guizhou have steadily declined in recent years, the range of decline is relatively small. Many measures and sustained efforts are needed to control road traffic deaths and accelerate progress in road traffic safety in northern Guizhou. abstract_id: PUBMED:20007666 Effect of 20 mph traffic speed zones on road injuries in London, 1986-2006: controlled interrupted time series analysis. Objective: To quantify the effect of the introduction of 20 mph (32 km/h) traffic speed zones on road collisions, injuries, and fatalities in London. Design: Observational study based on analysis of geographically coded police data on road casualties, 1986-2006.
Analyses were made of longitudinal changes in counts of road injuries within each of 119,029 road segments with at least one casualty, using conditional fixed-effects Poisson models. Estimates of the effect of introducing 20 mph zones on casualties within those zones and in adjacent areas were adjusted for the underlying downward trend in traffic casualties. Setting: London. Main Outcome Measures: All casualties from road collisions; those killed and seriously injured (KSI). Results: The introduction of 20 mph zones was associated with a 41.9% (95% confidence interval 36.0% to 47.8%) reduction in road casualties, after adjustment for underlying time trends. The percentage reduction was greatest in younger children and greater for the category of killed or seriously injured casualties than for minor injuries. There was no evidence of casualty migration to areas adjacent to 20 mph zones, where casualties also fell slightly, by an average of 8.0% (4.4% to 11.5%). Conclusions: 20 mph zones are effective measures for reducing road injuries and deaths. abstract_id: PUBMED:37885752 The temporal trend of road traffic mortality in China from 2004 to 2020. Background: Road traffic accidents are one of the critical disasters that cause massive harm to the economy and to people's property, health and safety. The purpose of this study is to analyze the temporal trend of road traffic mortality in China from 2004 to 2020, to further analyze the key factors influencing changes in China's road traffic mortality, and to provide information for the development of targeted interventions to reduce the number of preventable deaths. Methods: The data were retrieved from the Chinese Death Cause Monitoring Data Set from 2004 to 2020. The road traffic mortality rates were standardized using the Sixth National Population Census (6th NPC) data. The Joinpoint regression method was used to analyze temporal trends in standardized road traffic mortality. The study used the annual percentage change (APC) and average annual percentage change (AAPC) calculated by the Joinpoint regression model to describe trends in road traffic mortality rates. Results: The overall age-adjusted road traffic mortality in China showed a declining trend from 2004 to 2020 (AAPC 2004-2020 = -4.2%), from 20.9 to 12.92 per 100,000. Rural road traffic mortality rates were generally higher than those in urban areas, and rates in males were generally higher than in females. There was an overall downward trend in standardized road traffic mortality in the East and Central regions between 2004 and 2020. It is worth noting that road traffic mortality rates in the Western region showed an upward trend from 2006 to 2011 (APC 2006-2011 = 3.3%) and continued to decline after 2011 (APC 2011-2020 = -6.7%). Road traffic mortality rates were highest among people aged 65 years and older, which requires focused attention. Conclusions: From 2004 to 2020, the road traffic mortality rates in China generally declined. At the same time, there was a slow reduction or even an upward trend in road traffic mortality rates among the elderly and in western regions. Rural males are a priority group for road traffic injury prevention. abstract_id: PUBMED:37947230 Road traffic injuries and associated mortality in the Islamic Republic of Iran. Background: Road traffic accidents are a major public health problem globally, causing millions of injuries, deaths and disabilities, and a huge loss of financial resources, especially in low- and middle-income countries.
Aim: To determine the incidence of road traffic injuries and associated mortality from 1997 to 2020 in the Islamic Republic of Iran. Methods: This retrospective study used data from the Legal Medicine Organization of the Islamic Republic of Iran to estimate the annual rates of road traffic injuries and associated mortality from 21 March 1997 to 20 March 2020. The data were analysed using STATA version 14 and the annual rates are reported per 100,000 population. Results: During the study period, 5,760,835 road traffic injuries and 472,193 deaths were recorded in the Islamic Republic of Iran. The mortality rate increased from 22.4 per 100,000 in 1997 to 40 per 100,000 in 2005 and decreased to 18.4 per 100,000 in 2020. The injury rate increased from 111.1 per 100,000 in 1997 to 394.9 per 100,000 in 2005. It decreased in 2006 and 2007, increased again until 2010, and finally reached 331.8 per 100,000 in 2020. The male to female ratio for road traffic mortality was 3.9 in 1997 and 4.6 in 2020. The case fatality rate was highest (20.1%) in 1997 and decreased to 5.6% in 2020. Conclusion: Continuous interventions are needed to reduce the burden of road traffic injuries and associated mortality in the Islamic Republic of Iran. abstract_id: PUBMED:35821170 Estimating the health burden of road traffic injuries in Malawi using an individual-based model. Background: Road traffic injuries are a significant cause of death and disability globally. However, in some countries the exact health burden caused by road traffic injuries is unknown. In Malawi, there is no central reporting mechanism for road traffic injuries, and so the exact extent of the health burden they cause is hard to determine. A limited number of models predict the incidence of mortality due to road traffic injury in Malawi. These estimates vary greatly, owing to differences in assumptions, and so the health burden imposed on the population by road traffic injuries remains unclear. Methods: We use an individual-based model and combine an epidemiological model of road traffic injuries with a health-seeking behaviour and health system model. We provide a detailed representation of road traffic injuries in Malawi, from the onset of the injury through to the final health outcome. We also investigate the effects of an assumption made by other models that multiple injuries do not contribute to the health burden caused by road accidents. Results: Our model estimates an overall average incidence of mortality between 23.5 and 29.8 per 100,000 person years due to road traffic injuries and an average of 180,000 to 225,000 disability-adjusted life years (DALYs) per year between 2010 and 2020 in an estimated average population size of 1,364,000 over the 10-year period. Our estimated incidence of mortality falls within the range of other estimates currently available for Malawi, whereas our estimated number of DALYs is greater than the only other estimate available for Malawi, the GBD estimate, which predicts an average of 126,200 DALYs per year over the same time period. Our estimates, which account for multiple injuries, predict a 22-58% increase in overall health burden compared to the model run as a single-injury model. Conclusions: Road traffic injuries are difficult to model with conventional modelling methods, owing to the numerous types of injuries that occur. Using an individual-based model framework, we can provide a detailed representation of road traffic injuries.
Our results indicate a higher health burden caused by road traffic injuries than previously estimated. abstract_id: PUBMED:31847989 Road traffic accident severity analysis: A census-based study in China. Background: In China, despite the decrease in average road traffic fatalities per capita, the fatality rate and injury rate kept increasing until 2015. Purpose: This study aims to analyze road traffic accident severity in China from a macro viewpoint and from various aspects, and to illuminate several key causal factors. From these analyses, we propose possible countermeasures to reduce accident severity. Method: The severity of traffic accidents is measured by human damage (HD) and case fatality rate (CFR). Different categorizations of national road traffic census data are analyzed to evaluate the severity of different types of accidents and, further, to demonstrate the key factors that contribute to the increase in accident severity. Regional data from selected major municipalities and provinces are also compared with national traffic census data to verify data consistency. Results: From 2000 to 2016, the overall CFR and HD of road accidents in China increased by 19.0% and 63.7%, respectively. In 2016, the CFR of freight vehicles was 33.5% higher than average, and late-night accidents were more fatal than those occurring at other times. The speeding problem has become markedly worse: in 2000, its CFR was only 5.3% higher than average, whereas by 2016 the figure was 42.0%. Conclusion and practical implementation: A growing trend in accident severity was found, contrasting with the decline in road traffic accidents. From the analysis of causal factors, it was confirmed that the way impact energy is released and the protection worn by victims are key variables contributing to the severity of road traffic accidents. Answer: Yes, the decline in road traffic injury in London does vary by ethnicity. An analysis of STATS19 data comparing trends in road traffic casualty rates by ethnic group in London from 2001 to 2006 showed that
Instruction: Is applying the same exercise-based inpatient program to normal and reduced left ventricular function patients the best strategy after coronary surgery? Abstracts: abstract_id: PUBMED:23651129 Is applying the same exercise-based inpatient program to normal and reduced left ventricular function patients the best strategy after coronary surgery? A focus on autonomic cardiac response. Purpose: To assess whether the same exercise-based inpatient program applied to patients with normal and reduced left ventricular function (LVF) evokes a similar cardiac autonomic response after coronary artery bypass graft (CABG). Method: Forty-four patients post-CABG, subgrouped according to normal LVF [LVFN: n = 23; left ventricular ejection fraction (LVEF) ≥ 55%] and reduced LVF (LVFR: n = 21; LVEF 35-54%), were included. All initiated the exercise protocol on post-operative day 1 (PO1), following a whole progressive program until discharge. Cardiac autonomic response was assessed by the indices of heart rate variability (HRV) at rest and during exercise (extremity range of motion and ambulation). Results: During ambulation, lower values of HRV indices were found in the LVFR group compared with the LVFN group [standard deviation of all RR (STDRR; 6.1 ± 2.7 versus 8.9 ± 4.7 ms), baseline width of the RR histogram (TINN; 30.6 ± 14.8 versus 45.8 ± 24.9 ms), SD2 (14.8 ± 8.0 versus 21.3 ± 9.0 ms), Shannon entropy (3.6 ± 0.5 versus 3.9 ± 0.4) and correlation dimension (0.08 ± 0.2 versus 0.2 ± 0.2)]. Also, when comparing the ambulation to rest change, lower values were observed in the LVFR group for linear (STDRR, TINN, RR TRI, rMSSD) and non-linear (SD2 and correlation dimension) HRV indices (p < 0.05). On PO1, we observed only intra-group differences between rest and exercise (extremity range of motion), for mean intervals between heart beats and heart rate. Conclusion: For patients with LVFN, the same inpatient exercise protocol triggered a more attenuated autonomic response compared with patients with LVFR. These findings have implications as to how exercise should be prescribed according to LVF in the early stages following recovery from CABG. Implications for Rehabilitation Exercise-based inpatient program, performed by post-CABG patients who have normal left ventricular function, triggered a more attenuated cardiac autonomic response compared with patients with reduced left ventricular function. Volume of the inpatient exercises should be prescribed according to the left ventricular function in the early stages following recovery from CABG. abstract_id: PUBMED:9596323 A randomized comparison of exercise training in patients with normal vs reduced ventricular function. Background: Exercise training is recommended after myocardial infarction (MI) or bypass surgery in order to improve exercise tolerance. In some patients, the decrement in exercise capacity secondary to deconditioning and the left ventricular stunning associated with MI or coronary artery bypass graft (CABG) spontaneously improves after the event. However, the impact of the status of the left ventricle on these improvements is unknown. Methods: Sixty-seven patients 1 month after MI or CABG were randomized to a training (n=34; age, 59+/-7 years) or a control group (n=33; age, 55+/-6 years). Forty-two patients had an ejection fraction >50% (22 in the training group and 20 in the control group), and 25 patients had an ejection fraction <40% (12 in the exercise group and 13 in the control group).
After stabilization for approximately 1 month after the event, patients in the exercise group underwent 8 weeks of twice daily exercise at a residential rehabilitation center, while control patients received usual care. Initially and after 8 weeks, patients in both groups underwent maximal exercise testing with gas exchange and lactate analysis. Results: Exercise training increased peak oxygen consumption (VO2) only in the reduced ejection fraction group (19.4+/-3.0 to 23.9+/-4.8 mL/kg/min; p<0.05); the exercise group with normal ventricular function did not change significantly. Changes in VO2 at the lactate threshold paralleled those of peak VO2 for both groups. Conversely, control patients with normal ventricular function increased peak VO2 spontaneously (20.8+/-3.9 to 24.8+/-3.5 mL/kg/min; p<0.01), whereas control patients with reduced ventricular function did not improve peak VO2. Conclusion: These data suggest that patients with depressed left ventricular function strongly benefit from rehabilitation, whereas most patients with preserved left ventricular function following MI or CABG tend to improve spontaneously 1 to 3 months after the event. abstract_id: PUBMED:8279347 Effects of coronary angioplasty on left ventricular function. It is well known that myocardial revascularization after successful coronary bypass surgery results in improved left ventricular function. Coronary angioplasty also results in successful revascularization, favorably affecting both stunned and hibernating myocardium. We studied 22 patients with chronic stable angina who underwent successful angioplasty for an isolated narrowing of the proximal or midportion of the left anterior descending artery. These patients also performed isometric exercises before and after angioplasty, which can be used to characterize left ventricular function. Revascularization after angioplasty induced an immediate improvement in left ventricular function in those patients with dysfunction secondary to hibernating myocardium. Further studies are needed to assess the possibility of the myocardial stunning phenomenon occurring after angioplasty in those patients without left ventricular improvement. abstract_id: PUBMED:1772676 Silent left ventricular dysfunction during exercise after coronary artery bypass surgery. Serial changes in left ventricular function during exercise were assessed by radionuclide continuous ventricular function monitoring in 80 patients undergoing coronary artery bypass surgery before and after operation. This monitor records serial beat by beat radionuclide and electrocardiographic data and calculates the left ventricular ejection fraction every 20s. The profiles of ejection fractions during graded bicycle exercise were divided into 4 types. In type A, the ejection fraction continued to increase. In type B, the ejection fraction initially increased but decreased in late exercise. In type C, the ejection fraction did not change. In type D, the ejection fraction continued to decrease. Type A is considered to be the normal response to exercise and types B, C, and D are considered to be abnormal responses. Before operation, 8 patients showed type A, 21 type B, 13 type C, and 38 type D. After operation, 53 patients showed type A, 16 type B, 8 type C, and 3 type D. The mean ejection fraction decreased with exercise from 53% +/- 11% to 47% +/- 11% before surgery, but increased with exercise from 55% +/- 10% to 64% +/- 14% after surgery.
During postoperative exercise, no patient developed chest pain but 19 patients showed a decrease in the ejection fraction in early or late exercise. A decrease in the ejection fraction is an earlier indicator of myocardial ischaemia than angina or electrocardiographic changes. Silent left ventricular dysfunction during postoperative exercise was considered to reflect myocardial ischaemia resulting from occluded grafts, ungrafted coronary arteries, or inadequate perfusion by arterial grafts. The radionuclide continuous ventricular function monitor can detect silent left ventricular dysfunction after coronary artery bypass surgery. abstract_id: PUBMED:1940518 Cardiac function during exercise in patients with coronary bypass surgery assessed by continuous ventricular function monitoring The response of left ventricular function during exercise and recovery after exercise was assessed in 52 patients with coronary artery bypass surgery by means of a radionuclide continuous ventricular function monitor. This system consists of two radionuclide detectors, a recorder, and a computer. After the equilibration of 20 mCi technetium 99m-labeled autologous red blood cells into the intravascular space, the beat by beat radionuclide data were summed for 20-sec intervals to measure left ventricular ejection fraction (EF). Before surgery, the mean EF decreased with exercise from 51 +/- 9% to 45 +/- 11% (p less than 0.001). Cardiac response was divided into 4 types according to the profiles of the EFs during exercise. In 6 patients, EF continued to increase until maximal exercise (type A). In 10 patients, EF initially increased and then decreased in late exercise stages (type B). In 9 patients, EF did not change significantly during exercise (type C). In 27 patients, EF decreased throughout exercise (type D). After surgery, the mean EF increased with exercise from 53 +/- 10% to 60 +/- 13% (p less than 0.001). Thirty-five patients showed type A, 9 type B, 5 type C, and 3 type D. Two type D and 5 type B patients had occluded grafts or ungrafted coronary arteries. Four patients with complete revascularization including an internal thoracic artery and saphenous vein grafts showed type B. Three patients with extensive infarction and poor left ventricular function showed type C.(ABSTRACT TRUNCATED AT 250 WORDS)
The major goal of this work was to define the exact role of alpha adrenergic receptors for the regulation of cardiac function and coronary circulation during exercise. Left ventricular function, coronary circulation, myocardial metabolism and catecholamine release were analysed during standardized treadmill exercise in conscious chronically instrumented dogs, prior to and following the administration of different alpha receptor blockers with different affinities for the different subtypes of alpha receptors. Three series of experiments were conducted to clarify: 1. the role of postsynaptic alpha receptors for the regulation of coronary blood flow during exercise, 2. the role of presynaptic alpha receptors for the regulation of left ventricular function during exercise, and 3. the role of postsynaptic alpha receptors for the regulation of transmural myocardial blood flow distribution during exercise. Dogs were instrumented with a solid state pressure transducer in the left ventricular cavity, a Doppler flow probe around the left circumflex coronary artery, and silastic catheters in aorta, left atrium and coronary sinus. Two to three weeks after surgery the dogs were subjected to standard treadmill exercise. Simultaneous and continuous measurements were performed of heart rate, left ventricular systolic and end diastolic pressure, LV dP/dt/P, mean arterial pressure, late diastolic coronary vascular resistance, regional myocardial blood flow, myocardial oxygen consumption and plasma catecholamines. In the first series of experiments, coronary blood flow and myocardial oxygen consumption were analyzed in 15 conscious dogs at rest and during exercise prior to and following IV administration of phentolamine (1 mg/kg) or prazosin (0.5 mg/kg).(ABSTRACT TRUNCATED AT 400 WORDS) abstract_id: PUBMED:8492487 Coronary artery bypass surgery in patients with poor left ventricular function As the indications for coronary artery bypass grafting (CABG) among patients with poor left ventricular function remain incompletely defined, we undertook this study to assess the results of CABG surgery for a group of such patients. We retrospectively reviewed the clinical features and outcome for 7 patients with poor left ventricular function defined as an ejection fraction below 0.3 who underwent CABG. The mean age of the patients was 63 years (range, 38 to 78 years), and 4 were 70 years of age or older. All patients had a history of previous myocardial infarction, and all had left main trunk stenosis or left main equivalent stenosis. The patients underwent CABG with three to six distal anastomoses (mean, 4.3/patient); the internal thoracic artery could be used in all patients, for one or two distal anastomoses (mean, 1.6/patient). Combined coronary endarterectomy was performed in 2 patients. With the exception of one 78-year-old patient with renal failure who died during the early postoperative period, all patients showed marked postoperative improvement in cardiac function. Cardiac catheterization and exercise stress test studies revealed significant improvement in left ventricular function and exercise tolerance in these six patients. These results suggest that excellent results can be obtained with CABG surgery in patients with poor left ventricular function when the viability of the myocardium in the bypassed area has been confirmed.
abstract_id: PUBMED:1769041 Coronary surgery in patients with severely impaired left ventricular function Views on surgical treatment of patients with coronary disease and severely restricted left ventricular function are still controversial. In the present work the authors compare the results of direct reconstruction of the coronary arteries in 184 patients with satisfactory or medium restricted left ventricular function (SLV) with a group of 77 patients with severely restricted left ventricular function (RLV). Twenty-three clinical, angiographic, peroperative and postoperative indicators were compared. Patients with RLV more frequently had a myocardial infarction before operation and, on average, a more restricted functional capacity and a lower ejection fraction of the left ventricle than patients with SLV. The mortality during hospitalization was 1.1% in patients with SLV and 5.2% in patients with RLV. These results confirm that reconstruction of the coronary arteries is safe also in patients with a significantly restricted left ventricular function. An essential prerequisite of successful operation is perfect peroperative protection of the heart muscle by the method of continuous local cooling. abstract_id: PUBMED:1920998 Assessment of coronary bypass surgery by continuous ventricular function monitoring The sequential change of left ventricular function during exercise and recovery after exercise was assessed in 50 patients who had undergone coronary bypass surgery before and after the operation by means of a continuous ventricular function monitoring system (VEST). Cardiac response was divided into 4 types with respect to the profiles of the left ventricular ejection fraction during exercise. Type A continued to increase; type B initially increased but decreased in severe exercise stages; type C did not change during exercise; type D continued to decrease. Most patients showed type C or D before surgery but showed type A after surgery. Nine patients with occluded grafts or ungrafted coronary arteries showed type B, C or D. Two patients with extended infarction and poor left ventricular function showed type C after surgery. In the recovery period after exercise, the ejection fraction showed an overshoot. The mean ratio of peak ejection fraction during recovery to ejection fraction at rest increased from +62 +/- 12% before operation to +68 +/- 16% after operation (p less than 0.05). The recovery time after exercise was decreased from 195 sec before operation to 98 sec after operation (p less than 0.01). VEST revealed the response of left ventricular function during exercise and recovery after exercise in detail, including subtle abnormalities. abstract_id: PUBMED:3498383 Exercise radionuclide assessment of left ventricular function before and after coronary bypass surgery. The effects of elective saphenous vein coronary artery bypass surgery on left ventricular ejection fraction were assessed by using exercise first-pass radionuclide angiography in 66 consecutive patients. All patients with left main coronary artery or concomitant valvular disease were eliminated from the study. Before surgery, 7 patients had normal postexercise left ventricular function (Group 1), 33 had normal resting left ventricular function with an abnormal response to exercise (Group 2), and 26 had an abnormal resting left ventricular ejection fraction with an abnormal response to exercise (Group 3).
Following surgery, patients in all three groups had no change in mean resting left ventricular ejection fraction; however, patients in Groups 2 and 3 had significant improvement in mean postexercise left ventricular ejection fraction (p less than 0.0001 and p less than 0.0054 respectively), whereas patients in Group 1 did not. Previous studies reported improvement in postexercise ejection fraction in patients with reduced resting left ventricular function and with an ischemic response to exercise (Group 3). But this is the first study to confirm improvement in postexercise function in patients with normal resting function and an ischemic response to exercise (Group 2). Answer: Applying the same exercise-based inpatient program to patients with normal and reduced left ventricular function (LVF) after coronary surgery may not be the best strategy, as the cardiac autonomic response to exercise can differ between these two groups. A study assessing the cardiac autonomic response via heart rate variability (HRV) indices found that patients with reduced LVF (LVEF 35-54%) had lower HRV indices during ambulation compared to those with normal LVF (LVEF ≥ 55%), suggesting a more attenuated autonomic response in the latter group. This implies that the volume of inpatient exercises should be prescribed according to the LVF in the early stages following recovery from coronary artery bypass graft (CABG) surgery (PUBMED:23651129). Additionally, a randomized comparison of exercise training in patients with normal vs reduced ventricular function post-MI or CABG showed that exercise training increased peak oxygen consumption only in the reduced ejection fraction group, while the exercise group with normal ventricular function did not change significantly. Control patients with normal ventricular function increased peak oxygen consumption spontaneously, whereas control patients with reduced ventricular function did not improve peak oxygen consumption. This suggests that patients with depressed LVF benefit more from rehabilitation, whereas most patients with preserved LVF tend to improve spontaneously 1 to 3 months after the event (PUBMED:9596323). Therefore, while exercise is beneficial post-CABG, the approach and intensity may need to be tailored to the individual's LVF status to optimize recovery and outcomes.
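The first abstract above (PUBMED:23651129) compares groups on standard time-domain heart rate variability indices such as STDRR (the standard deviation of all RR intervals, often called SDNN) and rMSSD (the root mean square of successive RR differences). A minimal Python sketch of those two textbook definitions follows; the RR series is invented for illustration, and the study's other indices (TINN, SD2, Shannon entropy, correlation dimension) are not reproduced here.

    import math

    def hrv_time_domain(rr_ms):
        """Compute STDRR (SDNN) and rMSSD from RR intervals given in milliseconds."""
        n = len(rr_ms)
        mean_rr = sum(rr_ms) / n
        # STDRR: sample standard deviation of all RR intervals.
        stdrr = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
        # rMSSD: root mean square of successive RR differences.
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
        rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
        return stdrr, rmssd

    # Illustrative RR series only -- not patient data from the study.
    rr = [812, 798, 805, 820, 795, 810, 802, 818]
    stdrr, rmssd = hrv_time_domain(rr)
    print(f"STDRR = {stdrr:.1f} ms, rMSSD = {rmssd:.1f} ms")

Lower values of both indices during a given activity indicate reduced beat-to-beat variability, which is how the study quantified the difference in autonomic response between the two LVF groups.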
Instruction: New drugs in Brazil: do they meet Brazilian public health needs? Abstracts: abstract_id: PUBMED:18764993 New drugs in Brazil: do they meet Brazilian public health needs? Objectives: To describe the new drugs marketed in Brazil during the period 2000-2004, compare the description to the country's burden of disease, and suggest initiatives capable of addressing the situation from the perspective of a developing country. Methods: Records of new drugs were surveyed in an official drug registration database. The new drugs were categorized by Anatomical Therapeutic Chemical classification, indication, and innovation, and compared with the needs of the country's burden of disease. Data on the morbidity and mortality rates of selected diseases (diabetes, Hansen's disease, hypertension, tuberculosis) were retrieved from official documents and the literature. Results: During the period investigated, 109 new drugs were launched. Most were general anti-infectives for systemic use (19), followed by antineoplastic and immunomodulating agents (16). The number of new drugs launched in 2004 was roughly one-third that of 2000. Of 65 new drugs, only one-third can be classified as innovative. Most new drugs were intended to treat noninfectious diseases that typically affect developed countries, diseases that constitute only a fraction of the country's challenges. Conclusions: A mismatch occurs between public health needs and the new drugs launched on the Brazilian market. Not only did the number of new drugs decrease in the study period, but only a few were actually new in therapeutic terms. Developing countries must acquire expertise in research and development to strengthen their capacity to innovate and produce the drugs they need. abstract_id: PUBMED:11357904 Public housing, public health: health needs of public housing tenants. Objectives: To determine the health needs of public housing tenants, measured in terms of self-reported health status, health risk factors and expressed need for health risk reduction intervention. Method: Face-to-face interviews were conducted with a randomly selected sample of public housing tenants in the Hunter Region of New South Wales. Results: Of 463 contactable tenants, 329 consented to participate in the study. Participants were 2.5 times more likely to rate their health as fair or poor relative to the community generally, and visited a doctor twice as often. The prevalence of smoking was more than twice that of the community generally, and the prevalence of falls was approximately three times greater. Risk of injury due to domestic violence was approximately six times greater, and the risk of injury due to violence in other locations was more than double that in the community. Between a quarter and a half of the participants requested support to reduce their health risks. Conclusions: The findings suggest that public housing tenants are one of the more severely health-compromised groups in the Australian community. Implications: An urgent need exists for public health initiatives that are directed at improving not only the current markedly poorer health status of public housing tenants, but also the greater prevalence of health risk factors that predict a likely continuation of such differentials into the future. abstract_id: PUBMED:8688568 Public health services between "new public health" and "new public management" Today, a substantial reorientation of the Public Health services in the Federal Republic of Germany is broadly seen as necessary.
Patterns of functional and organisational restructuring of Public Health services on the regional and the communal level are closely linked with concepts of prevention and health promotion. Hence, a number of agencies have already adopted new tasks and functions like comprehensive and transorganizational planning, coordination and evaluation as well as the establishment of new reporting systems. Presently, the transformation process from the bureaucratic mode of administering matters of health to a new Public Health orientation has received new impetus from the international "New Public Management" movement. Comparatively late, with the beginning of the 1990s, a growing number of German municipalities have introduced new concepts of administration. Local government administrations, of which the Public Health services are a part, follow the model of modern service organizations producing services in a more efficient, professionalized and consumer-oriented way. Specific elements of economising modernisation programmes like re-distribution of tasks, de-centralisation, extension of managerial capacities, setting of stimulating working conditions that provide employees with greater independence of action as well as career opportunities, are at the same time prerequisites for innovative strategies of health protection and coordination policies of Public Health services. abstract_id: PUBMED:31348164 Strategies and Tools for Public Health Workforce Training Needs Assessments in Diverse and Changing Population Health Contexts. Context: A key component of the improvement of public health infrastructure in the United States revolves around public health workforce development and training. Workforce challenges faced by the public health system have long been recognized, but there are additional challenges facing any region-wide or cross-jurisdictional effort to accurately assess priority workforce training needs and develop training resources to address those needs. These challenges include structural variability of public health organizations; diverse population health contexts; capturing both topic-specific skill sets and foundational competencies among public health workers; and reaching/representing the target population despite suspicion, disinterest, and/or assessment "fatigue" among employees asked to participate in workforce development surveys. Objective: The purpose of this article is to describe the challenges, strategies to meet those challenges, and lessons learned in conducting public health workforce training needs assessments by academic and practice partners of the Region 2 Public Health Training Center (R2/PHTC). Setting And Participants: The R2/PHTC is hosted by the Mailman School of Public Health at Columbia University and serves New York, New Jersey, Puerto Rico, and the US Virgin Islands within its jurisdiction. Results: Strategies for responding to diverse organizational structures and population health contexts across the region; defining training priorities that address both foundational competencies for public health professionals and content-specific training to address local public health needs; reaching/representing target populations of public health workers; and analysis and report writing to encourage rapid response to identified needs and comprehensive workforce development planning are discussed. Lessons learned are likely instructive to other workforce training needs assessments in complex and ever-changing public health environments.
abstract_id: PUBMED:29172220 Strengthening Public Health by Strengthening Public Health Research - Results of the Working Group 7 of the Forum Future Public Health, Berlin 2016 Public health research is application-oriented; its great potential lies in the interlinking of science and practice, as well as its interdisciplinarity and methodological competence. Public health research is focused on the development and evaluation of measures to protect and improve the health of the population. In doing so, it must make its contribution to tackling the new challenges posed by demographic change, digitalization, new forms of communication and medical progress. By pooling their forces, public health scientists should actively contribute to policy advice and promote adequate research funding. abstract_id: PUBMED:29973179 "Taking away the chaos": a health needs assessment for people who inject drugs in public places in Glasgow, Scotland. Background: Public injecting of recreational drugs has been documented in a number of cities worldwide and was a key risk factor in an HIV outbreak in Glasgow, Scotland during 2015. We investigated the characteristics and health needs of people involved in this practice and explored stakeholder attitudes to new harm reduction interventions. Methods: We used a tripartite health needs assessment framework, comprising epidemiological, comparative, and corporate approaches. We undertook an analysis of local and national secondary data sources on drug use; a series of rapid literature reviews; and an engagement exercise with people currently injecting in public places, people in recovery from injecting drug use, and staff from relevant health and social services. Results: Between 400 and 500 individuals are estimated to regularly inject in public places in Glasgow city centre: most experience a combination of profound social vulnerabilities. Priority health needs comprise addictions care; prevention and treatment of blood-borne viruses; other injecting-related infections and injuries; and overdose and drug-related death. Among people with lived experience and staff from relevant health and social care services, there was widespread - though not unanimous - support for the introduction of safer injecting facilities and heroin-assisted treatment services. Conclusions: The environment and context in which drug consumption occurs is a key determinant of harm, and is inextricably linked to upstream social factors. Public injecting therefore requires a multifaceted response. Though evidence-based interventions exist, their implementation internationally is variable: understanding the attitudes of key stakeholders provides important insights into local facilitators and barriers. Following this study, Glasgow plans to establish the world's first co-located safer injecting facility and heroin-assisted treatment service. abstract_id: PUBMED:7597130 Needs assessment in Scotland: collaboration in public health. The National Health Service reforms revitalized the national emphasis on the public's health and health needs assessment. In Scotland, in contrast with England, there was no central investment in the development of methods and programmes for needs assessment in the context of the new NHS. To achieve a concerted effort, therefore, a self-help network--the Scottish Needs Assessment Programme (SNAP)--was created by the Scottish Forum for Public Health Medicine.
This paper describes its evolution to the point where it is now supported as part of a national network of information for purchasing. abstract_id: PUBMED:31141027 Added therapeutic value of new drugs approved in Brazil from 2004 to 2016. This study aimed to assess the level of therapeutic innovation of new drugs approved in Brazil over 13 years and whether they met public health needs. Comparative descriptive analysis of therapeutic value assessments performed by the Brazilian Chamber of Drug Market Regulation (CMED) and the French drug bulletin Prescrire for new drugs licensed in Brazil, from January 1st 2004 to December 31st 2016. The extent to which new drugs met public health needs was examined by: checking inclusions into government-funded drug lists and/or clinical guidelines; comparing Anatomical Therapeutic Chemical Classification (ATC) codes and drug indications with the list of conditions contributing the most to the national disease burden; and assessing new medicines aimed to treat neglected diseases. In total, 253 new drugs were approved. Antineoplastics, immunosuppressants, antidiabetics and antivirals were the most frequent. Thirty-three (14%) out of 236 drugs assessed by the Brazilian chamber and sixteen (8.2%) out of 195 assessed by the French bulletin Prescrire were considered innovative. Thirty-six drugs (14.2%) were selected for coverage by the Brazilian Unified National Health System (SUS), seven of which were therapeutically innovative, and none were aimed to treat neglected diseases. About 1/3 of the drugs approved aimed to treat conditions among the top contributors to Brazil's disease burden. Few therapeutically innovative drugs entered the Brazilian market, from which only a small proportion was approved to be covered by the SUS. Our findings suggest a divergence between public health needs, research & development (R&D) and drug licensing procedures. abstract_id: PUBMED:26157789 Assessing the Education and Training Needs of Nebraska's Public Health Workforce. Introduction: In 2012, the Great Plains Public Health Training Center (Grant #UB6HP22821) conducted an online survey of state and local health departments and the American Indian (tribal clinics, tribal health departments, and urban Indian clinic) public health workforce across three professional levels. The objectives of the needs assessment were to determine the competency levels of the state's public health workforce, assess gaps in public health competencies, identify public health training interests, needs, and preferences, and to determine the barriers and motivators to participate in public health training. Methods: The assessment was developed using the Council on Linkages Between Academia and Public Health Practice, Core Competencies for Public Health Professionals survey (1). The final assessment was created and piloted by numerous individuals representing practice and academia. Results: Respondents identified cultural competency and communication skills as the two most important public health competency domains. Although the public health professionals perceived that they were least proficient in the area of policy development and program planning, participants identified the greatest needs for training in financial planning and management skills and analytical/assessment skills. In general, respondents preferred instructor-led interactive training sessions offered as onsite multi-day workshops or computer-based courses.
Respondents identified obesity, health disparities, physical activity, chronic diseases, and diabetes as the top five public health topical areas. Conclusion: These priorities align with State and National public health plans. The findings of the needs assessment were used to tailor educational opportunities to build the capacity of Nebraska's public health system. Additionally, the results were used to develop workforce development plans for numerous local health departments throughout Nebraska. abstract_id: PUBMED:17666421 Health needs and public health functions addressed in French public health journals. Background: Our aim, as part of the collaborative study SPHERE (Strengthening Public Health Research in Europe), was to describe public health publications in the French language according to health needs and public health functions. Method: All articles published in six French public health journals, and one French/English language Canadian journal, over the period 1995-2004, were retrieved from three electronic databases. Original research articles were indexed by hand according to one main domain of health need, based on Global Burden of Disease categories, and into one of four public health functions. Results: After removing duplicates, 3984 original research articles were identified. Only 51% could be allocated to a health needs code. Of these, 71% were about non-communicable diseases, 25% communicable, maternal and perinatal conditions and 5% injuries. This compared only moderately with the global burden of disease for France (84, 5 and 11%, respectively). The other articles addressed health determinants, such as behavioural or environmental exposures, or a methodological issue. Ninety-two percent of the articles could be assigned a public health function code. Health monitoring and health services research accounted for 80% of references from French journals. Only 9% of articles from French journals were related to prevention, which was lower than that in the Canadian journal (17%). Only 1% of articles dealt with legislation. Conclusion: The distribution of articles in French public health journals broadly follows the distribution of health needs. History and data availability may explain the extra research focus on communicable diseases and maternal and child health research. Injuries, and prevention, are topics which appear to be under-represented in French language journals. Answer: The research indicates that there is a mismatch between the new drugs launched in Brazil and the country's public health needs. During the period from 2000 to 2004, 109 new drugs were launched in Brazil, with most being general anti-infectives for systemic use and antineoplastic and immunomodulating agents. However, the majority of these new drugs were intended to treat noninfectious diseases that are more typical of developed countries, rather than the infectious diseases that pose a significant challenge in Brazil. Furthermore, only one-third of the 65 new drugs could be classified as innovative, and the number of new drugs launched in 2004 was roughly one-third that of 2000 (PUBMED:18764993). Additionally, a study assessing the therapeutic innovation of new drugs approved in Brazil from 2004 to 2016 found that few therapeutically innovative drugs entered the Brazilian market, and only a small proportion of these were approved to be covered by the Brazilian Unified National Health System (SUS). 
About one-third of the drugs approved aimed to treat conditions among the top contributors to Brazil's disease burden, but none were aimed to treat neglected diseases. This suggests a divergence between public health needs, research & development (R&D), and drug licensing procedures (PUBMED:31141027). In conclusion, the evidence suggests that the new drugs launched in Brazil do not adequately meet the Brazilian public health needs, particularly in terms of addressing the country's burden of infectious and neglected diseases. There is a need for a greater focus on developing and approving drugs that are innovative and relevant to the health challenges faced by the Brazilian population.
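The innovation and coverage percentages quoted above from PUBMED:31141027 follow directly from the reported counts, as a quick arithmetic check confirms:

\[
\frac{33}{236} \approx 14.0\%, \qquad \frac{16}{195} \approx 8.2\%, \qquad \frac{36}{253} \approx 14.2\% .
\]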
Instruction: Is the use of arch bars or interdental wire fixation necessary for successful outcomes in the open reduction and internal fixation of mandibular angle fractures? Abstracts: abstract_id: PUBMED:18848111 Is the use of arch bars or interdental wire fixation necessary for successful outcomes in the open reduction and internal fixation of mandibular angle fractures? Purpose: The purpose of this retrospective cohort study was to analyze the complications associated with a series of mandibular angle fractures treated by open reduction and internal fixation and to determine if the method of intraoperative maxillomandibular fixation (MMF) affected patient outcome. Patients And Methods: The records of 162 consecutive patients with isolated mandibular fractures that were treated by the senior author (R.B.B.) with open reduction and internal fixation were retrospectively reviewed and a number of clinical variables were recorded. Of these, all patients with fractures involving the mandibular angle, alone or in combination with other mandibular fractures, were identified. Only patients in the permanent dentition with angle fractures treated with a single 2.0 mm titanium plate placed at the superior border using standard Champy technique were included in the study. Patients with less than 6 weeks follow-up, concomitant midface fractures, edentulous patients, patients with comminuted fractures or gunshot wounds, and those patients presenting with infected fractures were excluded from the primary study group, which totaled 75 patients with 83 angle fractures. Postoperative complications, including infection, malunion/nonunion, wound dehiscence, osteomyelitis, pain, and the need for secondary operative intervention, were tabulated. For purposes of comparison, patients were divided into 3 groups based upon the type of intraoperative MMF utilized: group 1, Erich arch bars (n = 24); group 2, 24 gauge interdental "Stout" wires (n = 25); and group 3, manual reduction alone (n = 26). Outcome measures were defined as successful bone healing, acceptable occlusion, minor complications, and major complications. Descriptive statistics were recorded and an analysis of variance was calculated to evaluate differences between the 3 groups. The Fisher's exact test was used to evaluate whether a complication occurred more frequently in any one particular group. Results: The mean age of the 75 patients included in the study was 28.2 years (M = 63, F = 12) and there were no significant demographic differences between the 3 groups (P = 0.22). All patients eventually achieved successful bony union with an acceptable occlusion. Thirty-two percent of patients in the cohort required a second procedure, usually outpatient removal of loose or symptomatic hardware under local anesthesia or intravenous sedation, but there was no difference in re-operation rate based upon the method of intraoperative fixation (P = .47). Major complications occurred in 2 patients that required secondary operations due to malunion and nonunion (2.7%). Twenty-two minor complications occurred in 16 patients (21.3%) and were evenly distributed amongst the 3 groups (P = .074), including infection (n = 4), wound dehiscence (n = 1), and/or symptomatic hardware (n = 16) that required hardware removal. All of the minor complications were treated in an outpatient setting under local anesthesia or under intravenous sedation. 
When the complications were pooled together, the Fisher exact test again yielded no difference in complications between the 3 groups (P = .33). Conclusion: The use of Erich arch bars or interdental wire fixation to assist with MMF during the open reduction and internal fixation of noncomminuted mandibular angle fractures treated in Champy fashion is not always necessary for successful outcome. abstract_id: PUBMED:24007800 Intermaxillary intraoperative fixation of mandibular fractures: arch bars or fixation screws? Introduction: The gold standard management of mandibular fractures is open reduction and osteosynthesis associated with intermaxillary fixation. Over the past 20 years, the use of intermaxillary fixation screws has considerably reduced the number of intermaxillary fixations performed with arch bars. The aim of our review was to identify current indications and contraindications of each technique. Intermaxillary Fixation Techniques: We present a short history and compare the two techniques with recently published assets, drawbacks, and complications. Discussion: The indications of intermaxillary fixation screws are uni- or bifocal fractures without or with minimal displacement. Their use is contraindicated in any other type of fracture, which should still be treated with arch bar fixation. abstract_id: PUBMED:29892326 Comparative Evaluation of the Embrasure Wire versus Arch Bar Maxillomandibular Fixation in the Management of Mandibular Fractures: Are Arch Bars Replaceable? The aim of this prospective study was to appraise the role of embrasure wiring in the treatment of mandibular fractures over the arch bar as adjunctive techniques of maxillomandibular fixation (MMF). This study was conducted on 40 patients who were surgically treated for mandibular fractures with accessory use of MMF (embrasure: group A vs. arch bars: group B). All patients were evaluated for demographic data, etiology, and location of fracture. Characteristically, the complications, including wire injury, infection, and malocclusion, were recorded. The data were analyzed using Student's t-test and chi-square test as appropriate. Statistical significance was set at p < 0.05. In this study, data from 40 patients were included. In group A (embrasure wiring), the time required for placement of MMF was significantly less (7.85 ± 0.81 minutes) than that in group B, and there was also a lower incidence of wire prick to the operator in group A than in group B (p < 0.05). However, in terms of wire prick and malocclusion, no statistically significant difference was noted between groups A and B (p > 0.05). Patients treated with embrasure wiring intermaxillary fixation had better outcomes, especially in terms of time of placement and a lower incidence of wire prick injury, when compared with the arch bar. abstract_id: PUBMED:32467805 Management of Gunshot Injuries of Mandible with Open Reduction and Internal Fixation versus Closed Reduction and Maxillo-mandibular Fixation. Background/objectives Gunshot injuries are known to cause severe morbidity and mortality when facial regions are involved. Management of the gunshot wounds of the face comprises securing an airway, controlling hemorrhage, identifying other injuries and definitive repair of the traumatic facial deformities. The objective of the present study was to compare the clinical outcome (infection and nonunion) of open reduction and internal fixation versus closed reduction and maxillo-mandibular fixation (CR-MMF) in the treatment of gunshot injuries of the mandible.
Materials &amp; methods This study was conducted at Oral and Maxillofacial Surgery Department of Shaheed Zulfiqar Ali Bhutto Medical University/Pakistan Institute of Medical Sciences Islamabad, Pakistan. Ninety gunshot mandibular fractures were randomly allocated in two equal groups. In group-A, 45 patients were treated by open reduction and internal fixation while in group-B, 45 patients were also managed by closed reduction and maxillo-mandibular fixation. Post-operative complications (infection, non-union) were evaluated clinically and radiographically in both groups. Results Patients treated by open reduction and internal fixation were having more complications in terms of infection (17.8%) as compared to closed reduction (4.4%) with a p-value 0.044. Whereas non-union was more in closed reduction (15.6%) as compared to open reduction and internal fixation group (2.2%) with a significant p-value 0.026. Conclusion Both the treatment modalities can be used in the management of gunshot injuries of mandible and there is need for further studies to have clear guideline in this regard in best interest of patients, community and health care providers. abstract_id: PUBMED:30900664 Evaluation of postoperative complications of open reduction and internal fixation in the management of mandibular fractures: A retrospective study. Background: Mandible is one of the most prominent bones of the facial skeleton leading to high prevalence of fracture during facial trauma. The management of mandibular fractures has evolved from intermaxillary fixation (IMF) to a combination of IMF and wire osteosynthesis, lag screw, and plate fixation. Despite the evolution in techniques, no technique including open reduction and internal fixation (ORIF) has proven to be completely devoid of complications. However, ORIF has many advantages and has resulted in better outcome when compared with the closed reduction methods. Aim: The aim of this retrospective study is to assess the incidence of postoperative complications in 98 patients treated with ORIF for mandibular fractures. Materials And Methods: A sample size of 98 patients with mandibular fractures reporting to Saveetha Dental College and Hospital, Chennai, from January 2014 to November 2016, treated by ORIF was included in the study and was split into three groups. The patient's records were analyzed for deriving the number and type of complications that occurred during the postoperative period of ORIF. Results: Patients with fractures in the condyle region reported with the highest number of complications when compared to the angle and parasymphysis regions. The most common complications that the patients reported were infection that led to plate removal and paresthesia. Others were wound dehiscence, occlusal disturbances, and facial nerve paralysis which were less common. Conclusion: The most common postoperative complication of the procedure in this retrospective study was found to be infection of the plates that are preventable with better aseptic techniques. abstract_id: PUBMED:38317388 Comparison of Clinical Efficacy of Screw-retained Arch Bar vs Conventional Erich's Arch Bar in Maxillomandibular Fixation: A Randomized Clinical Trial. Aim: This study aimed to compare the clinical outcomes of a conventional Erich's arch bar vs a modified screw-retained arch bar in maxillomandibular fixation of mandibular fracture. 
Materials And Methods: This parallel-arm randomized controlled trial included patients from the outpatient clinic with single favorable mandibular fractures indicated for closed reduction. They were subjected to maxillomandibular fixation using conventional Erich's arch bars in the control group and modified screw-retained arch bars in the study group. The outcome measures included operating time, glove perforations, postoperative pain, oral hygiene, fixation stability, occlusion, and mucosal coverage. Results: A total of 20 patients (12 males and 8 females) with a 1:1 allocation ratio were included. There was a significant statistical difference regarding operation time and number of glove perforations in favor of group B (p < 0.001 and p = 0.007, respectively). There was a significant statistical difference regarding pain after 1 day (p < 0.001) and 1 week (p < 0.001) in favor of group B, and at 4 weeks (p = 0.015) and 6 weeks (p = 0.002) in favor of group A. Regarding oral hygiene at 1 week (p = 0.021) and at 6 weeks (p < 0.001), there was a significant statistical difference in favor of group B. Regarding mucosal coverage at 6 weeks, there was a significant statistical difference in favor of group A (p = 0.005). Conclusion: The modified screw-retained arch bar can be considered an alternative to the conventional arch bar as it provided less application time and better operator safety. It also showed better patient satisfaction regarding pain and oral hygiene. Clinical Significance: Maxillomandibular fixation with the conventional technique was modified to a screw-retained arch bar, which is less time-consuming and provides better patient and operator satisfaction. How to cite this article: Elhadidi MH, Awad S, Elsheikh HAE, et al. Comparison of Clinical Efficacy of Screw-retained Arch Bar vs Conventional Erich's Arch Bar in Maxillomandibular Fixation: A Randomized Clinical Trial. J Contemp Dent Pract 2023;24(12):928-935. abstract_id: PUBMED:29123991 The use of Synthes MatrixWAVE bone anchored arch bars for closed treatment of multiple concurrent mandibular fractures. Background: Maxillomandibular fixation (MMF) is an important tool in the treatment of facial fractures and bone-anchored arch bar systems have emerged as treatment options. This study illustrates our experience with the Synthes MatrixWAVE MMF system and the novel application in closed treatment of patients with multiple concurrent mandibular fractures. Methods: We enrolled eight patients with concurrent mandibular fractures and treated them with the MatrixWAVE MMF system. The malleable arch bars were fitted and then secured using 6-8 mm screws. The arch bar was then either stretched or compressed to ensure optimal positioning. 24-gauge interdental fixation wiring was placed using the fish loop technique on either side of the fracture to achieve stabilization of the mandible fractures. Results: All eight patients were retained in MMF until stable occlusion was achieved with clinical evidence of healing, approximately 4-6 weeks. Post-operative Panorex scans did not reveal any evidence of damage to tooth roots. None of the screws became overly embedded in the mucosa secondary to overgrowth; the arch bars were successfully removed in an outpatient setting without local anesthesia. Conclusions: The Synthes MatrixWAVE MMF system has unique features, such as prominent screw anchor heads and horizontal malleability, that distinguish it from other bone-anchored arch bars.
These features allow for completely closed treatment of patients with multiple concurrent mandible fractures. abstract_id: PUBMED:35712409 Are Embrasure Wires an Effective and Reliable Method for Intraoperative Maxillomandibular Fixation in Mandibular Fractures? Purpose: Maxillomandibular fixation (MMF) is a basic and fundamental principle in the management of maxillofacial trauma patients. Some fractures require MMF only intraoperatively, during open reduction and internal fixation, but not in the postoperative period. The present study aimed to assess and compare the efficacy of embrasure wire with Erich arch bar as methods of intraoperative maxillomandibular fixation in the management of mandibular fractures. Materials And Methods: The prospective randomized study was undertaken in 30 patients who required intraoperative maxillomandibular fixation for mandibular fractures. Patients were randomly divided into two groups of 15 each (Erich arch bar in Group A and embrasure wire in Group B). The preoperative assessment included evaluation of demographic data, fracture location, mechanism of injury, degree of displacement of fracture and occlusion. Intraoperative parameters assessed were the time consumed for the application of the MMF technique, injury to the operator/assistant, injury to the patient, stability of the MMF technique, incidence of glove perforations and the cost of the MMF device. Results: The mean time required for MMF and the incidence of glove perforation were significantly (P < 0.001) less in the embrasure wire group than in the Erich arch bar group. The MMF technique maintained stable occlusion during open reduction and internal fixation in both groups. Conclusion: Embrasure wire is an effective, reliable alternative form of intraoperative MMF, as needle-stick injuries and placement time were lower than in the Erich arch bar group. However, Erich arch bar wiring is a versatile method and is recommended where postoperative maxillomandibular fixation is also required. abstract_id: PUBMED:30302602 Comparison of Erich arch bar versus embrasure wires for intraoperative intermaxillary fixation in mandibular fractures. Purpose: Intermaxillary fixation (IMF) is a fundamental principle in the management of mandibular fractures, but with the recent advent of open reduction and internal fixation (ORIF), the use of IMF is now largely limited to intraoperative application. Therefore, we compared the efficacy of Erich arch bar versus embrasure wires for intraoperative IMF in mandibular fractures. Method: This prospective study comprised 50 patients with mandibular fractures who required ORIF with intraoperative IMF. Patients were categorized into two groups of 25 patients each: the Erich arch bar technique was used for group A and the embrasure wire technique for group B. Parameters were time taken for IMF, needle stick injury, occlusal stability, iatrogenic complications, and periodontal status of patients. Statistical Analysis: Chi-squared test and unpaired t test analyses were run using IBM SPSS software, version 21.0 (2015). Result: Mean time for placing embrasure wire (3.48 min) was significantly less than that for Erich arch bar (48.08 min). Needle stick injury rates to the operator as well as the assistants were significantly less when using the embrasure wire than the Erich arch bar. The Erich arch bar had significantly superior postoperative occlusion stability. Iatrogenic injury was more common when placing the Erich arch bar than the embrasure wire.
Postoperative oral hygiene status was good in patients who received the embrasure wire. Conclusion: The embrasure wire technique is a quick, easy, and reliable technique for minimally or moderately displaced mandibular fractures and produced better clinical outcomes than the Erich arch bar technique. abstract_id: PUBMED:36587846 European multicenter prospective analysis of the use of maxillomandibular fixation for mandibular fractures treated with open reduction and internal fixation. Introduction: The goal of mandibular fracture treatment is to restore static and dynamic occlusal functions. Open reduction and internal fixation (ORIF) of these fractures can be associated with an intraoperative and/or postoperative maxillo-mandibular fixation (MMF). The aim of this study was to analyse the use of perioperative MMF and its effects on occlusal outcomes in the management of mandibular fractures. Material And Methods: This multicentric prospective study included adult patients with mandibular fractures treated with ORIF. The following data were collected: age, gender, pre-trauma dental status (dentate, partially dentate), cause of injury, fracture site, associated facial fractures, surgical approach, type of ORIF (rigid, non-rigid or mixed), thickness and number of plates, modality of intraoperative MMF (arch bars, self-tapping and self-drilling screws [STSDSs], manual, other) and duration of postoperative MMF. The primary outcome was malocclusion at 6 weeks and 3 months. Statistical analyses were performed with Fisher's exact test or chi-square test, as appropriate. Results: Between 1 May 2021 and 30 April 2022, 336 patients, 264 males and 72 females (median age, 28 years) with mandibular fractures (194 single, 124 double and 18 triple fractures) were hospitalized. Intraoperative MMF was performed in all patients. Osteosynthesis was rigid in 75% of single fractures, and rigid or mixed in 85% and 100% of double and triple fractures, respectively. Excluding patients who underwent manual reduction, postoperative MMF (median duration, 3 weeks) was performed in 140 (64%) patients, without differences by type or number of fractures (p > 0.05). No significant difference was found in the incidence of malocclusion in patients with postoperative MMF (5%, 95% confidence interval [CI], 2-10%) compared to those without (4%; 95% CI, 1-11%) (p > 0.05). Conclusion: Postoperative MMF was performed in more than half of the patients despite adequate fracture osteosynthesis, with wide variability amongst centers. No evidence of a reduction in the incidence of postoperative malocclusion in patients treated with postoperative MMF was found. Answer: The use of arch bars or interdental wire fixation is not always necessary for successful outcomes in the open reduction and internal fixation of mandibular angle fractures. A retrospective cohort study found that all patients eventually achieved successful bony union with an acceptable occlusion regardless of whether Erich arch bars, interdental "Stout" wires, or manual reduction alone was used during the procedure. There was no significant difference in the rate of re-operation or the occurrence of major and minor complications across the three groups, suggesting that the method of intraoperative maxillomandibular fixation (MMF) did not affect patient outcomes (PUBMED:18848111).
Additionally, a prospective study comparing embrasure wiring to arch bars as adjunctive techniques of MMF in the management of mandibular fractures found that embrasure wiring required significantly less time for placement and showed a numerically lower incidence of wire-prick injury compared to arch bars, although the differences in wire-prick injury and malocclusion between the two groups did not reach statistical significance (PUBMED:29892326). Another study comparing Erich arch bars with embrasure wires for intraoperative IMF in mandibular fractures found that the embrasure wire technique was quicker and had fewer needle stick injuries to the operator and assistants. While the Erich arch bar had superior postoperative occlusion stability, the embrasure wire technique had better postoperative oral hygiene status (PUBMED:30302602). In summary, while arch bars or interdental wire fixation can be used during the open reduction and internal fixation of mandibular angle fractures, they are not strictly necessary for successful outcomes, and alternative methods such as manual reduction or embrasure wiring may be equally effective and have certain advantages (PUBMED:18848111; PUBMED:29892326; PUBMED:30302602).
Instruction: Long-term follow-up of corridor operation for lone atrial fibrillation: evidence for progression of disease? Abstracts: abstract_id: PUBMED:23174503 A flash from the past: a case on long term follow-up of a "corridor" operation. An electrophysiological study in a patient with a previous corridor operation was performed because of syncope. The atrial electrograms showed persistence of sinus rhythm in the right atrial corridor despite organized atrial fibrillation in the left atrium. This is the first reported case of long-term follow-up after a corridor operation, one of the first surgical approaches described for the treatment of atrial fibrillation and the procedure that opened the way for non-pharmacological management of this arrhythmia. abstract_id: PUBMED:8130027 Long-term results of the corridor operation for atrial fibrillation. Objective: To investigate the long-term results of the corridor operation in the treatment of symptomatic atrial fibrillation refractory to drug treatment. Background: The corridor operation is designed to isolate from the left and right atrium a conduit of atrial tissue connecting the sinus node area with the atrioventricular node region in order to preserve physiological ventricular drive. The excluded atria can fibrillate without affecting the ventricular rhythm. This surgical method offers an alternative treatment when atrial fibrillation becomes refractory to drug treatment. Patients: From 1987 to 1993, 36 patients with drug refractory symptomatic paroxysmal atrial fibrillation underwent surgery. The in-hospital rhythm was followed thereafter by continuous rhythm monitoring and epicardial electrograms. After discharge, Holter recording and stress testing were regularly carried out to evaluate sinus node function and to detect arrhythmias, whereas Doppler echocardiography was used to measure atrial contraction and size. Main Outcome Measures: Maintained absence of atrial fibrillation without drug treatment after operation; preservation of normal chronotropic response in the sinus node. Results: The corridor procedure was successful in 31 (86%) of the 36 patients. After a mean (SD) follow-up of 41 (16) months, 25 (69%) of the 36 patients were free of arrhythmias without taking drugs (mean (SE) actuarial freedom at four years 72 (9)%). Paroxysmal atrial fibrillation recurred in three patients; paroxysmal atrial flutter (two patients) and atrial tachycardia (one patient) developed in the corridor in three others. Among the 31 patients in whom the operation was successful, sinus node function at rest and during exercise remained undisturbed in 26 and 25 patients, respectively (mean (SE) actuarial freedom from sinus node dysfunction at four years 81 (7)%). Pacemakers were needed in five (16%) of the 31 patients for insufficient sinus node rhythm at rest only. Doppler echocardiography showed maintenance of right atrial contribution to right ventricular filling in 26 of the 31 patients after operation, in contrast to the left atrium, which never showed such contribution. His bundle ablation was performed and a pacemaker implanted in the five patients in whom the corridor operation was unsuccessful. Conclusion: These results substantiate the idea of this surgical procedure. Modification of the technique is, however, needed to achieve reliable isolation between the left atrium and the corridor, which would make this experimental surgery widely applicable in the treatment of drug refractory atrial fibrillation.
abstract_id: PUBMED:9300292 Long-term follow-up of corridor operation for lone atrial fibrillation: evidence for progression of disease? Introduction: Currently, surgery- and catheter-mediated ablation is applied when drug refractoriness of atrial fibrillation is evident, although little is known about the long-term incidence of new atrial arrhythmia and the preservation of sinus node function. Methods And Results: To address this issue, 30 patients with successful corridor surgery for lone paroxysmal atrial fibrillation and normal preoperative sinus node function were followed in a single outpatient department. Five years after surgery, the actuarial proportion of patients with recurrence of atrial fibrillation arising in the corridor was 8% ± 5%, with new atrial arrhythmias consisting of atrial flutter and atrial tachycardia in the corridor 27% ± 8%, and with incompetent sinus node requiring pacing therapy 13% ± 6%. Right atrial transport was preserved in 69% of the patients without recurrence of atrial fibrillation and with normal sinus node function. Stroke was documented in two patients. Conclusions: Corridor surgery for atrial fibrillation is a transient or palliative treatment instead of a definitive therapy for drug refractory atrial fibrillation. This observation strongly affects patient selection for this intervention and constitutes a word of caution for other, nonpharmacologic interventions for drug refractory atrial fibrillation. abstract_id: PUBMED:24561164 Long-term progression from paroxysmal to permanent atrial fibrillation following transcatheter ablation in a large single-center experience. Background: The natural history of atrial fibrillation (AF) is characterized by a gradual increase in duration and frequency of relapses until a definitive shift to permanent AF. Heart disease and comorbidities modulate AF progression. However, to date the influence of catheter ablation on AF evolution has rarely been investigated. Objective: The purpose of this study was to identify long-term predictors of AF progression in a large cohort of patients undergoing AF transcatheter ablation (AFTCA). Methods: A total of 889 patients (mean age 57 ± 11 years; 53.3% paroxysmal AF, 40.5% persistent AF, 6.2% long-standing AF) underwent AFTCA. All patients underwent pulmonary vein isolation, with linear lesions and complex fractionated atrial electrogram ablation reserved for patients with persistent/long-standing AF and/or confirmed structural heart disease. Results: After median follow-up of 64 months (range 41-84 months), AF progression despite AFTCA occurred in 57 cases (6.4%). However, AF progression was much more pronounced in patients with persistent (10%) or long-standing persistent AF (14.6%) than in those with paroxysmal AF (2.7%, P < .001). Furthermore, AF progression was more frequently reported in patients who presented with underlying comorbidities/cardiomyopathies (9.1%) than in those who presented with lone AF (2.9%, P < .001). At multivariate analysis, comorbidities/cardiomyopathies and baseline persistent/long-standing AF proved to be independent predictors of progression (odds ratio 11.3, 95% confidence interval 2.6-48.0, P < .001, and odds ratio 1.6, 95% confidence interval 1.2-2.1, P < .001, respectively). Conclusion: The presence of comorbidities/cardiomyopathies and persistent/long-standing AF seem to predict AF progression in patients undergoing AFTCA. Performing AFTCA in the paroxysmal phase of the arrhythmia may reduce progression of AF to its permanent form.
abstract_id: PUBMED:17548732 Long-term progression and outcomes with aging in patients with lone atrial fibrillation: a 30-year follow-up study. Background: The long-term natural history of lone atrial fibrillation is unknown. Our objective was to determine the rate and predictors of progression from paroxysmal to permanent atrial fibrillation over 30 years and the long-term risk of heart failure, thromboembolism, and death compared with a control population. Methods And Results: A previously characterized Olmsted County, Minnesota, population with a first episode of documented atrial fibrillation between 1950 and 1980 and no concomitant heart disease or hypertension was followed up long term. Of this unique cohort, 76 patients with paroxysmal (n=34), persistent (n=37), or permanent (n=5) lone atrial fibrillation at initial diagnosis met inclusion criteria (mean age at diagnosis, 44.2 ± 11.7 years; male, 78%). Mean duration of follow-up was 25.2 ± 9.5 years. Of 71 patients with paroxysmal or persistent atrial fibrillation, 22 had progression to permanent atrial fibrillation. Overall survival of the 76 patients with lone atrial fibrillation was 92% and 68% at 15 and 30 years, respectively, similar to 86% and 57% survival for the age- and sex-matched Minnesota population. Observed survival free of heart failure was slightly worse than expected (P=0.051). Risk for stroke or transient ischemic attack was similar to the expected population risk during the initial 25 years of follow-up but increased thereafter (P=0.004), although CIs were wide. All patients who had a cerebrovascular event had developed ≥1 risk factor for thromboembolism. Conclusions: Comorbidities significantly modulate progression and complications of atrial fibrillation. Age or development of hypertension increases thromboembolic risk. abstract_id: PUBMED:1554262 Surgical therapy of paroxysmal atrial fibrillation with the "corridor" operation. Patients with paroxysmal atrial fibrillation may be extremely disabled despite medical therapy. Based on recent concepts of atrial fibrillation, a surgical open heart procedure was designed to isolate a "corridor" from the right and the left atrium. The corridor consists of the sinus node area, the atrioventricular nodal junction, and the connecting right atrial mass, small enough to prevent atrial fibrillation. Between 1987 and 1990, 20 patients with severely disabling symptoms due to frequent paroxysmal atrial fibrillation underwent the corridor operation, with permanent success in 16 patients. In 8 patients, left atrium to corridor conduction reappeared shortly after the procedure. Reoperation was performed in these patients without extracorporeal circulation. The site of persistent conduction between the left atrium and the corridor could consistently be localized adjacent to the coronary sinus. Nevertheless, reoperation failed to permanently isolate the corridor in 4 patients. During a mean follow-up of 20 months, atrial fibrillation dominating the ventricles was neither observed nor inducible in the corridor in the 16 patients with a successful operation. In all cured patients, sinus node function remained undisturbed. Paroxysmal atrial flutter inside the corridor arose in 1 patient and a paroxysmal focal tachycardia in another. All 16 cured patients experienced a clear improvement in quality of life. Refinement of the surgical technique to obtain persistent isolation between the left atrium and the corridor is needed.
These results demonstrate that the concept of the corridor operation is sound and justify its use in the treatment of drug-refractory paroxysmal atrial fibrillation. abstract_id: PUBMED:1999635 Sinus node-atrioventricular node isolation: long-term results with the "corridor" operation for atrial fibrillation. The "corridor" operation is designed to restore sinus rhythm to patients with atrial fibrillation by electrically isolating the sinus node, a band of atrial tissue and the atrioventricular (AV) node from the remaining atrial tissue. Nine patients with drug-refractory atrial fibrillation underwent this operation; four patients had chronic atrial fibrillation and five had paroxysmal atrial fibrillation; the mean duration of symptoms was 12 ± 8 years. Patient ages ranged from 25 to 68 years (mean 48 ± 12). At preoperative electrophysiologic study, no patient had evidence of an accessory AV pathway or AV node reentry. Sinus node recovery time could not be determined in five patients because of recurrent atrial fibrillation during or before programmed stimulation. At operation the corridor of atrial tissue connecting the sinus and AV nodes was successfully isolated from the remaining left and right atrial tissue in all patients. One patient required early reoperation for recurrent atrial fibrillation before hospital discharge. At the predischarge electrophysiologic study, the corridor remained isolated in all patients except for one patient who had intermittent conduction between the corridor and excluded right atrium. One patient had nonsustained atrial fibrillation and one had atrial tachycardia evident in the corridor. Atypical AV node reentry of uncertain significance was induced in one other patient. Over a total follow-up of 191 patient months (mean 21 ± 20), seven patients remained free of atrial fibrillation. Two patients had recurrent atrial fibrillation, which in one patient was effectively controlled by a single antiarrhythmic agent. A permanent pacemaker was implanted in four patients for sinus node dysfunction. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:34032120 Long-Term Cardiovascular Outcomes After Pregnancy in Women With Heart Disease. Background Women with heart disease are at risk for pregnancy complications, but their long-term cardiovascular outcomes after pregnancy are not known. Methods and Results We examined long-term cardiovascular outcomes after pregnancy in 1014 consecutive women with heart disease and a matched group of 2028 women without heart disease. The primary outcome was a composite of mortality, heart failure, atrial fibrillation, stroke, myocardial infarction, or arrhythmia. Secondary outcomes included cardiac procedures and new hypertension or diabetes mellitus. We compared the rates of these outcomes between women with and without heart disease and adjusted for maternal and pregnancy characteristics. We also determined if pregnancy risk prediction tools (CARPREG [Canadian Cardiac Disease in Pregnancy] and World Health Organization) could stratify long-term risks. At 20-year follow-up, a primary outcome occurred in 33.1% of women with heart disease, compared with 2.1% of women without heart disease. Thirty-one percent of women with heart disease required a cardiac procedure. The primary outcome (adjusted hazard ratio, 19.6; 95% CI, 13.8-29.0; P < 0.0001) and new hypertension or diabetes mellitus (adjusted hazard ratio, 1.6; 95% CI, 1.4-2.0; P < 0.0001) were more frequent in women with heart disease compared with those without.
Pregnancy risk prediction tools further stratified the late cardiovascular risks in women with heart disease, with a primary outcome occurring in up to 54% of women in the highest pregnancy risk category. Conclusions Following pregnancy, women with heart disease are at high risk for adverse long-term cardiovascular outcomes. Current pregnancy risk prediction tools can identify women at highest risk for long-term cardiovascular events. abstract_id: PUBMED:28232263 Progression of paroxysmal to persistent atrial fibrillation: 10-year follow-up in the Canadian Registry of Atrial Fibrillation. Background: Progression from paroxysmal to persistent atrial fibrillation (AF) has important clinical implications and is relevant to the management of patients with AF. Objective: The purpose of this study was to define the long-term rate of progression from paroxysmal to persistent AF and the relevant clinical variables. Methods: The Canadian Registry of Atrial Fibrillation enrolled patients after a first electrocardiographic diagnosis of paroxysmal AF. Associations between baseline characteristics and clinical outcomes were evaluated using a multivariable Cox proportional hazard model and a competing risk model accounting for death as a competing risk, where appropriate. Results: We enrolled 755 patients (61.7% men) aged between 14 and 91 years (mean age 61.2 ± 14.2 years). The median follow-up was 6.35 years (interquartile range 2.93-10.04 years), and the rate of progression to persistent AF at 1, 5, and 10 years was 8.6%, 24.3%, and 36.3%, respectively. All-cause mortality was 30.3% at 10 years. Factors associated with AF progression were increasing age (hazard ratio [HR] 1.40; 95% confidence interval [CI] 1.23-1.60, for each 10-year increment), mitral regurgitation (HR 1.87; 95% CI 1.28-2.73), left atrial dilatation (HR 3.01; 95% CI 2.03-4.47), aortic stenosis (HR 2.40; 95% CI 1.05-5.48), and left ventricular hypertrophy (HR 1.47; 95% CI 1.04-2.08).
We documented mortality using a National Death Index and analysed the prospectively defined composite endpoint of "Serious Adverse Events" including death, heart failure hospitalisation and/or documented ventricular arrhythmia. Results: There were 83 patients included; mean age at most recent follow-up was 35 ± 5 years. Overall survival was 82% at 35 years and 22% of patients experienced a serious adverse event. Atrial and ventricular arrhythmias occurred in 45% and 7% of patients, respectively. Eighteen (22%) patients required a pacemaker and six (7%) required implantable cardiac defibrillator (ICD) implantation. Significant right ventricular dysfunction was present in 26% of patients on their most recent visit, and this, or the requirement for permanent pacing, was associated with an increased risk of serious adverse events (OR 10.22, p < 0.001, and OR 4.998, p = 0.04, respectively). Conclusions: Significant mortality and morbidity accrue by mid-adult life after an atrial procedure for TGA. Right ventricular dysfunction and permanent pacing are associated with serious adverse events. Answer: The long-term follow-up of the corridor operation for lone atrial fibrillation suggests that there may be evidence for the progression of the disease. The corridor operation, designed to isolate a conduit of atrial tissue to preserve physiological ventricular drive while allowing the excluded atria to fibrillate without affecting the ventricular rhythm, has shown mixed results over time. In one study, after a mean follow-up of 41 months, the corridor procedure was successful in 86% of patients, with 69% free of arrhythmias without taking drugs. However, there were cases of recurrence of paroxysmal atrial fibrillation, atrial flutter, and atrial tachycardia in the corridor, and some patients required pacemakers due to sinus node dysfunction (PUBMED:8130027).
Instruction: Do sexual risk behaviors differ between heterosexual youth infected with HIV perinatally versus sexually? Abstracts: abstract_id: PUBMED:23628136 Do sexual risk behaviors differ between heterosexual youth infected with HIV perinatally versus sexually? Purpose: To determine if sexually active heterosexual HIV-infected 15- to 24-year-old youth have different sexual or other risk behaviors depending on whether they were infected perinatally or heterosexually. Methods: We compared youth aged 15 to 24 years who acquired HIV perinatally or sexually and were interviewed in-person or by phone for partner services by the New York City Department of Health and Mental Hygiene. We included heterosexually active youth with at least one sexual partner of the opposite sex in the past 12 months, and excluded men who have sex with men and injection drug users. We used χ(2) tests and t tests to compare demographics, sexual risk behaviors, partner services outcomes, and viral loads. Results: Both groups reported few partners (median 1, mean 2), and only 12% of partners of perinatally infected youth were previously diagnosed (18% of sexually infected youths' partners). A minority reported always using condoms. Both groups had similar rates of sexually transmitted infections and median HIV plasma RNA (perinatally infected: 5,140 copies/mL; sexually infected: 6,835 copies/mL). Despite these similarities, among tested partners not previously HIV diagnosed, none of 17 named by perinatally infected youth was newly HIV diagnosed, whereas 21% (8/39, p = .09) of those named by sexually infected youth were newly diagnosed. Conclusions: Though perinatally infected youth did not transmit HIV infection to previously undiagnosed partners, they had similar HIV-related risk behaviors to youth infected sexually who reported on risks that led to their infection. HIV prevention among HIV-infected youth remains a critical challenge. abstract_id: PUBMED:25506754 Knowledge, attitudes, and practices regarding antiretroviral management, reproductive health, sexually transmitted infections, and sexual risk behavior among perinatally HIV-infected youth in Thailand. More than 30% of perinatally HIV-infected children in Thailand are 12 years and older. As these youth become sexually active, there is a risk that they will transmit HIV to their partners. Data on the knowledge, attitudes, and practices (KAP) of HIV-infected youth in Thailand are limited. Therefore, we assessed the KAP of perinatally HIV-infected youth and youth reporting sexual risk behaviors receiving care at two tertiary care hospitals in Bangkok, Thailand and living in an orphanage in Lopburi, Thailand. From October 2010 to July 2011, 197 HIV-infected youth completed an audio computer-assisted self-interview to assess their KAP regarding antiretroviral (ARV) management, reproductive health, sexual risk behaviors, and sexually transmitted infections (STIs). A majority of youth in this study correctly answered questions about HIV transmission and prevention and the importance of taking ARVs regularly. More than half of the youth in this study demonstrated a lack of family planning, reproductive health, and STI knowledge. Girls had more appropriate attitudes toward safe sex and risk behaviors than boys. Although only 5% of the youth reported that they had engaged in sexual intercourse, about a third reported sexual risk behaviors (e.g., having or kissing boy/girlfriend or consuming an alcoholic beverage). 
We found low rates of condom use and of other family planning practices, increasing the risk of HIV and/or STI transmission to sexual partners. Additional resources are needed to improve reproductive health knowledge and reduce risk behavior among HIV-infected youth in Thailand. abstract_id: PUBMED:29526603 Association Between Psychiatric Disorders, Substance Use, and Sexual Risk Behaviors in Perinatally HIV-Exposed Youth. As youth with perinatally acquired HIV infection age, there is a need for studies that identify predictors and correlates of sexual risk behaviors. We examined the association of psychiatric disorders and substance use disorders (SUD) with sexual risk behaviors in youth with perinatally acquired HIV infection and perinatally HIV-exposed but uninfected youth. Participants were recruited from four medical centers in New York City. The Diagnostic Interview Schedule for Children and the Adolescent Sexual Behavior Assessment were administered to assess psychiatric disorders and sexual behaviors, respectively. SUD and behavior disorders were correlated with either ever having had penetrative sex or recent condomless sex for participants with perinatally acquired HIV infection only. Results suggest that clinicians should screen and treat patients for SUD and behavioral disorders to reduce sexual risk behaviors in youth with perinatally acquired HIV infection. abstract_id: PUBMED:31248272 Sexual practices and risk behaviors of Israeli adult heterosexual men. The knowledge of sexual practices of heterosexual males (HM) in Israel is limited despite the increase in the incidence of sexually transmitted infections (STIs). This study assessed sexual practices among Israeli HM and the associations between demographic characteristics and sexual risk behaviors. This cross-sectional study included a representative sample of 913 Jewish HM aged 18-44, who completed a questionnaire covering their demographics, sexual practices, and risky sexual behavior. Of all participants, 66.8% had monogamous sex with their steady partner. These participants were more likely to be older, religious, involved in a limited repertoire of sexual practices, and less likely to engage in risky sexual behavior. Of the participants, 9.6% were in steady relationships but had concomitant casual sexual partner(s). They were more likely to be secular, to have paid for sex, and to have had more lifetime sexual partners. Of all the participants, 23.6% were single and had sex with casual partner(s). They were more likely to be younger and to engage in risky sexual behavior. Of all participants, 10.3% were involved in risky sexual behavior. These men were more likely to be single, to pay for sex, and to have more lifetime sexual partners. We conclude that preventive interventions aimed at preventing STIs should target HM who are single and those in steady relationships who have concomitant sex partners.
Study Design: We conducted a case-control study in 195 heterosexual couples without HIV infection who attended 2 sexually transmitted disease clinics in Lima, Peru. A case was defined as an individual with one or more newly diagnosed STIs such as gonorrhea, chlamydia, trichomoniasis, herpes simplex, syphilis, mycoplasma, or ureaplasma. Results: Thirty-three percent of individuals (41 men and 89 women) had at least one STI, and 26 couples (13%) had the same STI detected. Men who have sex with men (MSM) accounted for 13% of all men and had a higher rate of STIs and higher-risk behaviors than non-MSM. Ureaplasma infection was the most prevalent STI found in both men and women and was associated with oral sexual contact. In heterosexual pairs, condom use during anal sex occurred less than 10% of the time. Conclusions: Heterosexual couples attending sexually transmitted disease clinics have high-risk behaviors, and STIs are frequent. Educational efforts concerning STI prevention need to be intensified. abstract_id: PUBMED:31485155 Parent-Youth Communication to Reduce At-Risk Sexual Behavior: A Systematic Literature Review. There are nearly 110 million cases of sexually transmitted infections (STIs) in the United States. The Centers for Disease Control and Prevention estimates that annually there are more than 19.7 million new STI cases. Of those, more than half are accounted for by youth aged 15 to 24 years. Although some STIs are not considered to be life threatening, they can lead to severe health problems, risk of HIV infection, or infertility if they are not properly treated. Some research has shown that parent-youth communication can reduce youth's at-risk sexual behaviors. The following is a systematic review of the literature on parent-youth sexual communication and family-level interventions designed to reduce risky sexual behavior in youth. abstract_id: PUBMED:33571066 Disclosure of HIV status to sexual partners among perinatally HIV-infected youth in Thailand. Data regarding disclosure of HIV status to sexual partners among perinatally acquired HIV-infected (PHIV) youth are limited, particularly from Asian countries. This cross-sectional study assessed the patterns of, attitudes about, and factors associated with HIV disclosure to sexual partners among PHIV youth aged 15-24 years who attended a pediatric HIV clinic in Thailand. Participants were interviewed using a semi-structured questionnaire designed to elicit demographic and sexual behavior information. Those who had sexual partners were queried about their relationship and HIV status disclosure. Reasons to disclose or not to disclose were assessed accordingly. Among the 51 (44.4%) youth who had sexual partners, 55.8% had steady partners, 45.1% did not always use condoms, and 41.2% disclosed their HIV status. Reasons to disclose included trust and concern about the risk of infection to their partners. Reasons not to disclose included fear of rejection and stigmatization, and not knowing how to disclose. Most youth reported a need for disclosure-related guidance. Knowing their sexual partner's HIV status was the only independent factor associated with disclosure (aOR 14.6; 95% CI 2.8-75.4). This study demonstrates a high rate of unsafe sex and a low rate of partner disclosure in PHIV youth, and highlights the need for a comprehensive guideline for HIV disclosure to sexual partners.
abstract_id: PUBMED:28070411 An intersectional approach for understanding the vulnerabilities of English-speaking heterosexual Caribbean youth to HIV/AIDS and sexually transmitted infections: Prevention and intervention strategies. Caribbean youth comprise about 30 percent of the English-speaking Caribbean population, and about 81,000 Caribbean and Latin American youth are HIV infected. AIDS is the leading cause of death for 15- to 24-year-old English-speaking Caribbean youth. This article relies on intersectionality theory in the assessment of the macro-level (structural) and micro-level (individual) variables that influence the risk-taking sexual behaviors of heterosexual English-speaking Caribbean youth and increase their vulnerability to HIV/sexually transmitted infections. This article offers macro- and micro-level prevention/intervention strategies for reducing the prevalence of sexually transmitted infections in English-speaking Caribbean youth, including the promotion of condom use, voluntary male circumcision, and HIV testing and counseling. Suggestions are offered for future research investigations to explore the contributing factors to youth's vulnerability to sexually transmitted infections and to empirically verify the relationship between and among variables that account for desired outcomes, including decreases in risky sexual behaviors. abstract_id: PUBMED:20802343 Sexual transmission risk behavior of adolescents with HIV acquired perinatally or through risky behaviors. Objective: To describe the prevalence and predictors of the transmission-related behaviors of adolescents with HIV acquired perinatally (perinatal) or through risky behaviors (behavioral). Methods: HIV-positive adolescents (n = 166) aged 13-21, receiving care in 3 US cities, reported sexual behaviors, drug use, and psychosocial and demographic characteristics. HIV-related data were abstracted from medical records. Results: Of 105 sexually experienced adolescents reporting risk history (42 perinatal, 63 behavioral), 49 had engaged in unprotected sex since learning their diagnosis (12 perinatal, 37 behavioral). Of sexually experienced girls, 19 had been pregnant (5 of 24 perinatal, 14 of 31 behavioral). Risk information was provided for 115 of 132 recent sex partners, 61 of whom had unprotected sex with study participants (10 with 8 perinatal participants; 51 with 33 behavioral participants). Recent unprotected sex was associated with sexual abuse during adolescence (adjusted odds ratio = 9.61, 95% CI: 1.07 to 86.12) and greater HIV knowledge (adjusted odds ratio = 1.29, 95% CI: 1.00 to 1.66) when transmission category, age, and sexual orientation were controlled for. Conclusions: To limit HIV transmission and prevent unplanned pregnancies, developmentally appropriate risk-reduction interventions, and screening and treatment referral for sexual abuse, must be integrated into the care of both perinatally and behaviorally HIV-infected adolescents. abstract_id: PUBMED:24797317 Racial discrimination and posttraumatic stress symptoms as pathways to sexual HIV risk behaviors among urban Black heterosexual men.
In light of evidence that racial discrimination and posttraumatic stress symptoms (PTSS) are neither rare nor extraordinary for many Black urban men, we examined the relationship between everyday racial discrimination and sexual HIV risk behaviors in a predominantly low-income sample of 526 urban Black heterosexually identified men, 64% of whom were unemployed and 55% of whom reported a history of incarceration. We tested the hypothesis that PTSS would mediate the relationship between everyday racial discrimination and sexual risk. Participants in the predominantly low-income urban sample ranged in age from 18 to 45 (M = 28.80, SD = 7.57). Three multiple regression models were used to test the study's mediational model. As hypothesized, PTSS mediated the relationship between everyday racial discrimination and sexual risk behaviors. Most participants (97%) reported experiences with everyday racial discrimination. Results empirically support the notion of racial discrimination-based traumatic stress as a pathway to Black heterosexual men's increased sexual risk behaviors. Results also highlighted key demographic differences, with older men reporting fewer PTSS and sexual risk behaviors compared with younger men. Incarceration was related to both PTSS and sexual risk, underscoring the role that incarceration may play in Black heterosexual men's adverse health outcomes. Our study highlights the need for more qualitative and quantitative research to understand the nature of PTSS in Black heterosexual men and mechanisms such as substance use that may link traumatic experiences and sexual risk. Future research could also assess experiences with childhood sexual abuse, violence, and incarceration to gain a more in-depth understanding of the sources of traumatic stress in Black heterosexual men's lives. We advocate for the development of community-based individual and structural-level interventions to help Black heterosexual men in urban areas develop effective strategies to cope with racial discrimination-based traumatic stress to reduce sexual HIV risk behaviors in Black communities. Answer: Sexual risk behaviors do not appear to differ significantly between heterosexual youth infected with HIV perinatally versus sexually. A study comparing sexually active heterosexual HIV-infected youth aged 15 to 24 years, who acquired HIV either perinatally or sexually, found that both groups reported a similar number of sexual partners (median 1, mean 2), and only a minority reported always using condoms. The rates of sexually transmitted infections (STIs) and median HIV plasma RNA levels were also similar between the two groups. Among tested partners without a previous HIV diagnosis, none of those named by perinatally infected youth was newly diagnosed with HIV, whereas 21% of those named by sexually infected youth were newly diagnosed. Thus, although perinatally infected youth did not transmit HIV to previously undiagnosed partners, they engaged in similar HIV-related risk behaviors to those infected sexually (PUBMED:23628136).
The study found that while a majority of youth correctly answered questions about HIV transmission and prevention, there was a lack of knowledge regarding family planning, reproductive health, and STIs. Girls had more appropriate attitudes toward safe sex and risk behaviors than boys. Low rates of condom use and of other family planning practices were reported, increasing the risk of HIV and/or STI transmission to sexual partners (PUBMED:25506754). Overall, the evidence suggests that perinatally HIV-infected youth engage in sexual risk behaviors similar to those infected through sexual contact, highlighting the need for HIV prevention and education among all HIV-infected youth to address these behaviors.
Instruction: Sample size calculations in surgery: are they done correctly? Abstracts: abstract_id: PUBMED:12947329 Sample size calculations in surgery: are they done correctly? Background: Randomized controlled trials (RCTs) are considered the gold standard for evidence-based clinical research, but prior work has suggested that there may be poor reporting of sample sizes in the surgical literature. Sample size calculations are essential for planning a study to minimize both type I and type II errors. We hypothesized that sample size calculations may not be performed consistently in surgery studies and, therefore, many studies may be "underpowered." To address this issue, we reviewed RCTs published in the surgical literature to determine how often sample size calculations were reported and to analyze each study's ability to detect varying degrees of differences in outcomes. Methods: A comprehensive MEDLINE search identified RCTs published in Annals of Surgery, Archives of Surgery, and Surgery between 1999 and 2002. Each study was evaluated by two independent reviewers. Sample size calculations were performed to determine whether they had 80% power to detect differences between treatment groups of 50% (large) and 20% (small), with a one-sided test, alpha = 0.05. For the underpowered studies, the degree to which sample size would need to be increased was determined. Results: One hundred twenty-seven RCT articles were identified; of these, 48 (38%) reported sample size calculations. Eighty-six (68%) studies reported a positive treatment effect, whereas 41 (32%) found negative results. Sixty-three (50%) of the studies were appropriately powered to detect a 50% effect change, whereas 24 (19%) had the power to detect a 20% difference. Of the studies that were underpowered, more than half needed to increase sample size by more than 10-fold. Conclusions: Sample size calculations were not reported in more than 60% of recently published surgical RCTs. Moreover, only half of the studies had sample sizes appropriate to detect large differences between treatment groups. abstract_id: PUBMED:31155391 Statistical power and sample size calculations: A primer for pediatric surgeons. Background/purpose: Determining the appropriate sample size is an integral component of any well-designed research study, grant application, or scientific manuscript. Surgeons intuitively understand the concept of statistical power, but have limited knowledge of how to perform the calculations correctly. Our goal is to provide a strategy for pediatric surgeons to use when planning a study to determine the sample sizes required for detecting a clinically meaningful effect, which is important for interpreting and validating their results. Methods: We present a general 5-step approach for performing a sample size justification and statistical power analysis, and illustrate this approach using several surgical research examples. The 5 steps are: 1) Define the primary outcome of interest, 2) Define the magnitude of the effect or effect size and power desired, 3) Determine the appropriate statistics and statistical test that will be considered, 4) Perform the calculations to estimate the required sample size using software or a reference table, 5) Write the formal power and sample size statement for the manuscript, grant application, or project proposal. Conclusions: Understanding sample size considerations and statistical power in the surgical research community will improve the quality of published articles.
This primer can be used by pediatric surgeons in the process of determining the appropriate sample sizes for detecting a clinically meaningful effect with sufficient statistical power. Virtually all research studies in pediatric surgery should include a justification of sample size based on a power calculation, as this leads to more meaningful inferences from the data and analysis. Type Of Study: Review article. Level Of Evidence: N/A. abstract_id: PUBMED:36266091 Statistical power and sample size calculations for time-to-event analysis. Objective: To provide thoracic and cardiovascular surgeons with the necessary tools for performing sample size and power calculations for studies with time-to-event outcomes. Methods: Power and sample size calculations enhance the overall quality of research studies by providing readers with assurance and insight into the number of patients included in the study. A 5-step approach is presented for performing sample size calculations when comparing groups on time-to-event endpoints. The steps are as follows: (1) identify the primary outcome of interest, (2) define the size of the effect and the desired power, (3) determine the appropriate statistical test, (4) perform calculations of the required sample size, and (5) write a formal power and sample size statement. This approach is demonstrated with 5 clinical examples for time-to-event studies in cardiovascular surgery, featuring Cox regression, 2-sample log-rank test, 1-sample log-rank test, and competing risks analysis. Conclusions: Statistical power is an essential element for designing studies to ensure sufficient sample sizes for detecting treatment effects or group differences in time-to-event patient outcomes. Power and sample size justification not only adds statistical rigor and credibility to research manuscripts, but also provides the reader with assurance that the findings and conclusions are valid and based on a sufficient number of patients. abstract_id: PUBMED:15834629 Sample size matters: a guide for surgeons. Considerations of sample size computations in the medical literature have gained increasing importance over the past decade and are now often mandatory for scientific grant proposals, protocols, and publications. However, many surgeons are ill-prepared to understand the parameters on which the appropriate sample size is based. The present article has several objectives: first, to review the need for sample size considerations; second, to explain the ingredients necessary for sample size computations in simple, nonmathematic language; third, to provide options for reducing the sample size if it seems impracticably large; and fourth, to help avoid some of the more common mistakes encountered when computing sample sizes. abstract_id: PUBMED:11341427 Sample size and statistical power of randomised, controlled trials in orthopaedics. We reviewed all 717 manuscripts published in the 1997 issues of the British and American volumes of the Journal of Bone and Joint Surgery and in Clinical Orthopaedics and Related Research, from which 33 randomised, controlled trials were identified. The results and sample sizes were used to calculate the statistical power of each study to distinguish small (0.2 of standard deviation), medium (0.5 of standard deviation), and large (0.8 of standard deviation) effect sizes. Of the 33 manuscripts analysed, only three studies (9%) described calculations of sample size.
To perform post-hoc power assessments and estimations of deficiencies of sample size, the standard effect sizes of Cohen (small, medium, and large) were calculated. Of the 25 studies which reported negative results, none had adequate power (beta < 0.2) to detect a small effect size, and 12 (48%) lacked the power necessary to detect a large effect size. Of the 25 studies which did not have an adequate sample size to detect small differences, the average used was only 10% of the required number. Our findings suggest that randomised, controlled trials in clinical orthopaedic research utilise sample sizes which are too small to ensure statistical significance for what may be clinically important results. abstract_id: PUBMED:32138476 Sample Size and Sampling Considerations in Published Clinical Research Articles. Aim: Appropriate calculation of sample size and choosing the correct sampling technique are of paramount importance to produce studies that are capable of drawing clinically relevant conclusions with generalizability of results. The current study was planned with an objective to determine reporting of sample size and sampling considerations in clinical research articles published in the year 2017. Methods: One high impact factor journal and one low impact factor journal belonging to the specialities of Medicine, Surgery, Obstetrics and Gynaecology, Paediatrics and Pharmacology were selected and checked for adherence to reporting of sample size and sampling considerations. Results: A total of 264 articles were examined. These consisted of 55 interventional studies and 209 observational studies. Interventional studies showed higher reporting of sample size calculation/justification for sample size selection (29.1%) compared to observational studies (14.8%). Only 33 out of 155 articles from high impact factor journals and 14 out of 109 articles from low impact factor journals mentioned sample size calculation or justified the sample size. In addition to this, merely 68 out of 209 observational studies mentioned sampling considerations such as sampling technique/participant follow up/matching details. Conclusion: The reporting of sample size and sampling considerations was found to be low in both high impact factor and low impact factor journals. Though interventional studies had better reporting compared to other study designs, the reporting was still not adequate and there is an immense scope for improvement. abstract_id: PUBMED:31787318 Adequate reporting of the sample size calculation in surgical randomized controlled trials. Background: The sample size calculation for a randomized controlled trial should assure that a statistically significant and clinically relevant difference can be detected with a high probability. To be certain that readers of a randomized controlled trial can critically judge whether the assumptions for the sample size calculation were realistic, a number of specific subitems should be reported. Methods: In this minireview, the frequency of adequate reporting of specific sub-items of the sample size was assessed from surgical randomized controlled trials that were published in the 5 leading general surgical journals in 2017. Subitems from the Consolidated Standards for Reporting Trials and from the Standard Protocol Items: Recommendations for Interventional Trials Statement were considered. Results: A total of 85 relevant randomized controlled trials were identified through PubMed.
The overall adequate reporting of the sample size according to the Consolidated Standards for Reporting Trials was 53% (45 of 85). When considering additional subitems from the Standard Protocol Items: Recommendations for Interventional Trials Statement which are not explicitly requested by the Consolidated Standards for Reporting Trials Statement, the adherence decreased to 12% (10 of 85). Conclusion: Adherence in reporting potentially important subitems of the sample size calculation from the Consolidated Standards for Reporting Trials Statement was moderate among surgical randomized controlled trials. It is crucial that surgeons who plan and conduct randomized controlled trials be aware of these subitems. abstract_id: PUBMED:10638992 Sample size calculation for clinical trials: the impact of clinician beliefs. The UK Medical Research Council (MRC) randomized trial of gastric surgery, ST01, compared conventional (D1) with radical (D2) surgery. Sample size estimation was based upon the consensus opinion of the surgical members of the design team, which suggested that a change in 5-year survival from 20% (D1) to 34% (D2) could be realistic and medically important. On the basis of these survival rates, the sample size for the trial was 400 patients. However, this trial was exceptional in that a survey of surgeons' opinions was made at the start of the trial, in 1986, and again after termination of the trial in 1994 but before the results were analysed. At the initial survey, the three surgeons from the trial steering committee and 23 other surgeons experienced in treating gastric carcinoma were given detailed questionnaires. They were asked about the expected survival rate in the D1 group, the anticipated difference in survival from D2 surgery, and what difference would be medically important and influence future treatment of patients. The consensus opinion of those surveyed was that there might be a survival improvement of 9.4%. In 1994, prior to closure of the trial, and before any survival information was disclosed, the survey was repeated with 21 of the original 26 surgeons. At this second survey, the opinion of the trial steering committee was that a 9.5% difference was more realistic. This was in accord with the opinion of the larger group, which had changed little since 1986. The baseline 5-year D1 survival was thought likely to be about 32%, which corresponded closely to the actual survival of recruited patients. Revised sample size calculations suggested that, on the basis of these more recent opinions, between 800 and 1200 patients would have been required. Both surveys assessed the level of treatment benefit that was deemed to be sufficient for causing surgeons to change their practice. This showed that the 14% difference in survival used as the study target was clinically relevant, but also indicated that many clinicians would remain unwilling to change their practice if the difference is only 9.5%. The experience of this carefully designed trial illustrates the problems of designing long-term, randomized trials. It raises interesting questions about the common practice of basing sample size estimates upon the beliefs of a trial design committee that may include a number of enthusiasts for the trial treatment. If their opinion of anticipated effect sizes drives the design of the trial, rather than the opinion of a larger community of experts that includes sceptics as well as enthusiasts, there is likely to be a serious miscalculation of sample size requirements.
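As a rough, illustrative check of the ST01 arithmetic above, the standard normal-approximation formula for comparing two proportions, n per arm = (z_(1-alpha/2) + z_(1-beta))^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2, can be applied to the quoted survival assumptions. The sketch below (Python, standard library only) is a simplified approximation, not the trial's actual survival-based method, so it reproduces the ballpark rather than the exact published figures.

import math
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients per arm to detect p1 vs p2 with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g., 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g., 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Original design belief: 5-year survival 20% (D1) vs 34% (D2).
print(n_per_arm(0.20, 0.34))   # -> 154 per arm (~308 total)
# Revised beliefs: 32% baseline survival with a 9.5% absolute improvement.
print(n_per_arm(0.32, 0.415))  # -> 401 per arm (~800 total)

Under the revised beliefs this approximation demands roughly 400 patients per arm, consistent with the "between 800 and 1200 patients" estimate quoted in the abstract, and illustrates how sharply the required sample size grows as the plausible effect shrinks.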
abstract_id: PUBMED:22743957 A systematic review of power and sample size reporting in randomized controlled trials within plastic surgery. Background: The randomized controlled trial is a reliable study design for assessing the effectiveness of a surgical intervention, provided it is adequately powered. This systematic review examines the appropriateness of reporting of power and sample size in randomized controlled trials within the plastic surgery literature. Methods: Original randomized controlled trials published from January of 1990 to December of 2010 in nine high-impact plastic surgery journals were appraised. The data extracted from each study included calculation of power and sample size, number of patients, and effect size. A Jadad score was calculated, providing a quality assessment of the randomized controlled trial. Results: Of the 736 original articles, 463 met the inclusion criteria; 88 (19.0 percent) of these 463 reported performing an a priori power analysis or sample size calculation. Of these 88 studies, 68 (77.3 percent) had an adequate sample size. In most studies, a standard of 0.05 for the type I error and 0.20 for the type II error was used. There has been some improvement in the reporting of power and sample size in the decades from 1990 to 2010. Conclusions: Nineteen percent of 463 randomized controlled trials in the plastic surgery literature reported performing an a priori power analysis or sample size calculation. The implication is that when we read the results of a published randomized controlled trial in plastic surgery, in 81 percent of cases we cannot trust the findings. Although the reporting of power and sample size has improved in the last decade, it is still inadequate. Lack of such reporting casts doubt on the validity (truthfulness) of the study's findings. Clinical Question/level Of Evidence: Therapeutic, IV. abstract_id: PUBMED:29089192 Use of surveillance data to calculate the sample size and the statistical power of randomized clinical trials testing Staphylococcus aureus vaccine efficacy in orthopedic surgery. Background: Patients undergoing primary total hip arthroplasty (THA) would be a worthy population for anti-staphylococcal vaccines. The objective is to assess sample size for significant vaccine efficacy (VE) in a randomized clinical trial (RCT). Methods: Data from a surveillance network of surgical site infections in France between 2008 and 2011 were used. The outcome was S. aureus SSI (SASSI) within 30 days after surgery. Statistical power was estimated by simulations repeated for theoretical VE ranging from 20% to 100% and for sample sizes from 250 to 8000 individuals per arm. Results: 18,688 patients undergoing THA were included; 66 (0.35%) SASSI occurred. For a 1% SASSI rate, the sample size would be at least 1316 patients per arm to detect significant VE of 80% with 80% power. Conclusion: Simulations with real-life data from surveillance of hospital-acquired infections allow estimation of power for RCTs and of the sample size needed to reach the required power.
A review of RCTs published in prominent surgical journals between 1999 and 2002 found that more than 60% of the studies did not report sample size calculations, and only half of the studies had sample sizes appropriate to detect large differences between treatment groups. Moreover, many studies that were underpowered would need to increase their sample size by more than 10-fold to be adequately powered (PUBMED:12947329). Similarly, a systematic review of power and sample size reporting in randomized controlled trials within plastic surgery found that only 19% of the trials reported performing an a priori power analysis or sample size calculation. This lack of reporting casts doubt on the validity of the study's findings (PUBMED:22743957). Another study assessing the reporting of sample size calculations in surgical RCTs published in 2017 found that the overall adequate reporting according to the Consolidated Standards for Reporting Trials was 53%, and adherence decreased significantly when considering additional subitems from the Standard Protocol Items: Recommendations for Interventional Trials Statement (PUBMED:31787318). These findings highlight a significant issue in the surgical research community, where there is a need for better understanding and reporting of sample size considerations and statistical power to improve the quality of published articles (PUBMED:31155391, PUBMED:15834629). Furthermore, the use of real-life data from surveillance of hospital-acquired infections can help estimate the power for RCTs and the sample size needed to achieve the required power (PUBMED:29089192). In conclusion, the evidence suggests that sample size calculations in surgery are not consistently done correctly, and there is a need for improvement in both the performance and reporting of these calculations to ensure that surgical RCTs can produce reliable and clinically relevant results.