input
stringlengths
6.82k
29k
Instruction: Are United States Medical Licensing Exam Step 1 and 2 scores valid measures for postgraduate medical residency selection decisions? Abstracts: abstract_id: PUBMED:21099388 Are United States Medical Licensing Exam Step 1 and 2 scores valid measures for postgraduate medical residency selection decisions? Purpose: United States Medical Licensing Examination (USMLE) scores are frequently used by residency program directors when evaluating applicants. The objectives of this report are to study the chain of reasoning and evidence that underlies the use of USMLE Step 1 and 2 scores for postgraduate medical resident selection decisions and to evaluate the validity argument about the utility of USMLE scores for this purpose. Method: This is a research synthesis using the critical review approach. The study first describes the chain of reasoning that underlies a validity argument about using test scores for a specific purpose. It continues by summarizing correlations of USMLE Step 1 and 2 scores and reliable measures of clinical skill acquisition drawn from nine studies involving 393 medical learners from 2005 to 2010. The integrity of the validity argument about using USMLE Step 1 and 2 scores for postgraduate residency selection decisions is tested. Results: The research synthesis shows that USMLE Step 1 and 2 scores are not correlated with reliable measures of medical students', residents', and fellows' clinical skill acquisition. Conclusions: The validity argument about using USMLE Step 1 and 2 scores for postgraduate residency selection decisions is neither structured, coherent, nor evidence based. The USMLE score validity argument breaks down on grounds of extrapolation and decision/interpretation because the scores are not associated with measures of clinical skill acquisition among advanced medical students, residents, and subspecialty fellows. Continued use of USMLE Step 1 and 2 scores for postgraduate medical residency selection decisions is discouraged. abstract_id: PUBMED:33878014 The United States Medical Licensing Exam Step 2 Clinical Skills Examination: Potential Alternatives During and After the COVID-19 Pandemic. We feel that the current COVID-19 crisis has created great uncertainty and anxiety among medical students. With medical school classes initially being conducted on the web and the approaching season of "the Match" (a uniform system by which residency candidates and residency programs in the United States simultaneously "match" with the aid of a computer algorithm to fill first-year and second-year postgraduate training positions accredited by the Accreditation Council for Graduate Medical Education), the situation did not seem to be improving. The National Resident Matching Program made an official announcement on May 26, 2020, that candidates would not be required to take or pass the United States Medical Licensing Examination Step 2 Clinical Skills (CS) examination to participate in the Match. On January 26, 2021, formal discontinuation of Step 2 CS was announced; for this reason, we have provided our perspective of possible alternative solutions to the Step 2 CS examination. A successful alternative model can be implemented in future residency match seasons as well. abstract_id: PUBMED:37842448 A Survey of Otolaryngology Residency Programs on Adapting to the United States Medical Licensing Examination (USMLE) Step 1 Transitioning to Pass/Fail. Objectives In February 2020, the National Board of Medical Examiners (NBME) announced that the United States Medical Licensing Examination (USMLE) Step 1 licensing examination would change from a numerical score to Pass/Fail (P/F). After implementation, many believe that USMLE-Step 2-Clinical Knowledge (CK) will become an important metric for students applying to otolaryngology (ENT). The purpose of this study is to determine factors important to resident selection after these changes. Methods A survey containing 15 questions related to resident selection practices and how changing USMLE Step 1 to P/F would impact future resident selection was designed. It was distributed to all ENT residency programs accredited by the Accreditation Council for Graduate Medical Education (ACGME). Results Forty percent of programs responded; 66% (95% confidence interval (CI): 51.1%-78.4%) felt that changing Step 1 scoring would not lead to students being more prepared for clinical rotations; 55% believe class rank will increase in significance (95% CI: 35.7%-64.3%). There was also an increase in the importance of Step 2 CK, which had a mean ranking of 10.67 prior to changes in Step 1 scoring and increased to 7.80 after P/F. Conclusions The changes in Step 1 scoring will likely lead to increasing importance of other objective measures like class rank or Step 2 CK. This may defeat the intended purpose put forth by the NBME. Therefore, further guidance on measures correlated with student performance as a resident will be integral to the selection process. abstract_id: PUBMED:33162317 Potential Impact of a Pass or Fail United States Medical Licensing Exam Step 1 Scoring System on Radiology Residency Applications. Rationale And Objectives: To assess the anticipated impact of the change in United States Medical Licensing Examination Step 1 scoring from numerical to pass or fail on the future selection of radiology residents. Materials And Methods: An anonymous electronic 14-item survey was distributed to 308 members of the Association of Program Directors in Radiology and included questions regarding the anticipated importance of various application metrics when Step 1 becomes pass or fail. Secondary analyses compared responses based on the current use of a Step 1 scoring screen. Results: Eighty eight respondents (28.6% [88/308]) completed the survey. Most (64% [56/88]) noted that the United States Medical Licensing Examination Step 2 Clinical Knowledge (CK) score will likely be one of the top three most important factors in assessing applicants, followed by class ranking or quartile (51% [45/88]) and the Medical Student Performance Evaluation/Dean's Letter (42% [37/88]). Over 90% (82/88) of respondents anticipate potentially or definitively requiring Step 2 CK scores before application review, and 50% (44/88) of respondents anticipate extending interview invites at a later date to receive Step 2 CK scores. These results did not significantly differ between programs who currently use a Step 1 scoring screen and those who do not. Conclusion: As Step 1 transitions from a numerical score to pass or fail, radiology residency program directors will likely rely on Step 2 CK scores as an objective and standardized metric to screen applicants. Further investigation is needed to identify other objective metrics to evaluate applicants before Step 1 becomes pass or fail. abstract_id: PUBMED:34076626 Assessment of Plastic Surgery Residency Applications without United States Medical Licensing Examination Step 1 Scores. Summary: The United States Medical Licensing Examination announced the changing of Step 1 score reporting from a three-digit number to pass/fail beginning on January 1, 2022. Plastic surgery residency programs have traditionally used United States Medical Licensing Examination Step 1 scores to compare plastic surgery residency applicants. Without a numerical score, the plastic surgery residency application review process will likely change. This article discusses advantages, disadvantages, and steps forward for residency programs related to the upcoming change. The authors encourage programs to continue to seek innovative methods of objectively and holistically evaluating applications. abstract_id: PUBMED:26244259 A Plea to Reassess the Role of United States Medical Licensing Examination Step 1 Scores in Residency Selection. The three-step United States Medical Licensing Examination (USMLE) was developed by the National Board of Medical Examiners and the Federation of State Medical Boards to provide medical licensing authorities a uniform evaluation system on which to base licensure. The test results appear to be a good measure of content knowledge and a reasonable predictor of performance on subsequent in-training and certification exams. Nonetheless, it is disconcerting that the test preoccupies so much of students' attention with attendant substantial costs (in time and money) and mental and emotional anguish. There is an increasingly pervasive practice of using the USMLE score, especially the Step 1 component, to screen applicants for residency. This is despite the fact that the test was not designed to be a primary determinant of the likelihood of success in residency. Further, relying on Step 1 scores to filter large numbers of applications has unintended consequences for students and undergraduate medical education curricula. There are many other factors likely to be equally or more predictable of performance during residency. The authors strongly recommend a move away from using test scores alone in the applicant screening process and toward a more holistic evaluation of the skills, attributes, and behaviors sought in future health care providers. They urge more rigorous study of the characteristics of students that predict success in residency, better assessment tools for competencies beyond those assessed by Step 1 that are relevant to success, and nationally comparable measures from those assessments that are easy to interpret and apply. abstract_id: PUBMED:31114422 The association between United States Medical Licensing Examination scores and clinical performance in medical students. Purpose: United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores are frequently used to evaluate applicants to residency programs. Recent literature questions the value of USMLE scores for evaluation of residency applicants, in part due to a lack of evidence supporting a relationship with clinical performance. This study explored the relationship between USMLE scores and medical students' clinical performance, as measured by the count of honors grades received in core clinical clerkships. Methods: USMLE Step 1 and Step 2 CK scores and number of honors grades per student in seven core clinical clerkships were obtained from 1,511 medical students who graduated in 2013-2017 from two medical schools. The relationships between variables were analyzed using correlation coefficients, independent-samples t-tests, and hierarchical multiple regression. Results: Count of honors grades correlated with both Step 1 (R=0.480, P<0.001) and Step 2 CK (R=0.542, P<0.001). After correcting for gender, institution, and test-taking ability (using MCAT scores as a proxy for test-taking ability) in a hierarchical multiple regression model, Step 1 and Step 2 CK scores together explained 22.2% of the variance in count of honors grades. Conclusion: USMLE Step 1 and Step 2 CK scores moderately correlate with the number of honors grades per student in core clinical clerkships. This relationship is maintained even after correcting for gender, institution, and test-taking ability. These results indicate that USMLE scores have a positive linear association with clinical performance as a medical student. abstract_id: PUBMED:17972155 Residency applicants misinterpret their United States Medical Licensing Exam Scores. Proper interpretation of the results of the United States Medical Licensing Exam (USMLE) is important for program directors, residents, and faculty who advise applicants about applying for residency positions. We suspected that applicants often misinterpreted their performance in relationship to others who took the same examination. In 2005, 54 consecutive applicants to the University of Tennessee, College of Medicine internal medicine residency program were asked to complete a brief survey about their performance on Parts 1 and 2 of the USMLE exam. When asked what percentile their Part I score represented, slightly over one in ten were unsure or did not know, and nearly three out of five wrote in their two-digit score. Of the 15 who wrote in another number, only a third were within ten percentile points of their estimated percentile. Similar results were seen for Part II. A higher percentage of international medical graduates (IMGs) than U.S. applicants responded to the percentile question with their two-digit scores (79% vs. 50% p = .08). Applicants to residency programs frequently misinterpret their performance on the USMLE compared to their peers. Unless they are high performers, those who misinterpret their two-digit scores as percentiles markedly overestimate their performance. The sponsors of the USMLE should discontinue reporting the two-digit score and consider re-instating the reporting of percentiles. abstract_id: PUBMED:34857361 United States medical licensing examination Step 1 scores are a quantifiably better predictor of American Board of Surgery In-Training Examination success compared to Step 2 scores. Background: United States Medical Licensing Examination Step 1 (STEP 1) and Step 2 (STEP 2) scores are used in the selection of surgery residents. The American Board of Surgery In-Training Examination (ABSITE) is employed to assess resident knowledge. We sought to determine whether both STEP 1 and/or STEP 2 were predictive of ABSITE performance. Methods: We performed a 10-year retrospective analysis of all PGY levels comparing STEP 1 and 2 scores with raw ABSITE scores and percentile rank. Additionally, we analyzed STEP 1 and 2 scores for the likelihood of scoring above the 20th percentile on the ABSITE. Results: When STEP 1 and STEP 2 scores were evaluated separately, both were independent predictors of ABSITE performance. However, when STEP 1 and STEP 2 scores were evaluated together, only STEP 1 scores were predictive of ABSITE performance. Finally, only STEP 1 scores were predictive of scoring above the 20th percentile. Conclusion: STEP 1 scores are a better predictor of ABSITE performance than STEP 2. abstract_id: PUBMED:37842409 Minimum United States Medical Licensing Examination and Comprehensive Osteopathic Medical Licensing Examination Scores Often Do Not Align. Background Many residency programs do not accept the Comprehensive Osteopathic Medical Licensing Examination (COMLEX-USA) alone for osteopathic applicants. Furthermore, among those programs that do accept the COMLEX-USA, it is unknown how programs scale their minimum COMLEX-USA scores compared to their minimum United States Medical Licensing Examination (USMLE) scores. Objective Our objective was to examine the variation of relative within-program differences between minimum USMLE Step and COMLEX-USA Level scores required for consideration by United States residency programs. Methods We performed a cross-sectional analysis of the Fellowship and Residency Electronic Interactive Database Access (FREIDA) database from April 2023, including the 10 specialties with the most training spots in 2022. These specialties were internal medicine, family medicine, pediatrics, emergency medicine, psychiatry, surgery, anesthesiology, obstetrics-gynecology, orthopedic surgery, and neurology. Within-program differences were calculated by subtracting the minimum USMLE Step 1 and 2 scores from the converted minimum USMLE Step 1 and 2 scores calculated from the minimum COMLEX-USA Level 1 and 2 scores using two conversion tools. We present differences as medians with interquartile ranges (IQR). Additionally, we report the proportion of programs with greater than 10-point differences for each step (1 and 2). Results Of the 3,364 accredited programs from the examined specialties, we included 1,477 in the Step 1 analysis and 1,227 in the Step 2 analysis with complete data. The median within-program difference between the minimum Step 1 score and the predicted Step 1 score was 12.0 (IQR 2.0, 17.0) using the Barnum and colleagues' conversion tool and -1.7 (IQR -6.2, 6.3) using the Smith and colleagues' tool. The median differences for Step 2 were 2.0 (IQR -8.0, 12.0) and -6.5 (IQR -13.9, -1.5) for each tool, respectively. Using the Barnum and Smith conversion tools, 937 (63%) and 435 (29%) programs had a greater than 10-point Step 1 score difference, respectively. Similarly, for Step 2, 564 (46%) and 515 (42%) programs had a greater than 10-point difference with each conversion tool. Conclusion There is wide variation in the within-program differences between minimum USMLE and predicted minimum USMLE (from COMLEX-USA) scores. Many programs have greater than 10-point differences, which may be a source of bias in osteopathic applicant selection. Answer: The validity of using United States Medical Licensing Exam (USMLE) Step 1 and 2 scores for postgraduate medical residency selection decisions is contested. A research synthesis indicates that USMLE Step 1 and 2 scores are not correlated with reliable measures of medical students', residents', and fellows' clinical skill acquisition, challenging the validity argument for using these scores in residency selection decisions (PUBMED:21099388). This suggests that the continued use of USMLE Step 1 and 2 scores for residency selection is not evidence-based and should be discouraged. The transition of USMLE Step 1 to a Pass/Fail (P/F) system has led to speculation that USMLE Step 2 Clinical Knowledge (CK) will become a more important metric for residency applications, particularly in fields such as otolaryngology and radiology (PUBMED:37842448, PUBMED:33162317). This shift may increase the significance of other objective measures like class rank or Step 2 CK scores, potentially undermining the intended purpose of the change to reduce the overemphasis on Step 1 scores (PUBMED:37842448). In plastic surgery residency applications, the change to a pass/fail system for Step 1 scores is expected to alter the application review process, with programs encouraged to seek innovative methods of objectively and holistically evaluating applications (PUBMED:34076626). Similarly, a plea has been made to reassess the role of USMLE Step 1 scores in residency selection, advocating for a more holistic evaluation of applicants and the development of better assessment tools for competencies beyond those assessed by Step 1 (PUBMED:26244259). However, a study found that USMLE Step 1 and Step 2 CK scores moderately correlate with the number of honors grades per student in core clinical clerkships, indicating a positive linear association with clinical performance as a medical student (PUBMED:31114422). Additionally, USMLE Step 1 scores have been shown to be a better predictor of American Board of Surgery In-Training Examination success compared to Step 2 scores (PUBMED:34857361). In summary, while there is evidence suggesting that USMLE scores may have some predictive value for certain aspects of medical training, their use as a primary tool for residency selection decisions is increasingly questioned, and alternative measures are being considered to evaluate residency applicants more holistically.
Instruction: Endoscopic detection of early esophageal cancer in a high-risk population: does Lugol staining improve videoendoscopy? Abstracts: abstract_id: PUBMED:9199904 Endoscopic detection of early esophageal cancer in a high-risk population: does Lugol staining improve videoendoscopy? Background: The aim of this study was to prospectively compare the diagnostic accuracy of videoendoscopy, with and without Lugol staining, for the detection of esophageal cancer in alcoholic or smoking patients older than 40 years. Methods: Daily alcohol and tobacco consumption and overt and latent symptoms were noted. The 158 patients included were examined by videoendoscopy and with Lugol dye. Results: The mean consumption of alcohol and tobacco was 86 +/- 49 gm/day for 26 +/- 11 years, and 30 +/- 18 pack-years, respectively. Twenty-five patients had no symptoms. Before Lugol staining, 12 patients had endoscopically identified cancerous lesions. After Lugol staining, 13 patients had 17 esophageal cancers, 3 of which were high-grade dysplasia. The prevalence of esophageal cancer was 8.2%: 95% CI [4,14]. Dye-free surfaces were significantly larger than the endoscopic patterns observed before Lugol staining (11.6 +/- 9.2 cm2 vs 1.4 +/- 1.7 cm2; p < 0.02). Conclusions: In an alcoholic smoking population, the prevalence of esophageal cancer detected by endoscopy is high and not related to symptoms described by patients. Lugol staining only moderately improves the diagnostic accuracy of videoendoscopy; its main advantage is the assessment of the mucosal extension of esophageal cancer. abstract_id: PUBMED:26699332 Massive Endoscopic Screening for Esophageal and Gastric Cancers in a High-Risk Area of China. Objective: This study aims to describe the findings from a massive endoscopic screening program in a high-risk area of China and to evaluate the prognosis of patients diagnosed through endoscopic screening compared with those diagnosed at usual hospital visits because of illness. Methods: In 2006, an early detection and treatment program was initiated in Yangzhong county, China. Local residents aged 40-69 years were eligible for free endoscopic screening. Endoscopic examination was performed with Lugol's iodine staining, followed by biopsies. Patients diagnosed with esophageal or gastric cancer were referred for treatment and followed to assess their long-term survival status. Results: From 2006 through 2012, we screened 12453 participants, including 5334 (42.8%) men and 7119 (57.2%) women. The average age was 52.8 ± 8.0 years. We detected 166 patients with upper digestive tract cancers, including 106 cancers in the esophagus (detection rate: 0.85%) and 60 cancers in the stomach (detection rate: 0.48%). Of these patients, 98.11% with esophageal cancer and 100% with gastric cancer were defined as at the early stage. In the process of follow-up, 17 patients died from cancer-related causes, and the median survival time was greater than 85 months. The overall survival rates for 1, 3 and 5 years were 98.0%, 90.0% and 89.0%, respectively. A significant positive effect was observed for the long-term survival of patients diagnosed through massive endoscopic screening. Conclusions: In a high-risk population, massive endoscopic screening can identify early stage carcinoma of esophageal and gastric cancers and improve patients' prognosis through early detection and treatment. abstract_id: PUBMED:12600295 Significance of screening by iodine staining of endoscopic examination in the area of high incidence of esophageal carcinoma Background & Objective: In order to improve early detection and visualization of esophageal premalignant lesion and carcinoma, this study was designed to explore significance of the screening by iodine staining of endoscopic examination in the area of high incidence of esophageal carcinoma. Methods: Three screenings were completed through endoscopic examination for 3164 persons in high-risk population. Esophageal epithelium was stained with iodine in endoscopic examination. According to mucosal color and pathological result, esophageal carcinoma and premalignant lesion were confirmed. Results: After staining, the normal epithelium became brown and precancerous or malignant epithelium manifested yellow with sharp margin. (1) 100% early esophageal cancer was iodine staining positive, all lesions were grade I to grade II. The incidence rates of early esophageal carcinoma and advanced esophageal carcinoma were 1.60%-4.59% and 0.29%-1.09%, respectively. Early detection rate of esophageal carcinoma was all over 75%. (2) 95.6% severe dysplasia were staining positive; 91.3% lesion were grade I and grade II in staining. The incidence rate of severe dysplasia was 4.49%-7.68%. (3) 96.6% moderate dysplasia was positive in iodine staining, 73.3% lesion were grade II and grade III. (4) 92.3% mild dysplasia was positive in iodine staining; most of lesions were grade III. (5) 0.9% esophagitis and 0.4% normal esophageal mucosal were grade I and grade II in iodine staining, most of lesion were negative or grade III. Conclusion: The results demonstrated that the screening with endoscopic staining examination could improve early detection of esophageal carcinoma and precancerous lesion. Iodine staining showed highly sensitivity and specificity for identifying these precancerous and early squamous cancer lesions; it may be helpful for diagnosis of early esophageal carcinoma and precancerous lesion. abstract_id: PUBMED:37580267 Application of novel non-endoscopic device in the screening and early diagnosis of esophageal cancer Esophageal cancer (EC) is a dreadful disease with a poor prognosis and poses heavy health burden worldwide. Developing effective methods to identify high-risk individuals is urgently needed for preliminary screening before endoscopy. The novel non-endoscopic device has the potential advantages of low cost, simple operation, and minimal invasiveness. Approximately 90% of participants can swallow the device successfully with high safety profiles, and sufficient esophageal exfoliated cells can be collected for cytological examination and biomarker detection. Cytological examination based on the device combined with trefoil factor 3 (TFF3) protein or DNA methylation examinations could effectively screen Barrett's esophagus-associated dysplasia and early esophageal adenocarcinoma, but large prospective studies are needed to further validate the diagnostic value of this device to improve the quality of evidence. Although the device-based cytological examination in combination with biomarker detection holds promise in the early screening of esophageal squamous dysplasia and early esophageal squamous cell carcinoma, related research is still in its infancy, and there is still a lack of sufficient evidence for population screening in China. Active research into the application of this novel non-endoscopic device in EC screening and early diagnosis is of great significance for optimizing EC screening strategies and improving the early diagnosis of EC. abstract_id: PUBMED:28361837 Early squamous neoplasia of the esophagus: The endoscopic approach to diagnosis and management. Considerable focus has been placed on esophageal adenocarcinoma in the last 10 years because of its rising incidence in the West. However, squamous cell cancer (SCC) continues to be the most common type of esophageal cancer in the rest of the world. The detection of esophageal SCC (ESCC) in its early stages can lead to early endoscopic resection and cure. The increased incidence of ESCC in high-risk groups, such as patients with head and neck squamous cancers, highlights the need for screening programs. Lugol's iodine chromoendoscopy remains the gold standard technique in detecting early ESCC, however, safer techniques such as electronic enhancement or virtual chromoendoscopy would be ideal. In addition to early detection, these new "push-button" technological advancements can help characterize early ESCC, thereby further aiding the diagnostic accuracy and facilitating resection. Endoscopic resection (ER) of early ESCC with negligible risk of lymph node metastases has been widely accepted as an effective therapeutic strategy because it offers similar success rates when compared to esophagectomy, but carries lesser morbidity and mortality. Endoscopic submucosal dissection (ESD) is the preferred technique of ER in lesions larger than 15 mm because it provides higher rates of en bloc resections and lower local recurrence rates when compared to endoscopic mucosal resection (EMR). abstract_id: PUBMED:11154480 Endoscopic screening for early esophageal cancer by iodine staining in patients with other current or prior primary cancers. Background: Although early diagnosis of esophageal carcinoma is an important determinant of outcome, early stage carcinomas are rarely detected on routine endoscopic examination. Iodine staining may be useful in the detection of early stage esophageal carcinoma, especially in high-risk populations. In a prior study, 29.2% (68/233) patients with superficial esophageal carcinoma (SEC) undergoing resection at our hospital had other nonesophageal primary carcinomas, suggesting that patients with nonesophageal primary cancers are at increased risk for the development of esophageal cancer. The aim of the current study was to determine whether endoscopic screening with iodine staining is useful for the detection of SEC in patients with a history of non-esophageal cancers. Patients And Methods: In a prospective study, 331 patients with nonesophageal primary cancers were screened for esophageal cancer by endoscopic examination with iodine staining. All patients were men 55 years of age or older. Results: Nine patients (2.7%) with SEC were identified. Among these patients, 7 with mucosal cancer were treated by endoscopic mucosal resection, and 2 with submucosal cancer were treated by surgical resection or radiation therapy. Five patients (1.5%) were found to have cancer confined to the epithelium or the lamina propria. These lesions were difficult to detect by conventional endoscopic screening and were cured by endoscopic mucosal resection. After exclusion of 51 patients with head and neck cancer, the incidence of SEC was still 2.1% (6/280), higher than reported in prior screening studies. Conclusions: All men with a history of nonesophageal primary cancers should undergo endoscopic screening for esophageal cancer. Endoscopic iodine staining in such patients may be useful in screening for early stage esophageal carcinoma. abstract_id: PUBMED:31142308 Endoscopic features and associated histology of an basaloid squamous cell carcinoma of the esophagus at the very early stage. Background: Basaloid squamous cell carcinoma of the esophagus (BSCCE) is generally detected at advanced stage and the prognosis is poorer than advanced conventional esophageal squamous cell carcinoma. Therefore, early detection is a critical to improve patients' survival. However, only a few cases of early BSCCE have been reported and the endoscopic features of early BSCCE are not well described. We herein report the endoscopic features and associated histology of an early BSCCE limited within the mucosal lamina propria (m2). To our knowledge, this is the earliest BSCCE reported to date. Case Presentation: A 62-year-old male patient was admitted to our hospital because of epigastric pain for 3 months. White light endoscopy revealed a flat lesion with mild sloping at the periphery. The lesion was covered with leukoplakia, and normal vascular network could not be seen in the lesion. Magnifying endoscopy with narrow-band imaging showed abnormal intra-papillary capillary loop categorized as type B1 according to the classification of the Japan Esophageal Society. Iodine staining revealed a less-stained lesion. The lesion was completely resected through endoscopic submucosal dissection. Histopathologically, tumor cells, which were limited within the mucosal lamina propria, formed solid nests and lobule structures, with ribbon-like arrangement of sparse cytoplasm and round to ovoid hyperchromatic nuclei. A final diagnosis of early BSCCE was established. Conclusions: This is the earliest BSCCE reported to date. The prominent lesion with a gentle rising slope and less-staining or abnormal stain might be initial feature of early BSCCE. abstract_id: PUBMED:16273632 Value of endoscopic methylene blue and Lugol's iodine double staining and detection of GST-Pi and telomerase in the early diagnosis of esophageal carcinoma. Aim: To explore the expressions of GST-Pi and telomerase activity in esophageal carcinoma and premalignant lesions and to investigate the value of endoscopic methylene blue (MB) and Lugol's iodine double staining. Methods: Seventy-two patients with esophagopathy were sprayed endoscopically with MB and Lugol's iodine in proper order and the areas stained blue and brown, and the area between the blue and brown stains were obtained. Depending on the pattern of mucosal staining, biopsy specimen was obtained. GST-Pi and telomerase activity in specimens were examined by immunohistochemistry and PCR-based silver staining telomeric repeat amplification protocol, respectively. Results: After MB and Lugol's iodine staining, the area between both the colors was obtained in 64 of the 72 patients and the areas were stained blue and brown in all of the 72 patients. Association test of two simultaneous ordinal categorical data showed a correlation between the esophageal mucosal staining and the esophageal histology (P<0.005). The expression of GST-Pi and telomerase activity in esophageal carcinoma and premalignant lesions increased. The expression of GST-Pi and telomerase activity in dysplasia and carcinoma was significantly higher than that in normal epithelium (P<0.005). The expression in hyperplasia was slightly higher than that in normal epithelium. With the lesions progressing from low- to moderate- to high-grade dysplasia, the positive rate increased (P<0.025). Expression of GST-Pi was correlated with that of telomerase activity in dysplasia and carcinoma (phis = 0.4831, P<0.005; phis = 0.3031, P<0.025, respectively); but there was no correlation between them in normal epithelium and hyperplasia. Conclusion: The expression of GST-Pi and telomerase may be an early event in the carcinogenesis of esophagus. They may play an induced and synergistic role with each other in the carcinogenesis of esophagus. Endoscopic MB and Lugol's iodine double staining and detection of GST-Pi and telomerase activity may contribute to the early diagnosis of esophageal carcinoma. abstract_id: PUBMED:24796469 Analysis of results of endoscopic screening of esophageal, gastric cardia and gastric cancers in high risk population Objective: To summarize the results of endoscopic screening of esophageal, gastric cardiac and gastric cancers in the high-risk population, and analyze the influencing factors such as age, gender and biopsy rate on their detection and early diagnosis rates. Methods: Nine high incidence cities and counties of esophageal cancer in Henan province were included in this study. People aged 40-69 years were set to the target population. Excluding contraindications for gastroscopy, in accordance with the national technical scheme of early cancer diagnosis and treatment, gastroscopic screening and biopsy pathology for human esophageal, cardiac and gastric cancers were carried out. Results: During the 3-year period, a total of 40 156 subjects were screened. Among them, 18 459 cases of various precancerous lesions (46.0%) were detected. The cancer detection rate was 2.3% (916 cases), including 763 cases of early cancers. The diagnosis rate of early cancers was 83.3%. Precancerous lesions were detected in 9297 cases (23.2%) for esophagus and 9162 cases (22.8%) for gastric cardia as well as stomach, respectively. Conclusions: The results of this study demonstrate that endoscopic screening is feasible for early detection, diagnosis and treatment of esophageal, gastric cardia and gastric cancers among high risk population in high incidence area. Exploration analysis of relevant affecting factors may help to further improve the screening project for early diagnosis and treatment of those cancers. abstract_id: PUBMED:26031930 Endoscopic therapy in early adenocarcinomas (Barrett's cancer) of the esophagus. The incidence of early esophageal adenocarcinoma has been increasing significantly in recent decades. Prognosis depends greatly on the choice of treatment. Early cancers can be treated by endoscopic resection, whereas advanced carcinomas have to be sent for surgery. Esophageal resection is associated with high perioperative mortality (1-5%) even in specialized centers. Early diagnosis enables curative endoscopic treatment option. Patients with gastrointestinal symptoms and a familial risk for esophageal cancer should undergo upper gastrointestinal endoscopy. High-definition endoscopes have been developed with technical add-on that helps endoscopists to find fine irregularities in the esophageal mucosa, but interpreting the findings remains challenging. In this review we discussed novel and old diagnostic procedures and their values, as well as our own recommendations and those of the authors discussed for the diagnosis and treatment of early Barrett's carcinoma. Endoscopic resection is the therapy of choice in early esophageal adenocarcinoma. It is mandatory to perform a subsequent ablation of all residual Barrett's mucosa to avoid metachronous lesions. Answer: The use of Lugol staining in conjunction with videoendoscopy for the detection of early esophageal cancer in high-risk populations has been evaluated in several studies. The findings suggest that Lugol staining does indeed improve the diagnostic accuracy of videoendoscopy. In a study involving alcoholic or smoking patients older than 40 years, the prevalence of esophageal cancer detected by endoscopy was found to be high and not related to symptoms described by patients. Lugol staining only moderately improved the diagnostic accuracy of videoendoscopy, but its main advantage was the assessment of the mucosal extension of esophageal cancer (PUBMED:9199904). Another study in a high-risk area of China showed that massive endoscopic screening with Lugol's iodine staining could identify early-stage carcinoma of esophageal and gastric cancers and improve patients' prognosis through early detection and treatment (PUBMED:26699332). Further research indicated that screening with endoscopic staining examination could improve early detection of esophageal carcinoma and precancerous lesions. Iodine staining showed high sensitivity and specificity for identifying these precancerous and early squamous cancer lesions, which could be helpful for the diagnosis of early esophageal carcinoma and precancerous lesions (PUBMED:12600295). Moreover, the application of novel non-endoscopic devices for screening and early diagnosis of esophageal cancer also shows promise, although large prospective studies are needed to further validate the diagnostic value of these devices (PUBMED:37580267). In summary, Lugol staining does improve the detection of early esophageal cancer when used with videoendoscopy, particularly in high-risk populations. It enhances the visibility of mucosal abnormalities that may not be apparent with videoendoscopy alone, thus aiding in the early diagnosis and potential treatment of esophageal cancer (PUBMED:9199904, PUBMED:26699332, PUBMED:12600295).
Instruction: Differences in effectiveness of intensive programs of treatment for neurotic and personality disorders. Is it worth to monitor the effectiveness of the therapeutic team? Abstracts: abstract_id: PUBMED:24946442 Differences in effectiveness of intensive programs of treatment for neurotic and personality disorders. Is it worth to monitor the effectiveness of the therapeutic team? Objectives: To test whether three different intensive programs of treatment for neurotic and personality disorders are effective in decreasing neurotic symptoms and traits of neurotic personality and whether there are differences between them in clinical outcome. Method: The sample consisted of 105 patients (83% female, mean age 35) diagnosed with neurosis and personality disorders that were treated in three therapeutic wards under routine inpatient conditions. The therapeutic programs are designed for patients with neurotic and personality disorders. They consist of 6-12 weeks of approximately 5 hours of eclectic group treatment (group psychotherapy, psychodrama, psychoeducation etc.). Participants filled in Symptoms' Questionnaire KS-II, and Neurotic Personality Questionnaire KON-2006 at the beginning and at the end of the course of psychotherapy. Results: The treatment proved to be effective in diminishing neurotic symptoms (d Cohen = 0.56). More detailed analysis revealed that there was a significant interaction between the three analysed therapeutic wards and the effectiveness (12 = 0.09). The treatments offered in two institutions were effective (d Cohen = 0.80) while one of the programs did not lead to significant improvement of the patients. None of the therapeutic wards proved to be effective in changing the neurotic personality traits. Conclusions: There are significant differences in effectiveness of the intensive programs of treatment for neurotic and personality disorders. In the light of the literature, one can assume that the differences are more connected with the characteristics of therapeutic teams than with the methods used. The need for standard methods of effectiveness monitoring is discussed. abstract_id: PUBMED:25844408 Effectiveness of intensive group psychotherapy in treatment of neurotic and personality disorders Aim: The aim of this study was to analyze the effectiveness of intensive (150 - 210 sessions), short-term (10 - 14 weeks) group psychotherapy in the treatment of neurotic disorders, and selected personality disorders. Methods: The study involved 145 patients treated in day hospital due to neurotic disorders and selected personality disorders. The measurements were performed twice - at the beginning and at the end of hospitalization. The effectiveness of psychotherapy in the reduction of symptoms of neurotic disorders was measured with symptom checklist "0". The change in the severity of personality traits characteristic for neurotic disorders was examined using Neurotic Personality Questionnaire KON-2006. Results: A significant reduction in symptoms of neurotic disorders was found in 84% of patients (85,7% women and 80,8% men). 70,3% of the examined patients obtained in the measurement at the end of treatment results indicating the level of severity of neurotic disorders symptoms below the cutoff point separating population of healthy individuals from patients. A significant reduction in the severity of personality traits characteristically occurring in patients suffering from neurotic disorders was found in 76% of patients (74.5% women and 78.7% men). 42.8% of the examined patients obtained in the measurement at the end of the treatment the value of neurotic personality traits index (X-KON) typical for healthy population. Conclusions: Intensive, short-term group psychotherapy with elements of individual therapy is an effective treatment for neurotic disorders. The majority of treated persons obtains a significant symptomatic improvement and a reduction in the severity of neurotic personality traits. abstract_id: PUBMED:26276917 Effectiveness of therapy in terms of reduction of intensity and elimination of suicidal ideation in day hospital for the treatment of neurotic and behavioral disorders. Objectives: Analysis of changes in prevalence and intensity of suicidal ideation (SI) in patients who underwent an intensive psychotherapy conducted in integrative approach with predominance of psychodynamic approach in the Day Hospital for Neurotic and Behavioral Disorders. Methods: Symptom checklist KO"O" and Life Inventory completed by 461 women and 219 men who were treated in the day hospital due to neurotic, behavioral and personality disorders in 2005-2013. Results: Patients initially reporting SI showed greater than others global intensity of the neurotic symptoms (p < 0.001 for both sexes) and greater intensity in almost all the neurotic symptoms scales. Improvement (elimination or reduction of intensity) in terms of SI was observed in 84.3% of women and 77.5% of men. The prevalence of SI-reporting patients decreased substantially from 29.1% to 10.2% in women and from 36.5% to 13.7% in men. Conclusions: The patients reporting SI before the commencement of psychotherapy constituted a large part of the studied population - approximately 1/3. They showed greater intensity of neurotic symptoms than those who were not reporting SI. Intensive psychotherapy in the day hospital was characterized by high effectiveness in terms of reduction and elimination of SI. Prevalence of SI in both women and men until the end of therapy decreased almost three times. Cases of improvement in terms of SI were several times more frequent than cases of deterioration in terms of SI (increase in its intensity or its occurrence at the end of therapy). Etiopathogenesis of SI and its methods of treatment require further research. abstract_id: PUBMED:26688852 Effectiveness of intensive psychotherapy in a day hospital evaluated with Neurotic Personality Inventory KON-2006. Unlabelled: AIM : The aim of this study was to determine the effectiveness of intensive psychotherapy in the day hospital for neurotic and behavioral disorders as well as the assessment of the usefulness of the Neurotic Personality Inventory KON-2006 for routine evaluation of psychotherapy effectiveness. Methods: The results of the questionnaires KON-2006 completed by 690 patients (women - 69%, men - 31%, mean age 29 years, SD 8 years) were analyzed. All persons have received comprehensive, mainly psychodynamic psychotherapy (group with elements of individual therapy), in the years 2004-2009 in the Day Hospital for Neurotic and Behavioral Disorders in Krakow. Results: The vast majority of patients achieved after the end of psychotherapy beneficial changes in personality corresponding to various degrees of improvements in terms of the questionnaire KON-2006. Only a few patients deteriorated, somewhat more numerous group did not achieve significant changes or the effects are not possible for unambiguous interpretation. These results are highly correlated with those obtained in the personality questionnaire NEO-PI-R. Conclusions: The Neurotic Personality Inventory KON-2006 appears to be an adequate tool to assess the results of intensive, comprehensive psychotherapy, conducted in the day hospital for neurotic and behavioral disorders. abstract_id: PUBMED:29432511 Trait and state anxiety in patients treated with intensive short-term group psychotherapy for neurotic and personality disorders. Objectives: The study aimed to determine the severity of trait and state anxiety in patients treated for neurotic and personality disorders with intensive, short-term group psychotherapy and to analyse the change of anxiety intensity in the course of treatment. Methods: 116 patients (81 females and 35 males) participated in the study. The measurement of anxiety intensity was conducted with the State-Trait Anxiety Inventory (STAI) at the beginning and at the end of treatment. Results: The majority of patients (69% of the study group) demonstrated a high intensity of state anxiety at the beginning of the treatment with 47.4% being classified as very high. High intensity of trait anxiety was observed in 64.7% of participants (50.9% -very high). RCI (Reliable Change Index) and RCV (Reliable Change Value) indicated a significant change in state anxiety intensity in 62.1% of participants (48.3% - significant decrease, 13.8% increase). At the end of treatment more than a half of patients manifested medium and low intensity of both trait and state anxiety. Conclusions: At the beginning of psychotherapy the majority of patients demonstrated high intensity of both trait and state anxiety. In the course of therapy a considerable reduction of intensity of trait and state anxiety is observed. Significant deterioration in trait anxiety is observed in one out of every thirteen treated patients. Polish adaptation of STAI questionnaire is a useful tool for monitoring effectiveness of psychotherapy and may be successfully applied for screening and detailed diagnosis of neurotic and personality disorders. abstract_id: PUBMED:1493951 Therapeutic community treatment for personality disordered adults: changes in neurotic symptomatology on follow-up. Personality disordered patients are important as they place high continuing demands on services and are often refractory to traditional treatments. Often personality disorders may co-exist with neurotic symptomatology, worsening prognosis of the latter. This paper reports change in neurotic symptomatology following intensive, long term, therapeutic community treatment for such patients. Sixty two subjects with personality disorder were followed up for eight months after discharge (response rate 65%). Results showed a highly significant reduction in symptomatic distress as measured by the SCL-90R questionnaire. Investigation of the reliability and clinical importance of the change in individual subjects demonstrated that 55% of subjects had improved reliably, and in 32% this change was also clinically significant, whilst only 6.5% of subjects had deteriorated reliably. abstract_id: PUBMED:28866722 Changes in intensity of neurotic symptoms associated with reduction of suicidal ideation in patients who underwent psychotherapy in the day hospital for the treatment of neurotic and behavioral disorders. Objectives: Analysis of associations between changes in the intensity of neurotic symptoms and reduction of suicidal ideation (SI) or lack of SI reduction, in patients who underwent a course of intensive psychotherapy conducted in integrative approach with predominance of psychodynamic approach in a day hospital. Methods: Symptom Checklist KO"O" and Life Inventory completed by 461 women and 219 men treated in the psychotherapeutic day hospital due to neurotic, behavioral or personality disorders between 2005-2013. During the qualification for the therapy 134 women and 80 men reported SI, of whom 84.3% and 77.5% respectively improved. Results: The reduction of neurotic symptoms intensity was greater in patients of both genders who improved in terms of SI than in those who did not. This referred to global intensity of neurotic symptoms (OWK coefficient) in both genders (p<0.001), as well as to the majority of the neurotic symptoms subtypes in both genders (p<0.05): Phobic disorders, Other anxiety disorders, Obsessive-compulsive disorders, Conversions and dissociations, Autonomic disorders, Somatization disorders, Neurasthenia, Avoidance and dependence, Sexual dysfunctions, Dysthymia; and in case of women (p<0.05): Hypochondriasis, Depersonalization and derealization, Impulsiveness and histrionism and Non-organic sleep disorders. Conclusions: The results show effectiveness and comprehensiveness of intensive psychotherapy as a treatment method that leads to improvement in terms of both SI and neurotic symptoms. This suggests that the applied therapy may be effective in preventing suicidality in this group of patients. The observed associations also point in favor of hypothesis on similarities in etiopathogenesis of and partly identical personality-related factors predisposing to SI and neurotic disorders. abstract_id: PUBMED:26909404 Reduction of suicidal ideation in patients undergoing psychotherapy in the day hospital for the treatment of neurotic and behavioral disorders and their neurotic personality traits measured before the hospitalization. Aim: Analysis of associations between initial neurotic personality traits and subsequent reduction of suicidal ideation (SI) - or lack of such reduction - obtained until the end of hospitalization in patients who underwent the course of intensive psychotherapy conducted in integrative approach with predominance of psychodynamic approach. Methods: Symptom Checklist KO"O", Neurotic Personality Questionnaire KON-2006 and Life Inventory completed by 461 women and 219 men hospitalized in the day-hospital due to neurotic, behavioral and personality disorders between 2005-2013. At the stage of qualification 134 women and 80 men reported SI, of whom subsequently 84.3% and 77.5% respectively improved. Results: Women with prominent Tendency to risk-taking (p=0.002) and Impulsiveness (p=0.038) constituted subgroups with decreased chances of improvement in terms of SI, while men with prominently elevated level of Envy (p=0.041) and women who seemed to have difficulties in expressing anger adequately (p<0.05) had increased chances of SI reduction. Conclusions: Initially prominent Tendency to risk-taking and Impulsiveness may coexist with SI of increased resistance to psychotherapy. Thus, those subgroups require special attention and diligent selection of therapeutic methods. Also, it is probable that focusing therapy at the above-mentioned personality components may increase effectiveness of SI treatment. Reducing SI during psychotherapy appears to be highly effective especially in women with difficulties in expressing anger adequately and in men with prominently elevated level of Envy, which suggests adequacy of this treatment choice and of targeting those difficulties during psychotherapy. abstract_id: PUBMED:26909405 Changes in neurotic personality profile associated with reduction of suicidal ideation in patients who underwent psychotherapy in the day hospital for the treatment of neurotic and behavioral disorders. Aim: Analysis of associations between changes in neurotic personality profile and reductionof suicidal ideation (SI) - or lack of such reduction - defined as its elimination or reduction of its intensity in patients who underwent a course of intensive psychotherapy conducted in integrative approach with predominance of psychodynamic approach. Methods: Symptom Checklist KO"O", Neurotic Personality Questionnaire KON-2006 and Life Inventory completed by 461 women and 219 men treated due to neurotic, behavioral or personality disorders in a day hospital between 2005-2013. During the qualification for the therapy 134 women and 80 men reported SI, of whom 84.3% and 77.5% respectively improved. Results: Patients who improved in terms of SI obtained significantly greater reduction of global neurotic personality disintegration (neuroticism) than others (p<0.0005 in women and p=0.015 in men). Associations were found between improvement in terms of SI and greater reduction of many neurotic personality traits (p<0.05) in both genders: Negative self-esteem, Impulsiveness, Sense of alienation, Demobilization, Difficulties in emotional relations, Lack of vitality, Sense of lack of control, Sense of guilt, Difficulties in interpersonal relations, Sense of being in danger, Exaltation, Ponderings; and only in women: Feeling of being dependent on the Environment, Asthenia, Difficulties with decision making, Conviction of own resourcelessness in life, Deficit in internal locus of control and Imagination. indulging in fiction. Conclusions: The results confirm effectiveness of intensive psychotherapy as a treatment method that leads to comprehensive improvement encompassing reduction of neurotic personality disorders (neuroticism) and of majority of neurotic personality traits, as well as SI reduction. The revealed associations weigh in favor of hypothesis on neuroticism as SI predisposing factor in patients with neurotic, behavioral and personality disorders. abstract_id: PUBMED:26488359 Reduction of suicidal ideation in patients undergoing psychotherapy in the day hospital for the treatment of neurotic and behavioral disorders and neurotic symptoms reported by them before the hospitalization. Objectives: Analysis of associations between symptoms reported before the beginning of the hospitalization and reduction of suicidal ideation - or its lack - obtained until the end of the hospitalization in patients of the day hospital for the treatment of neurotic and behavioral disorders. Methods: Symptoms Checklist KO"O" and Life Inventory completed by 461 women and 219 men treated with intensive integrative psychotherapy with predominance of psychodynamic approach in the day hospital due to neurotic, behavioral and personality disorders between 2005-2013. Percentages of patients reporting SI initially and at the end of the treatment were 29.1% and 10.2% respectively in women and 36.5% and 13.7% in men. The improvement in terms of initially reported SI was obtained by 84.3% of women and 77.5% of men. Among patients, those initially reporting SI were characterized by greater intensity of neurotic symptoms (p<0.001) and greater intensity of nearly all of 14 subtypes of neurotic symptoms(p<0.05). Results: Among those reporting SI, subgroups of women with greater intensity of Obsessive-compulsive symptoms (p=0.003), Neurasthenia (p=0.005), Autonomic disorders (p=0.044) and women reporting episodes of uncontrollable hunger (p<0.01) had significantly lower chances of improvement in terms of SI than others. Conclusions: Patients initially reporting SI constituted approximately 1/3 in both genders and were characterized by greater intensity of neurotic disorders. Among those, women with particularly higher intensity of Obsessive-compulsive symptoms, Neurasthenia and Autonomic disorders and women reporting episodes of uncontrollable hunger seemed to suffer from SI that were more resistant to the psychotherapy. As such, those subgroups of women require special attention and diligent selection of the therapeutic methods. Answer: The effectiveness of intensive treatment programs for neurotic and personality disorders varies, and monitoring the effectiveness of the therapeutic team may be beneficial. A study with 105 patients treated in three therapeutic wards found that while the treatment was generally effective in diminishing neurotic symptoms (d Cohen = 0.56), there were significant differences in effectiveness between the wards. Two programs were effective (d Cohen = 0.80), but one did not lead to significant patient improvement. None of the wards effectively changed neurotic personality traits, suggesting that differences in effectiveness might be more related to the characteristics of the therapeutic teams than the methods used (PUBMED:24946442). Another study with 145 patients treated in a day hospital showed that intensive, short-term group psychotherapy with elements of individual therapy was effective for neurotic disorders, with 84% of patients experiencing a significant reduction in symptoms. Additionally, 76% of patients showed a significant reduction in the severity of neurotic personality traits (PUBMED:25844408). Furthermore, intensive psychotherapy in a day hospital was found to be highly effective in reducing and eliminating suicidal ideation (SI), with the prevalence of SI-reporting patients decreasing substantially by the end of therapy (PUBMED:26276917). The Neurotic Personality Inventory KON-2006 was also deemed an adequate tool for assessing the results of intensive psychotherapy (PUBMED:26688852). The study on trait and state anxiety in patients undergoing intensive short-term group psychotherapy found that the majority of patients demonstrated high intensity of both trait and state anxiety at the beginning of therapy, but there was a considerable reduction in anxiety intensity by the end of treatment (PUBMED:29432511). Overall, these findings suggest that intensive psychotherapy programs can be effective for treating neurotic and personality disorders, and that monitoring the effectiveness of therapeutic teams could be valuable in understanding and improving treatment outcomes.
Instruction: Shoulder pain and functional consequences: does it differ when it is at dominant side or not? Abstracts: abstract_id: PUBMED:20023354 Shoulder pain and functional consequences: does it differ when it is at dominant side or not? Purpose: The shoulder complex plays an integral role in the activities of daily living and the limitation of its function can greatly affect the patient. The aim of this study was to research the functional consequences of painful conditions in the shoulder region. Methods: Sixty five adult patients with shoulder pain for longer than one month in duration were included to the study. Dominant hands of the patients were determined. The subjects were divided into two groups: shoulder pain at the dominant upper extremity (Group I) or the nondominant one (Group II). The subjects were evaluated for shoulder pain with visual analog scale (VAS), range of motion and functional status with the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire. Results: In Group I, DASH score was significantly correlated with the VAS score measuring the pain during activity (p< 0.0001, r=0,625). In Group II, no relationship between DASH score, VAS scores and any other parameters was detected. Conclusion: The most important parameter disturbing the daily living of a patient with shoulder disease is the pain at the shoulder of dominant extremity occurring in active movement. abstract_id: PUBMED:26942858 Dominant vs. non-dominant shoulder morphology in volleyball players and associations with shoulder pain and spike speed. The aims of our study were to compare the dominant (DOM) and non-dominant (NDOM) shoulders of high-level volleyball athletes and identify possible associations of shoulder adaptations with spike speed (SS) and shoulder pathology. A total of 22 male volleyball players from two teams participating in the first division of the Cypriot championship underwent clinical shoulder tests and simple measurements around their shoulder girdle joints bilaterally. SS was measured with the use of a sports speed radar. Compared with the NDOM side, the DOM scapula was more lateralised, the DOM dorsal capsule demonstrated greater laxity, the DOM dorsal muscles stretching ability was compromised, and the DOM pectoralis muscle was more lengthened. Players with present or past DOM shoulder pain demonstrated greater laxity in their DOM dorsal capsule, tightening of their DOM inferior capsule, and lower SS compared with those without shoulder pain. Dorsal capsule measurements bilaterally were significant predictors of SS. None of the shoulder measurements was associated with team roles or infraspinatus atrophy, while scapular lateralisation was more pronounced with increasing years of experience, and scapular antetilting was greater with increasing age. Adaptations of the DOM shoulder may be linked to pathology and performance. We describe simple shoulder measurements that may have the potential to predict chronic shoulder injury and become part of injury prevention programmes. Detailed biomechanical and large prospective studies are warranted to assess the validity of our findings and reach more definitive conclusions. abstract_id: PUBMED:23282669 Adaptive changes in the dominant shoulders of female professional overhead athletes: mutual association and relation to shoulder injury. The aim of our study was to evaluate adaptive changes in the dominant shoulders of female professional overhead athletes, their mutual association, and relation between adaptive changes and shoulder injury. Thirty-six female professional volleyball and handball players were divided into two groups: 14 athletes were included in the symptomatic group (positive shoulder injury history and specific shoulder tests) and 22 athletes were included in the asymptomatic group (negative shoulder injury history and specific shoulder tests). Clinical examinations with specific shoulder tests, evaluation of rotational mobility, and symptoms of malposition and dyskinesis of the dominant scapula (SICK scapula syndrome) were performed. Glenohumeral rotators were isokinetically tested at 60 and 150°/s, with evaluation of stability ratios and rotator fatigability. On average, the participants had decreased internal rotation (P<0.001) and increased external rotation (P<0.001), lower spiking (P<0.01 at 60 and 150°/s) and conventional ratios (P≤0.01 at 60 and 150°/s), lower eccentric external rotator peak torques (eER) (P≤0.05 at 60 and 150°/s), and marginally lower eccentric internal rotator peak torques at 60°/s (P=0.061) on the dominant side compared with the nondominant side. The symptomatic group showed decreased ER (P=0.021), higher deficit of dominant eER at 60°/s (P=0.049), and higher fatigability of internal (P=0.013) and external rotators (P=0.028). The athletes with increased ER had more scapular lateralization (ρ=0.340, P=0.042), higher spiking ratios at 60°/s (ρ=0.349, P=0.037) and 150°/s (ρ=0.330, P=0.049), and lower cocking ratios at 60°/s (ρ=-0.477, P=0.003). Decreased dominant ER, higher deficit of dominant eccentric ER peak torques, and higher dominant rotator fatigability correlate with previous shoulder pain/injury. Different adaptive changes (rotational mobility, SICK scapula signs, and glenohumeral muscular imbalance) are inter-related. As a form of both prevention and rehabilitation for the athletes at risk, we recommend individually adjusted shoulder training on the basis of clinical and isokinetic testing. abstract_id: PUBMED:31850253 Comparative Analyses of the Dominant and Non-Dominant Upper Limbs during the Abduction and Adduction Motions. Background: Asymmetry in repeated motion may lead to dyskinesia through imbalance in the involved musculoskeletal structures. The dominance sides are also involved greater movement involvement over the non-dominant sides. The upper limbs with multiple joints and largest range of motion are prone for unsynchronized coordination. Natural movement analysis is required for application to everyday activities. Methods: Thirty participants were first recruited from Inha University, Incheon, Korea in 2019. Twenty subjects were assessed for comparisons of asymmetrical motion between the dominant and non-dominant arms during the abduction and adduction lateral raises after excluding ten subjects for shoulder pain and left-handedness. Results: The abduction and adduction motions of the bilateral arms were compared for the angular locations, velocity, and acceleration for every 10 degrees. The angular locations of the dominant side occurred significant earlier in the initial (10°, 20°, 30°) phase and later in the last (10°, 20°) phase of abduction and adduction in comparison to the non-dominant side (P<.05). The angular accelerations of the dominant side were also significantly greater during the initial phase (0°, 10°, 30°) and last phase (0°, 10°, 30°) (P <.05). The angular velocities were significantly greater during the later phase (40, 50, 60°) of abduction (P <.04). Conclusion: Comparative dominant side indicated more controlled movements through the range of motion with greater stability in angular acceleration and deceleration especially during the initial and last phase of abduction and adduction, respectively. Training for control of the specific angular points should be considered during abduction and adduction motions to prevent asymmetry of the bilateral arms. abstract_id: PUBMED:20131062 Muscle performance during isokinetic concentric and eccentric abduction in subjects with subacromial impingement syndrome. Peak torque (PT), total work (TW) and acceleration time (AT) were measured during isokinetic concentric and eccentric abduction of the shoulder in subjects with subacromial impingement syndrome (SIS) and healthy subjects. The SIS group consisted of 27 subjects (33.48 +/- 9.94 years) with unilateral SIS and it was divided into two groups: (1) SIS with the dominant involved side, (2) SIS with the nondominant involved side. The control group consisted of 23 healthy subjects (32.26 +/- 9.04 years). PT, TW and AT were measured bilaterally at 60 and 180 degrees /s. No significant interactions were found between group and side (P > 0.05), as well as no significant main effects of group and side (P > 0.05) for all variables during concentric abduction of the shoulder at both tested speeds. During the eccentric contractions, lower TW (P < 0.05) was demonstrated by the nondominant uninvolved side of SIS group at 60 degrees /s, and by the uninvolved sides of the SIS groups at 180 degrees /s compared to the dominant side of the controls. At 180 degrees /s, the nondominant uninvolved side of the SIS group demonstrated slower AT (P < 0.05) compared to the dominant side of the control group, as well as the nondominant involved side of the SIS group compared to the nondominant side of the controls. The results showed no alterations in the measured parameters during concentric and eccentric phases of isokinetic abduction in subjects with SIS when compared to a control group. However, alterations may be observed in the contralateral side during the eccentric phase of abduction in subjects with unilateral conditions of SIS. abstract_id: PUBMED:35845606 Abnormal Dynamics of Functional Connectivity Density Associated With Chronic Neck Pain. Background: Chronic neck pain (CNP) is highly prevalent and complicated, associated with limited movement, and accompanied by shoulder pain and other clinical manifestations such as dizziness, anxiety, and insomnia. Brain structural and functional abnormalities often occur in patients with CNP. However, knowledge of the brain's functional organization and temporal dynamics in CNP patients is limited. Dynamic functional connectivity density (dFCD) can reflect the ability of brain areas or voxels to integrate information, and could become neuroimaging markers for objectively reflecting pain to a certain extent. Therefore, this study compared the dFCD between CNP patients and healthy controls (HCs) and investigated potential associations of the abnormal density variability in dynamic functional connectivity with pain characteristics in CNP patients. Methods: Resting functional magnetic resonance imaging was performed for 89 CNP patients and 57 HCs. After preprocessing resting-state fMRI images by the Data Processing and Analysis of Brain Imaging toolbox, the sliding window method was applied to investigate dFCD changes in CNP patients and HCs using the DynamicBC toolbox. Then we quantified dFCD variability using their standard deviation. Based on the pain-associated factors collected from the case report form of CNP patients, the mean dFCD variability values of each dFCD from region of interest were extracted to calculate Pearson's correlation coefficient to study the potential correlation between dFCD abnormal variability and pain. Results: Compared with HCs, the dFCD values of the anterior cingulate cortex, occipital lobe, temporal lobe, and cerebellum were statistically different in patients with CNP. Subsequent correlation analysis showed that the variable dFCD in the related brain region was correlative with the course of the disease and clinical symptoms, such as pain and depression, in patients with CNP. Conclusion: Dynamic functional alterations were observed in the brain regions of CNP patients, and the dFCD of these brain regions could become neuroimaging markers for objectively reflecting pain to a certain extent. This suggests that chronic pain may cause changes in pain processing and emotional feedback and highlights the link between dynamic neural communication in brain regions and disease conditions, deepening our understanding of chronic pain diseases, and guiding clinical practice. abstract_id: PUBMED:30702460 Strength Reduction in Unilateral Shoulder Pain: Is the Healthy Side Really Healthy in Rotator Cuff Disease? Objective: The primary aim was to ascertain whether unilateral shoulder pain is implicated in strength reduction both on the ipsilateral and contralateral side. Secondarily, we aimed to determine whether strength was affected by sonographic tendon abnormalities. Design: A total of 122 subjects were evaluated. Sixty-six female subjects with unilateral shoulder pain in the dominant arm were recruited. Abduction strength was measured in both the dominant and nondominant arm. High-resolution ultrasonography was also conducted on both shoulders. A match-paired control group (n = 66) composed of healthy volunteers underwent the same strength and sonography tests. Subjects with any radiographic anomaly were excluded from the control group. A mixed analysis of variance was performed to test the effect of unilateral shoulder pain on abduction strength. The effect of tendinopathy on shoulder strength was investigated using a mixed 2 × 2 analysis of variance. Results: Analysis of variance showed that patients with dominant shoulder pain had lower shoulder strength (11.65 ± 4.05 kg) when compared with controls (14.37 ± 4.00 kg; F = 10.454, P = 0.002). No statistically significant effects were found when comparing subjects with and without tendinopathy among the study group. Conclusions: In patients with unilateral shoulder pain, abduction strength was found to be lower both on the ipsilateral and contralateral side. The presence of tendinopathy did not affect the reduction in strength. Future research is needed to substantiate these findings. abstract_id: PUBMED:26702179 Comorbid Functional Shoulder Pain and Zolpidem Dependence Treated with Pramipexole. Pramipexole is a dopamine agonist with higher affinity for D3 receptors. Treatment with pramipexole in clinical conditions such as restless legs syndrome, fibromyalgia, and parkinsonism has been found to significantly improve measures of pain and sleep along with the other symptoms. There is no research data available that explores the usefulness of pramipexole in somatoform/functional pain syndromes. We report a case of a 65-year-old male with bilateral functional shoulder pain associated with insomnia and zolpidem dependence effectively treated with pramipexole. abstract_id: PUBMED:18282782 Quantification of shoulder tightness and associated shoulder kinematics and functional deficits in patients with stiff shoulders. Measurement of anterior/posterior shoulder tightness, humeral external/internal rotation range of motion (ROM), scapular upward rotation/tipping ROM, and functional limitations were made in 46 patients with unilateral stiff shoulders (SSs) using a clinical measurement (shoulder tightness), a three-dimensional electromagnetic tracking device (shoulder ROM), and self-reports of function. Patients with SSs in their dominant shoulder demonstrated statistically greater posterior shoulder tightness compared to nondominant shoulder. Control dominant shoulders demonstrated decreased internal ROM as compared with control nondominant shoulders (p=0.021). In SSs, significant relationships were found between humeral internal rotation ROM and posterior shoulder tightness (R=0.49, p<0.0005), humeral external rotation ROM and anterior shoulder tightness (R=0.59, p=0.0002), scapular tipping and anterior shoulder tightness (R=0.57, p=0.004). Specifically, in patients with dominant SSs, posterior shoulder tightness and functional limitation were related (R=0.56, p=0.002). In patients with dominant involved shoulders, emphasise on posterior tightness stretch may improve functional ability directly. In addition to stretching program in patients with SSs, internal rotation ROM of control dominant shoulder is also important to consider in the rehabilitation of patients with SSs. abstract_id: PUBMED:31891713 Psychometric Properties of Performance-Based Functional Tests in Patients With Shoulder Pathologies: A Systematic Review and Meta-analysis. Objectives: To identify, critically appraise, and synthesize the reported psychometric properties of shoulder performance-based functional tests in patients with shoulder diseases. Data Sources: MEDLINE, Embase, Scopus, and Cumulative Index to Nursing and Allied Health databases from inception until March 2019 were searched. Study Selection: Randomized/prospective studies of patients with shoulder diseases that reported on the psychometric properties (reliability, validity, responsiveness) of performance-based functional tests (Standardized Index of Shoulder Function [FI2S], Functional Impairment Test-Hand and Neck/Shoulder/Arm, Closed Kinetic Chain Upper Extremity Stability Test, Timed Functional Arm and Shoulder Test, Shoulder Function Index [SFInX], and hand to neck, scapula, and opposite scapula). Data Extraction: We used the Consensus-Based Standards for the Selection of Health Measurement Instruments 2018 guideline for systematic reviews. We performed a qualitative synthesis in which the results were summarized based on reported measurement properties and study quality. Data Synthesis: Eight eligible studies were included with 28 measures (16 reliability; 10 validity; 2 responsiveness). Performance-based functional tests reliability (test-retest, intra- and interrater) measures indicated excellent reliability properties. Intraclass correlation coefficient (ICC) was ≥0.83 and standard error of the mean (SEM) ranged from 0.03-13.3 points. Validity (construct/convergent/concurrent) measures displayed correlations of -0.76 to 0.91 between performance-based functional tests and other patient-reported outcomes (Disabilities of the Arm, Shoulder, and Hand, Constant, Shoulder Pain and Disability Index). Two studies assessed the responsiveness measures performance-based functional tests. Effect sizes of 0.44 and 1.50 and minimal clinically important differences of 10.3 using the anchor-based approach were reported. Conclusions: The FI2S and the SFInX are reliable, valid, and responsive in patients with shoulder-related diseases. Answer: Shoulder pain and its functional consequences can indeed differ depending on whether it occurs on the dominant or non-dominant side. A study that included 65 adult patients with shoulder pain for longer than one month found that pain in the shoulder of the dominant extremity, particularly during active movement, was the most important factor disturbing daily living. This was evidenced by a significant correlation between the Disabilities of the Arm, Shoulder, and Hand (DASH) score and the visual analog scale (VAS) score measuring pain during activity in patients with pain in their dominant shoulder (PUBMED:20023354). In contrast, for patients with shoulder pain in the non-dominant upper extremity, no relationship was detected between the DASH score, VAS scores, and any other parameters (PUBMED:20023354). This suggests that shoulder pain in the dominant arm may have a more pronounced impact on functional status compared to pain in the non-dominant arm. Moreover, studies on volleyball players have shown that adaptations of the dominant shoulder, which is more frequently used for activities like spiking, may be linked to pathology and performance. Players with present or past dominant shoulder pain demonstrated greater laxity in their dominant dorsal capsule and lower spike speed compared to those without shoulder pain (PUBMED:26942858). Additionally, in female professional overhead athletes, adaptive changes in the dominant shoulder, such as decreased internal rotation and increased external rotation, were associated with previous shoulder pain/injury. The study recommended individually adjusted shoulder training based on clinical and isokinetic testing as a form of both prevention and rehabilitation for athletes at risk (PUBMED:23282669). In summary, shoulder pain does have different functional consequences depending on whether it affects the dominant or non-dominant side, with pain in the dominant shoulder having a more significant impact on daily activities and functional status.
Instruction: Piezoelectric bone surgery: a revolutionary technique for minimally invasive surgery in cranial base and spinal surgery? Abstracts: abstract_id: PUBMED:16369140 Anesthesia for minimally invasive cranial and spinal surgery. The field of minimally invasive neurosurgery has evolved rapidly in its indications and applications over the last few years. New, less invasive techniques with low morbidity and virtually no mortality are replacing conventional neurosurgical procedures. Providing anesthesia for these procedures differs in many ways from conventional neurosurgical operations. Anesthesiologists are faced with the perioperative requirements and risks of newly developed procedures. This review calls attention to the anesthetic issues in various minimally invasive neurosurgical procedures for cranial and spinal indications. Among the procedures specifically discussed are endoscopic third ventriculostomy, endoscopic transsphenoidal hypophysectomy, endoscopic strip craniectomy, deep brain stimulation, video-assisted thorascopic surgery, vertebroplasty and kyphoplasty, cervical discectomy and foraminectomy, and laparoscopically assisted lumbar spine surgery. abstract_id: PUBMED:38301865 Minimally invasive approach for skull base meningiomas. Introduction: Skull base meningiomas constitute a complex group of skull base tumors. The endoscopic endonasal approaches (EEA) and endoscopic Keyhole have a minimally invasive philosophy with high effectiveness, safety, and a significant decrease in postoperative morbidity in these tumors. Objective: To describe the results of the use of minimally invasive approaches to skull base meningiomas. Method: A descriptive, retrospective study was carried out in 140 patients at the "Hermanos Ameijeiras" Hospital who underwent surgery for cranial base meningiomas using minimally invasive approaches to the cranial base. The degree of tumor resection, tumor recurrence, reinterventions, and complications were evaluated. Results: ESA were used in 72.8% of the study, while endoscopic Keyholes were used in 26.4% and combined approaches in 0.7%. In relation to total tumor resection, 91.9% was generally achieved. Overall tumor recurrence and reinterventions were less than 8.5% and 4%, respectively. Complications in ESA were: cerebrospinal fluid (CSF) fistula (2.1%), diabetes insipidus, hydrocephalus, cerebral infarction, surgical site hematoma, worsening of cranial nerves I-IV (1.4%) respectively. While in the Keyhole type approaches: seizures (2.9%), hydrocephalus (1.4%), CSF fistula (1.4%), worsening of Nerves IX-XII (0.7%). Conclusions: The following study reaffirms that minimally invasive approaches for skull base meningiomas constitute advanced surgical techniques for these tumors, associated with a low rate of postoperative complications. abstract_id: PUBMED:28958927 Microscopic Minimally Invasive Keyhole Technique for Surgical Resection of Spinal Dumbbell Tumors. Background: Multiple surgical modalities have been developedl and applied for surgical excision of spinal dumbbell tumors. The microscopic minimally invasive keyhole technique has not yet been well characterized for this purpose. We report our experience with 1-stage surgical resection of both intraforaminal and extraforaminal/paraspinal components of spinal dumbbell tumors using the microsurgical keyhole technique. Methods: We retrospectively reviewed 12 cases of spinal dumbbell tumors treated via 1-stage surgical resection by the same senior spinal surgeon from January 2013 to April 2017. All patients were treated surgically using the microsurgical keyhole technique through a tubular retractor. Demographic data, perioperative variables, and associated complications were recorded. The clinical outcomes were evaluated at the last follow-up using a visual analog scale and the American Spinal Injury Association scale. Results: Mean duration of surgical resection was 184.58 minutes, and mean estimated total blood loss was 208.33 mL. The mean hospitalization time was 9.17 days. Postoperatively, 11 patients (91.67%) who originally presented with pain had a significantly lower visual analog scale score, indicating a reduction in their pain intensity. All patients demonstrated neurologic stability or improvement by 1-3 grades: 16.67% remained stable, 41.67% improved by 1 grade, 33.33% improved by 2 grades, and 8.33% improved by 3 grades. Conclusions: The microscopic minimally invasive keyhole technique can be used successfully for resection of spinal dumbbell tumors. Further comparative studies are warranted to demonstrate the benefits of the microsurgical keyhole technique compared with traditional open procedures for surgical resection of spinal dumbbell tumors. abstract_id: PUBMED:31053867 Minimally invasive decompression techniques for spinal cord stenosis Background: Lumbar spinal canal stenosis is frequently found among elderly patients and significantly limits their quality of life. Non-surgical therapy is an initial treatment option; however, it does not eliminate the underlying pathology. Surgical decompression of the spinal canal has now become the treatment of choice. Objective: Minimalization of surgical approach strategies with maintaining sufficient decompression of the spinal canal and avoiding disadvantages of macrosurgical techniques, monolateral paravertebral approach with bilateral intraspinal decompression, specific surgical techniques. Materials And Methods: Minimally invasive decompression techniques using a microscope or an endoscope are presented and different surgical strategies depending on both the extent (mono-, bi-, and multisegmental) and the location of the stenosis (intraspinal central, lateral recess, foraminal) are described. Results: Minimally invasive microscopic or endoscopic decompression procedures enable sufficient widening of the spinal canal. Disadvantages of macrosurgical procedures (e. g., postoperative instability) can be avoided. The complication spectrum overlaps partially with that of macrosurgical interventions, albeit with significantly less marked severity. Subjective patient outcome is clearly improved. Conclusions: Referring to modern minimally invasive decompression procedures, surgery of lumbar spinal canal stenosis represents a rational and logical treatment alternative, since causal treatment of the pathology is only possible with surgery. abstract_id: PUBMED:25860494 Minimally invasive surgery of the anterior skull base: transorbital approaches Minimally invasive approaches are becoming increasingly popular to access the anterior skull base. With interdisciplanary cooperation, endonasal endoscopic in particular approaches have seen an impressive expansion of indications over the past decades. The more recently described transorbital approaches represent minimally invasive alternatives with a differing spectrum of access corridors. The purpose of the present paper is to discuss transorbital approaches to the anterior skull base in the light of the current literature.The transorbital approaches allow excellent exposure of difficult to reach areas like the the anterior and posterior wall of the frontal sinus; working angles may be more favorable and the paranasal sinus system can be preserved while exposing the base of skull. Because of their minimal morbidity and the cosmetically excellent results, the transorbital approaches represent an important addition to established endonasal endoscopic and open approaches to the anterior skull base. Their execution requires an interdisciplinary team approach. abstract_id: PUBMED:33548528 Minimally Invasive Resection of Spinal Tumors with Tubular Retractor: Case Series, Surgical Technique, and Outcome. Background: Traditional laminectomy for excision of spinal tumors involves extensive dissection of the midline spinous ligaments, greater blood loss, and risk of delayed segmental instability. The minimally invasive technique of spinal tumor resection using tubular retractors can achieve safe and complete tumor resection while preserving the structural and functional integrity. The authors present their experience of minimally invasive spinal surgery for spinal tumors in this case series. Methods: The authors retrospectively reviewed 41 consecutive spinal tumor cases operated by the MISS-Key Hole technique using the tubular retractor system at Sree Utharadom Thirunal Hospital, Kerala, India between January 2015 and January 2020. Preoperative clinical findings, surgical technique, operative statistics, complications, and patient outcomes were analyzed in detail. Results: We could successfully achieve gross total resection in 39 cases (95.12%) and subtotal resection in 2 cases. There were 4 cervical, 1 craniovertebral junction, 20 thoracic, 14 lumbar, and 2 sacral lesions, of which 4 were extradural, 1 extradural foraminal, 33 intradural, and 3 dumbbell lesions. The Modified McCormick Scale at 12 weeks had improved by 1-2 scales in all but 2 patients. There was no cerebrospinal fluid leak, pseudomeningocele, or infection in our series. Conclusions: This series demonstrates the feasibility, safety and effectiveness of the keyhole approach for excision of intradural and extradural spinal tumors extending up to 2 levels. Careful case selection, good preoperative planning, meticulous microsurgical resection, and watertight dural closure are crucial for successful outcome. Early mobilization, less blood loss, and avoidance of delayed instability are the advantages of minimally invasive spinal surgery when compared with open surgery. abstract_id: PUBMED:36013590 Minimally Invasive Spinal Treatment (MIST)-A New Concept in the Treatment of Spinal Diseases: A Narrative Review. In the past two decades, minimally invasive spine surgery (MISS) techniques have been developed for spinal surgery. Historically, minimizing invasiveness in decompression surgery was initially reported as a MISS technique. In recent years, MISS techniques have also been applied for spinal stabilization techniques, which were defined as minimally invasive spine stabilization (MISt), including percutaneous pedicle screws (PPS) fixation, lateral lumbar interbody fusion, balloon kyphoplasty, percutaneous vertebroplasty, cortical bone trajectory, and cervical total disc replacement. These MISS techniques typically provide many advantages such as preservation of paraspinal musculature, less blood loss, a shorter operative time, less postoperative pain, and a lower infection rate as well as being more cost-effective compared to traditional open techniques. However, even MISS techniques are associated with several limitations including technical difficulty, training opportunities, surgical cost, equipment cost, and radiation exposure. These downsides of surgical treatments make conservative treatments more feasible option. In the future, medicine must become "minimally invasive" in the broadest sense-for all patients, conventional surgeries, medical personnel, hospital management, nursing care, and the medical economy. As a new framework for the treatment of spinal diseases, the concept of minimally invasive spinal treatment (MIST) has been proposed. abstract_id: PUBMED:24703448 Minimally invasive extracavitary transpedicular corpectomy for the management of spinal tumors. Management of spinal metastasis is a large and challenging clinical problem. For metastatic epidural spinal cord compression, a prospective, randomized, controlled trial showed the utility of circumferential surgical decompression followed by adjuvant radiotherapy. In the setting of those data, surgical techniques evolved from decompressive laminectomy only to anterior corpectomy to posterior-only transpedicular corpectomy. The transpedicular approach has recently been modernized with minimally invasive and mini-open techniques. This article presents the relevant clinical background on spinal metastasis, reviews the surgical technique for minimally invasive transpedicular corpectomy, and finally reviews relevant results in the literature. abstract_id: PUBMED:26804334 Minimally Invasive Supraorbital Key-hole Approach for the Treatment of Anterior Cranial Fossa Meningiomas. The most important target of minimally invasive surgery is to obtain the best therapeutic effect with the least iatrogenic injury. In this background, a pivotal role in contemporary neurosurgery is played by the supraorbital key-hole approach proposed by Perneczky for anterior cranial base surgery. In this article, it is presented as a possible valid alternative to the traditional craniotomies in anterior cranial fossa meningiomas removal. From January 2008 to January 2012 at our department 56 patients underwent anterior cranial base meningiomas removal. Thirty-three patients were submitted to traditional approaches while 23 to supraorbital key-hole technique. A clinical and neuroradiological pre- and postoperative evaluation were performed, with attention to eventual complications, length of surgical procedure, and hospitalization. Compared to traditional approaches the supraorbital key-hole approach was associated neither to a greater range of postoperative complications nor to a longer surgical procedure and hospitalization while permitting the same lesion control. With this technique, minimization of brain exposition and manipulation with reduction of unwanted iatrogenic injuries, neurovascular structures preservation, and a better aesthetic result are possible. The supraorbital key-hole approach according to Perneckzy could represent a valid alternative to traditional approaches in anterior cranial base meningiomas surgery. abstract_id: PUBMED:26037174 Minimally invasive approaches in metastatic spinal tumor surgery. The surgical treatment of spinal metastases is still controversial. Due to developments in diagnostic imaging there has been a great evolution in minimally invasive surgical techniques for the spinal surgery. Most of the patients with spinal metastases are debilitated and under high risk of major surgical morbidity and mortality. Less perioperative pain, less blood loss, less hospitalization time, protection of the spine biomechanics, fast recovery and less morbidity in medically debilitated patients are the advantages of minimally invasive surgical techniques. Radiotherapy, chemotherapy or combining both treatments are the standard treatment options for spinal tumors following surgery. Standard open approaches are not suitable for some patients due to limited life expectancies, high surgical complication rates and decrease in quality of life. Minimal invasive techniques represent major advance in minimizing approach related morbidity in the treatment of spinal tumors. Because of the evolution of minimally invasive surgical techniques for the spinal surgery, minimally invasive techniques are alternative treatment to standard open approaches for the treatment of metastatic spinal tumors. Due to less complication rates there has been a trend toward the minimalization of spine surgery. Answer: The provided abstracts do not specifically mention "piezoelectric bone surgery" as a technique used in minimally invasive surgery for cranial base and spinal procedures. Instead, they discuss various minimally invasive approaches and techniques that have been developed and applied in neurosurgery and spinal surgery, such as endoscopic endonasal approaches (EEA), endoscopic Keyhole, microscopic minimally invasive keyhole technique, minimally invasive decompression techniques, transorbital approaches, tubular retractor systems, and minimally invasive extracavitary transpedicular corpectomy (PUBMED:16369140, PUBMED:38301865, PUBMED:28958927, PUBMED:31053867, PUBMED:25860494, PUBMED:33548528, PUBMED:36013590, PUBMED:24703448, PUBMED:26804334, PUBMED:26037174). These techniques are described as having benefits such as lower morbidity, virtually no mortality, less blood loss, shorter operative times, less postoperative pain, and lower infection rates compared to traditional open techniques. They also aim to preserve the structural and functional integrity of the spine, reduce approach-related morbidity, and improve aesthetic outcomes (PUBMED:16369140, PUBMED:38301865, PUBMED:28958927, PUBMED:31053867, PUBMED:25860494, PUBMED:33548528, PUBMED:36013590, PUBMED:24703448, PUBMED:26804334, PUBMED:26037174). However, without specific information on piezoelectric bone surgery within these abstracts, it is not possible to comment on whether piezoelectric bone surgery is considered a revolutionary technique within the context of these minimally invasive approaches for cranial base and spinal surgery. Piezoelectric bone surgery is a technique that uses ultrasonic vibrations to cut bone tissue selectively while minimizing damage to soft tissue, which could theoretically be beneficial in delicate areas such as the cranial base and spine. Nonetheless, this would require direct evidence from studies or reviews that specifically evaluate the use and outcomes of piezoelectric bone surgery in neurosurgical and spinal applications.
Instruction: Does subclinical inflammation contribute to impairment of function of knee joints in aged individuals? Abstracts: abstract_id: PUBMED:25870315 Does subclinical inflammation contribute to impairment of function of knee joints in aged individuals? High prevalence of ultrasound inflammatory findings. Objectives: To investigate the prevalence of knee US findings of inflammation and structural damage in aged individuals (≥60 years) of a long-term population-based cohort and to correlate these findings with demographic, clinical and laboratory parameters. Methods: Cross-sectional clinical and US investigation of both knee joints during the 2010 follow-up of the prospective population-based Bruneck Study. Demographic variables, physical activity, comorbidities, medications, pain, and functional scales related to the knee joints were recorded. US-assessed parameters were synovial hypertrophy, power Doppler signal, joint effusion, cartilage abnormalities, osteophytes, enthesopathy and bursitis. Statistics included univariate and multivariate regression analysis. Results: A total of 488 subjects (mean age 72.5 years; 53.5% females, 46.5% males) were examined by clinical assessment, and 433 of these underwent US examination of both knees. Both inflammatory and structural abnormalities were found in 296 (68.8%) subjects. Inflammatory abnormalities were significantly associated with age in years, male gender, diabetes and the presence of knee joint symptoms. In the multivariate analysis, age, male gender and knee swelling emerged as independent predictors of inflammation [odds ratio (OR) (95% CI) = 1.06 (1.03, 1.09), 2.55 (1.55, 4.21) and 5.92 (1.99, 17.58), respectively]. Conclusion: The present study showed a high prevalence of US inflammatory abnormalities in the knee joints of a normal aged population. These data suggest a substantial contribution of inflammation in progressive impairment of joint function with age. abstract_id: PUBMED:31842183 PET/computed tomography shows association between subjective pain in knee joints and fluorine-18-fluorodeoxyglucose uptake. Objective: The aim of this study was to evaluate the capability of fluorine-18-fluorodeoxyglucose (F-FDG)-PET/computed tomography (CT) in screening for musculoskeletal inflammation and injury in the knee region. Methods: The study included 12 subjects (6 men and 6 women) who complained of knee pain at rest and 94 controls who did not. Maximum standardized uptake value (SUV)max was calculated for the right and left knees and compared via a 4-point numerical rating scale (NRS) for subjective knee pain. Correlations between SUV values were also evaluated. Results: SUVmax was significantly higher in knee joints with resting and/or walking pain than in pain-free knee joints. SUVmax in knee joints with walking pain were significantly correlated with NRS category (β = 0.129, P < 0.001). Conclusion: F-FDG-PET/CT may be useful in screening for musculoskeletal inflammation and injury in the knee region. As knee pain is common, especially among elderly individuals, we should consider conducting further examinations when F-FDG uptake is identified in knee joints. abstract_id: PUBMED:29286528 The role of three phase bone scintigraphy in arthroplasty of hip and knee joints of clean-up workers of Chornobyl accident. Objective: The objective of the work to determine the diagnostic role of the three phase bone scintigraphy (3 F BS) in the early prediction of paraendoprosthetic complications of the hip and knee joints of the participants in clean up workers of Chornobyl accident. Material And Methods: The 3 F BS method were investigated the radionuclide kinetic parameters of the inclusion and distribution of radiopharmaceuticals (RF) in damaged joints in the 75 male clean up workers. Results: It was established that calculating the quantitative indices of kinetics of the drug at various stages of the 3 F BS allowed to determine statistically significant differences in metabolic changes in focal lesions in articular structures. Thus, the differential diagnostic capabilities of the radionuclide method of examination of patients with arthroplasty of the hip and knee joints are elevated. Conclusions: The kinetics of osteotropic RF in the foci of fixing the damaged hip and knee joints in the infectious and inflammatory processes is characterized by the predominance of retention and specific accumulation of the drug in the early static phase and the delayed static phase of the 3 F BS compared with the centers of fixation of RF in deforming osteoarthroses, which correlates with the differences in destructive reparative processes in them. abstract_id: PUBMED:28093922 Toll-like receptors and their soluble forms differ in the knee and thumb basal osteoarthritic joints. Background and purpose - Although the pathogenesis of osteoarthritis (OA) is not well understood, chondrocyte-mediated inflammatory responses (triggered by the activation of innate immune receptors by damage-associated molecules) are thought to be involved. We examined the relationship between Toll-like receptors (TLRs) and OA in cartilage from 2 joints differing in size and mechanical loading: the first carpometacarpal (CMC-I) and the knee. Patients and methods - Samples of human cartilage obtained from OA CMC-I and knee joints were immunostained for TLRs (1-9) and analyzed using histomorphometry and principal component analysis (PCA). mRNA expression levels were analyzed with RT-PCR. Collected synovial fluid (SF) samples were screened for the presence of soluble forms of TLR2 and TLR4 by enzyme-linked immunosorbent assay (ELISA). Results - In contrast to knee OA, TLR expression in CMC-I OA did not show grade-dependent overall profile changes, but PCA revealed that TLR expression profiles clustered according to their cellular compartment organization. Protein levels of TLR4 were substantially higher in knee OA than in CMC-I OA, while the opposite was the case at the mRNA level. ELISA assays confirmed the presence of soluble forms of TLR2 and TLR4 in SF, with sTLR4 being considerably higher in CMC-I OA than in knee OA. Interpretation - We observed that TLRs are differentially expressed in OA cartilage, depending on the joint. Soluble forms of TLR2 and TLR4 were detected for the first time in SF of osteoarthritic joints, with soluble TLR4 being differentially expressed. Together, our results suggest that negative regulatory mechanisms of innate immunity may be involved in the pathomolecular mechanisms of osteoarthritis. abstract_id: PUBMED:31990581 Effectiveness of collagen supplementation on pain scores in healthy individuals with self-reported knee pain: a randomized controlled trial. The purpose of this study was to examine the effects of 12 weeks collagen peptide (CP) supplementation on knee pain and function in individuals with self-reported knee pain. Healthy physically active individuals (n = 167; aged 63 [interquartile range = 56-68] years) with self-reported knee pain received 10 g/day of CP or placebo for 12 weeks. Knee pain and function were measured with the Visual Analog Scale (VAS), the Lysholm questionnaire, and the Knee injury and Osteoarthritis Outcome Score (KOOS). Furthermore, we assessed changes in inflammatory, cartilage, and bone (bio)markers. Measurements were conducted at baseline and after 12 weeks of supplementation. Baseline VAS did not differ between CP and placebo (4.7 [2.5-6.1] vs. 4.7 [2.8-6.2], p = 0.50), whereas a similar decrease in VAS was observed after supplementation (-1.6 ± 2.4 vs. -1.9 ± 2.6, p = 0.42). The KOOS and Lysholm scores increased after supplementation in both groups (p values < 0.001), whereas the increase in the KOOS and Lysholm scores did not differ between groups (p = 0.28 and p = 0.76, respectively). Furthermore, CP did not impact inflammatory, cartilage, and bone (bio)markers (p values > 0.05). A reduced knee pain and improved knee function were observed following supplementation, but changes were similar between groups. This suggests that CP supplementation over a 12-week period does not reduce knee pain in healthy, active, middle-aged to elderly individuals. Novelty CP supplementation over a 12-week period does not reduce knee pain in healthy, active, middle-aged to elderly individuals. CP supplementation over a 12-week period does not impact on inflammatory, cartilage, and bone (bio)markers in healthy, active, middle-aged to elderly individuals. abstract_id: PUBMED:31310336 Pre-Clinical Testing of Microwave Radiometer and a Pilot Study on the Screening Inflammation of Knee Joints. This article presents the pre-clinical evaluation of our custom-built, single-band microwave radiometer centered at 1.3 GHz for deep tissue thermometry, and a pilot study on volunteers for passive detection of inflammation in knee joints. The electromagnetic (EM) compatibility of the battery-operated radiometer for clinical use was assessed as per International Special Committee on Radio Interference (CISPR) 22 standard. The ability to detect inflammation in knee joints was assessed using a substrate integrated waveguide antenna connected to the radiometer. EM compatibility tests carried out in the laboratory indicated device immunity to intentional radiated interference up to -20 dBm injected power in the global system for mobile communication frequency band, and pre-compliance to CISPR 22 standard. Radiometer temperature measurements recorded at the lateral and medial aspects of both knees of 41 volunteers indicated mean temperature greater than 33°C for the diseased sites compared with the mean temperature of 28°C measured for the healthy sites. One-way analysis of variance statistics indicated significantly (P < 0.005) higher radiometer temperature at the diseased sites unlike the healthy sites. Thus, the EM pre-compliance of the device and the potential to measure deep tissue inflammation were demonstrated. Bioelectromagnetics. 2019;40:402-411. © 2019 Bioelectromagnetics Society. abstract_id: PUBMED:12209505 Comparison of synovial tissues from the knee joints and the small joints of rheumatoid arthritis patients: Implications for pathogenesis and evaluation of treatment. Objective: Serial synovial biopsy samples are increasingly being used for the evaluation of novel therapies for rheumatoid arthritis (RA). Most studies have used tissues from knee biopsies, but technical improvements have made serial small joint arthroscopy feasible as well. Theoretically, there could be differences in the features of synovial inflammation between various joints as a result of mechanical factors, differences in innervation, and other factors. We therefore undertook this study to compare the cell infiltrate in paired synovial biopsy samples from inflamed knee joints and paired inflamed small joints of patients with RA. Methods: Nine RA patients with both an inflamed knee joint and an inflamed small joint (wrist or metacarpophalangeal joint) underwent an arthroscopic synovial biopsy of both joints on the same day. Multiple biopsy specimens were collected and stained for macrophages, T cells, plasma cells, fibroblast-like synoviocytes, and interleukin-6 (IL-6) by immunohistochemistry. Sections were evaluated by digital image analysis. Results: There were no significant differences in mean cell numbers for all markers investigated in samples from the knee joint compared with samples from the small joints. We detected statistically significant correlations for the numbers of sublining macrophages, T cells, and plasma cells, as well as for IL-6 expression, between the knee joint and the small joints. However, there was no significant correlation between different joints for the numbers of intimal macrophages or fibroblast-like synoviocytes. Conclusion: The results of this study show that the inflammation in one inflamed joint is generally representative of that in other inflamed joints. Therefore, it is possible to use serial samples from the same joint, selecting either large or small joints, for the evaluation of antirheumatic therapies. abstract_id: PUBMED:11520164 Guidance by ultrasound of intra-articular injections in the knee and hip joints. Objective: To develop and assess a stable method for ascertaining the placement of intraarticular injections for osteoarthritis (OA) in the hip and knee. Methods: Injections into the hip or knee joint with e.g. hyaluronan or cortisone were performed under the guidance of ultrasound. For this purpose an Acuson Sequoia apparatus and a 8-15 MHz transducer were used. After perforation of the capsule with a 21 G needle, 0.5-1 ml of atmospheric air and 1 ml lidocain 1% was injected with simultaneous recording of the ultrasound signals. This procedure was undertaken before the injection of the medication through the in situ needle. Results: In the hip joint the injected air could readily ascertain the placement of the injection with a sharp echoic contrast forming on the ultrasound picture respecting the joint cavity. In the knee joint the procedure gave the best results in joints which have a small amount of fluid in either the suprapatellar bursa or in a pouch regularly observed over the lateral joint margin. However, also in some so-called 'dry' knee joints the air could be traced in the bursa by ultrasound. Conclusion: By the injection of air, it is possible to test the placement of intraarticular injections in both hip and knee joints. This procedure will give a supplementary documentation of the injection as compared to a mere ultrasonographic demonstration of the position of the needle in the joint. The method is proposed as a tool for both learning purposes and quality assurance in daily therapy. abstract_id: PUBMED:33838144 Early benefits of bariatric surgery on subclinical cardiac function: Contribution of visceral fat mobilization. Aims: We explored the early effects of bariatric surgery on subclinical myocardial function in individuals with severe obesity and preserved left ventricular (LV) ejection fraction. Methods: Thirty-eight patients with severe obesity [body mass index (BMI) ≥35 kg/m2] and preserved LV ejection fraction (≥50%) who underwent bariatric surgery (biliopancreatic diversion with duodenal switch [BPD-DS]) (Surgery group), 19 patients with severe obesity managed with usual care (Medical group), and 18 age and sex-matched non-obese controls (non-obese group) were included. Left ventricular global longitudinal strain (LV GLS) was evaluated with echocardiography speckle tracking imaging. Abnormal myocardial function was defined as LV GLS <18%. Results: Age of the participants was 42 ± 11 years with a BMI of 48 ± 8 kg/m2 (mean ± standard deviation); 82% were female. The percentage of total weight loss at 6 months after bariatric surgery was 26.3 ± 5.2%. Proportions of hypertension (61 vs. 30%, P = 0.0005), dyslipidemia (42 vs. 5%, P = 0.0001) and type 2 diabetes (40 vs. 13%, P = 0.002) were reduced postoperatively. Before surgery, patients with obesity displayed abnormal subclinical myocardial function vs. non-obese controls (LV GLS, 16.3 ± 2.5 vs. 19.6 ± 1.7%, P < 0.001). Six months after bariatric surgery, the subclinical myocardial function was comparable to non-obese (LV GLS, 18.2 ± 1.9 vs. 19.6 ± 1.7%, surgery vs. non-obese, P = NS). On the contrary, half of individuals with obesity managed medically worsened their myocardial function during the follow-up (P = 0.002). Improvement in subclinical myocardial function following bariatric surgery was associated with changes in abdominal visceral fat (r = 0.43, P < 0.05) and inflammatory markers (r = 0.45, P < 0.01), whereas no significant association was found with weight loss or change in insulin sensitivity (HOMA-IR) (P > 0.05). In a multivariate model, losing visceral fat mass was independently associated with improved subclinical myocardial function. Conclusions: Bariatric surgery was associated with significant improvement in the metabolic profile and in subclinical myocardial function. Early improvement in subclinical myocardial function following bariatric surgery was related to a greater mobilization of visceral fat depot, linked to global fat dysfunction and cardiometabolic morbidity. abstract_id: PUBMED:23744481 Altered spinal microRNA-146a and the microRNA-183 cluster contribute to osteoarthritic pain in knee joints. The objective of this study was to examine whether altered expression of microRNAs in central nervous system components is pathologically linked to chronic knee joint pain in osteoarthritis. A surgical animal model for knee joint OA was generated by medial meniscus transection in rats followed by behavioral pain tests. Relationships between pathological changes in knee joint and development of chronic joint pain were examined by histology and imaging analyses. Alterations in microRNAs associated with OA-evoked pain sensation were determined in bilateral lumbar dorsal root ganglia (DRG) and the spinal dorsal horn by microRNA array followed by individual microRNA analyses. Gain- and loss-of-function studies of selected microRNAs (miR-146a and miR-183 cluster) were conducted to identify target pain mediators regulated by these selective microRNAs in glial cells. The ipsilateral hind leg displayed significantly increased hyperalgesia after 4 weeks of surgery, and sensitivity was sustained for the remainder of the 8-week experimental period (F = 341, p < 0.001). The development of OA-induced chronic pain was correlated with pathological changes in the knee joints as assessed by histological and imaging analyses. MicroRNA analyses showed that miR-146a and the miR-183 cluster were markedly reduced in the sensory neurons in DRG (L4/L5) and spinal cord from animals experiencing knee joint OA pain. The downregulation of miR-146a and/or the miR-183 cluster in the central compartments (DRG and spinal cord) are closely associated with the upregulation of inflammatory pain mediators. The corroboration between decreases in these signature microRNAs and their specific target pain mediators were further confirmed by gain- and loss-of-function analyses in glia, the major cellular component of the central nervous system (CNS). MicroRNA therapy using miR-146a and the miR-183 cluster could be powerful therapeutic intervention for OA in alleviating joint pain and concomitantly regenerating peripheral knee joint cartilage. Answer: Yes, subclinical inflammation does contribute to the impairment of function of knee joints in aged individuals. A study investigating the prevalence of knee ultrasound (US) findings in aged individuals (≥60 years) from the Bruneck Study cohort found a high prevalence of US inflammatory abnormalities in the knee joints of a normal aged population. These findings suggest a substantial contribution of inflammation in the progressive impairment of joint function with age (PUBMED:25870315). Additionally, the presence of inflammatory abnormalities was significantly associated with age, male gender, diabetes, and the presence of knee joint symptoms, with age, male gender, and knee swelling emerging as independent predictors of inflammation.
Instruction: Can Clustal-style progressive pairwise alignment of multiple sequences be used in RNA secondary structure prediction? Abstracts: abstract_id: PUBMED:17559658 Can Clustal-style progressive pairwise alignment of multiple sequences be used in RNA secondary structure prediction? Background: In ribonucleic acid (RNA) molecules whose function depends on their final, folded three-dimensional shape (such as those in ribosomes or spliceosome complexes), the secondary structure, defined by the set of internal basepair interactions, is more consistently conserved than the primary structure, defined by the sequence of nucleotides. Results: The research presented here investigates the possibility of applying a progressive, pairwise approach to the alignment of multiple RNA sequences by simultaneously predicting an energy-optimized consensus secondary structure. We take an existing algorithm for finding the secondary structure common to two RNA sequences, Dynalign, and alter it to align profiles of multiple sequences. We then explore the relative successes of different approaches to designing the tree that will guide progressive alignments of sequence profiles to create a multiple alignment and prediction of conserved structure. Conclusion: We have found that applying a progressive, pairwise approach to the alignment of multiple ribonucleic acid sequences produces highly reliable predictions of conserved basepairs, and we have shown how these predictions can be used as constraints to improve the results of a single-sequence structure prediction algorithm. However, we have also discovered that the amount of detail included in a consensus structure prediction is highly dependent on the order in which sequences are added to the alignment (the guide tree), and that if a consensus structure does not have sufficient detail, it is less likely to provide useful constraints for the single-sequence method. abstract_id: PUBMED:33289883 The Clustal Omega Multiple Alignment Package. Clustal Omega is a version, completely rewritten and revised in 2011, of the widely used Clustal series of programs for multiple sequence alignment. It can deal with very large numbers (many tens of thousands) of DNA/RNA or protein sequences due to its use of the mBed algorithm for calculating guide-trees. This algorithm allows very large alignment problems to be tackled very quickly, even on personal computers. The accuracy of the program has been considerably improved over earlier Clustal programs, through the use of the HHalign method for aligning profile hidden Markov models. The program currently is used from the command-line or can be run online. abstract_id: PUBMED:28884485 Clustal Omega for making accurate alignments of many protein sequences. Clustal Omega is a widely used package for carrying out multiple sequence alignment. Here, we describe some recent additions to the package and benchmark some alternative ways of making alignments. These benchmarks are based on protein structure comparisons or predictions and include a recently described method based on secondary structure prediction. In general, Clustal Omega is fast enough to make very large alignments and the accuracy of protein alignments is high when compared to alternative packages. The package is freely available as executables or source code from www.clustal.org or can be run on-line from a variety of sites, especially the EBI www.ebi.ac.uk. abstract_id: PUBMED:17445273 Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign. Background: Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results: The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly lesser computational resources. Conclusion: Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. abstract_id: PUBMED:24772375 R-PASS: A Fast Structure-based RNA Sequence Alignment Algorithm. We present a fast pairwise RNA sequence alignment method using structural information, named R-PASS (RNA Pairwise Alignment of Structure and Sequence), which shows good accuracy on sequences with low sequence identity and significantly faster than alternative methods. The method begins by representing RNA secondary structure as a set of structure motifs. The motifs from two RNAs are then used as input into a bipartite graph-matching algorithm, which determines the structure matches. The matches are then used as constraints in a constrained dynamic programming sequence alignment procedure. The R-PASS method has an O(nm) complexity. We compare our method with two other structure-based alignment methods, LARA and ExpaLoc, and with a sequence-based alignment method, MAFFT, across three benchmarks and obtain favorable results in accuracy and orders of magnitude faster in speed. abstract_id: PUBMED:37356900 rMSA: A Sequence Search and Alignment Algorithm to Improve RNA Structure Modeling. The multiple sequence alignment (MSA) is the entry point of many RNA structure modeling tasks, such as prediction of RNA secondary structure (rSS) and contacts. However, there are few automated programs for generating high quality MSAs of target RNA molecules. We have developed rMSA, a hierarchical pipeline for sensitive search and accurate alignment of RNA homologs for a target RNA. On a diverse set of 365 non-redundant RNA structures, rMSA significantly outperforms an existing MSA generation method (RNAcmap) by approximately 20% and 5% higher F1-scores for rSS and long-range contact prediction, respectively. rMSA is available at https://zhanggroup.org/rMSA/ and https://github.com/pylelab/rMSA. abstract_id: PUBMED:32993484 pmTM-align: scalable pairwise and multiple structure alignment with Apache Spark and OpenMP. Background: Structure comparison can provide useful information to identify functional and evolutionary relationship between proteins. With the dramatic increase of protein structure data in the Protein Data Bank, computation time quickly becomes the bottleneck for large scale structure comparisons. To more efficiently deal with informative multiple structure alignment tasks, we propose pmTM-align, a parallel protein structure alignment approach based on mTM-align/TM-align. pmTM-align contains two stages to handle pairwise structure alignments with Spark and the phylogenetic tree-based multiple structure alignment task on a single computer with OpenMP. Results: Experiments with the SABmark dataset showed that parallelization along with data structure optimization provided considerable speedup for mTM-align. The Spark-based structure alignments achieved near ideal scalability with large datasets, and the OpenMP-based construction of the phylogenetic tree accelerated the incremental alignment of multiple structures and metrics computation by a factor of about 2-5. Conclusions: pmTM-align enables scalable pairwise and multiple structure alignment computing and offers more timely responses for medium to large-sized input data than existing alignment tools such as mTM-align. abstract_id: PUBMED:32005745 RNAconTest: comparing tools for noncoding RNA multiple sequence alignment based on structural consistency. The importance of noncoding RNA sequences has become increasingly clear over the past decade. New RNA families are often detected and analyzed using comparative methods based on multiple sequence alignments. Accordingly, a number of programs have been developed for aligning and deriving secondary structures from sets of RNA sequences. Yet, the best tools for these tasks remain unclear because existing benchmarks contain too few sequences belonging to only a small number of RNA families. RNAconTest (RNA consistency test) is a new benchmarking approach relying on the observation that secondary structure is often conserved across highly divergent RNA sequences from the same family. RNAconTest scores multiple sequence alignments based on the level of consistency among known secondary structures belonging to reference sequences in their output alignment. Similarly, consensus secondary structure predictions are scored according to their agreement with one or more known structures in a family. Comparing the performance of 10 popular alignment programs using RNAconTest revealed that DAFS, DECIPHER, LocARNA, and MAFFT created the most structurally consistent alignments. The best consensus secondary structure predictions were generated by DAFS and LocARNA (via RNAalifold). Many of the methods specific to noncoding RNAs exhibited poor scalability as the number or length of input sequences increased, and several programs displayed substantial declines in score as more sequences were aligned. Overall, RNAconTest provides a means of testing and improving tools for comparative RNA analysis, as well as highlighting the best available approaches. RNAconTest is available from the DECIPHER website (http://DECIPHER.codes/Downloads.html). abstract_id: PUBMED:7984417 CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice. The sensitivity of the commonly used progressive multiple sequence alignment method has been greatly improved for the alignment of divergent protein sequences. Firstly, individual weights are assigned to each sequence in a partial alignment in order to down-weight near-duplicate sequences and up-weight the most divergent ones. Secondly, amino acid substitution matrices are varied at different alignment stages according to the divergence of the sequences to be aligned. Thirdly, residue-specific gap penalties and locally reduced gap penalties in hydrophilic regions encourage new gaps in potential loop regions rather than regular secondary structure. Fourthly, positions in early alignments where gaps have been opened receive locally reduced gap penalties to encourage the opening up of new gaps at these positions. These modifications are incorporated into a new program, CLUSTAL W which is freely available. abstract_id: PUBMED:15320732 Integrating protein secondary structure prediction and multiple sequence alignment. Modern protein secondary structure prediction methods are based on exploiting evolutionary information contained in multiple sequence alignments. Critical steps in the secondary structure prediction process are (i) the selection of a set of sequences that are homologous to a given query sequence, (ii) the choice of the multiple sequence alignment method, and (iii) the choice of the secondary structure prediction method. Because of the close relationship between these three steps and their critical influence on the prediction results, secondary structure prediction has received increased attention from the bioinformatics community over the last few years. In this treatise, we discuss recent developments in computational methods for protein secondary structure prediction and multiple sequence alignment, focus on the integration of these methods, and provide some recommendations for state-of-the-art secondary structure prediction in practice. Answer: Yes, Clustal-style progressive pairwise alignment of multiple sequences can be used in RNA secondary structure prediction. The research presented in the abstract from PUBMED:17559658 investigates the application of a progressive, pairwise approach to align multiple RNA sequences while simultaneously predicting an energy-optimized consensus secondary structure. This approach uses an existing algorithm, Dynalign, which is modified to align profiles of multiple sequences. The study found that this method produces highly reliable predictions of conserved basepairs, which can be used as constraints to improve the results of a single-sequence structure prediction algorithm. However, the study also notes that the amount of detail in a consensus structure prediction is highly dependent on the order in which sequences are added to the alignment, and if the consensus structure lacks sufficient detail, it may not provide useful constraints for the single-sequence method.
Instruction: Hexaminolevulinate-induced fluorescence versus white light during transurethral resection of noninvasive bladder tumor: does it reduce recurrences? Abstracts: abstract_id: PUBMED:22857752 Hexaminolevulinate-induced fluorescence versus white light during transurethral resection of noninvasive bladder tumor: does it reduce recurrences? Objective: To evaluate the effect of hexaminolevulinate (HAL)-induced fluorescence during resection of noninvasive bladder cancer on tumor recurrence compared with resection under white light. Methods: Between 2008 and 2010, 102 consecutive patients with suspected bladder cancer were randomized to undergo transurethral resection with either conventional white light or combination of white light and HAL-induced fluorescence. Difference in tumor recurrence rate and recurrence-free survival between the 2 groups was evaluated. Subgroup analysis on recurrence-free survival was performed for different tumor parameters. Results: Cystoscopy at 3 months revealed tumor recurrence in 6 of 45 (13.3%) patients of the white light group compared with only 1 of 41 patients of the HAL group (2.4%) (P < .001). The recurrence-free rates in white light patients at 12 and 18 months were 56.3% and 50.6%, respectively, compared with 91% and 82.5% in HAL patients (P = .0006). In subgroup analyses, recurrence-free survival was similar between the 2 groups when solitary tumors were treated (P = .3525). However, the HAL group had a favorable recurrence-free survival compared with the white light group when multifocal tumors (P < .001), primary tumors (P = .0237), recurrent tumors (P = .0189), nonaggressive (papillary urothelial neoplasm of low malignant potential and low grade) tumors (P = .0204), or aggressive (high grade and carcinoma in situ) tumors (P = .0134) were treated. Conclusion: HAL significantly aids resection of non-muscle-invasive bladder cancer with the result of reduction in tumor recurrence rates. abstract_id: PUBMED:27500197 Systematic Review and Meta-Analysis on the Impact of Hexaminolevulinate- Versus White-Light Guided Transurethral Bladder Tumor Resection on Progression in Non-Muscle Invasive Bladder Cancer. Introduction: Although there is evidence that hexaminolevulinate (HAL)-based transurethral bladder tumor resection (TURBT) improves the detection of Ta-T1 non-muscle-invasive bladder cancer (NMIBC) as well as carcinoma in situ there is uncertainty about its beneficial effects on progression. Material and Methods: A systematic literature search was conducted according to the PRISMA statement to identify studies reporting on HAL- vs. white-light (WL-) based TUR-BT in non-muscle invasive bladder cancer between 2000 and 2016. A two-stage selection process was utilized to determine eligible studies. Of a total of 294 studies, 5 (4 randomized and one retrospective) were considered for final analysis. The primary objective was the rate of progression. Results: The median follow-up for patients treated with HAL- and WL-TURBT was 27.6 (1-55.1) and 28.9 (1-53) months, respectively. Of a total of 1301 patients, 644 underwent HAL- and 657 WL-based TURBT. Progression was reported in 44 of 644 patients (6.8%) with HAL- and 70 of 657 patients (10.7%) with WL-TURBT, respectively (median odds ratio: 1.64, 1.10-2.45 for HAL vs. WL; p = 0.01). Data on progression-free survival was reported in a single study with a trend towards improved survival for patients treated with HAL-TURBT (p = 0.05). Conclusions: In this meta-analysis the rate of progression was significantly lower in patients treated with HAL- vs. WL-based TURBT. These results support the initiation of randomized trials on HAL with progression as primary endpoint. abstract_id: PUBMED:31104952 Is a routine second transurethral resection of the bladder still necessary after hexaminolévulinate photodynamic diagnosis-assisted TURBT? Objectives: The aim of our study was to assess the impact of blue light cystoscopy with hexaminolevulinate on residual tumor rates at second-look transurethral resection of the bladder (TURB). Material And Methods: Among all patients undergoing TURB in our center between 2012 and 2017, 52 patients had a second-look after a first complete TURB with a delay<3months. We compare patients with standard white light cystoscopy/TURB then second-look blue light cystoscopy/re-TURB (group A, n=30) and patients with blue light cystoscopy/TURB at the initial procedure then white light cystoscopy/re-TURB (group B, n=22). The residual tumor rates at second-look, restaging and changing in therapeutic strategy, as well as recurrence free survival and progression rate were compared. Results: Residual tumor at the time of second-look cystoscopy was detected in 42.3% of cases in our cohort, with a significant difference between the two groups (63.3% in group A versus 0% in group B, <0.001). In group A, 16.7% (5/30) of patients had upstaging and/or upgrading at second-look cystoscopy, resulting in a change in therapeutic strategy in most cases (4/5) while none upstaging was observed in group B. In multivariate analysis, the use of luminofluorescence at the first TURB was the only independent predictive factor of residual tumor (P=0.0031). Conclusion: The quality of the initial TURB, when performed by using blue light cystoscopy, had a significant impact on the rate of residual tumor at the second-look resection and could modify therapeutic strategy of NMIBC. Level Of Evidence: 4. abstract_id: PUBMED:25605343 Cost-effectiveness of transurethral resection of the bladder with blue light in patients with non muscle invasive bladder cancer in France Objective: Photodynamic diagnosis after instillation of hexylaminolevulinate (Hexvix(®)) during transurethral resection of the bladder (TURB) helps in the detection of tumors and results in a reduction of recurrence. The medical and economic impact of fluorescence compared to conventional white light TURB needed to be analyzed in the French healthcare system. The aim of this study was to evaluate the medical and economic impact of the blue light TURB in the treatment of NMIBC. Materials And Methods: A cost-utility model, based on data from the literature and expert opinions, combining a decision tree and a Markov model was used to simulate the initial management after a first TURB of all new patients diagnosed with symptoms consistent with NMIBC and outcomes at short and long terms. In this model, the initial TURB could be achieved either with fluorescence in addition to white light, or with white light only. The main criteria of the model was based on the quality adjusted life years (QALY). The economic evaluation focused on the direct costs. The test's results and costs were determined from diagnosis until death of patients. Results: The use of photodynamic diagnosis during TURB resulted in an improvement in QALYs (0.075) and a reduction of € 670 of the costs compared to the conventional treatment with white light. Thus, the blue light resection was defined as a strategy called "dominant" over the TURB in white light. Conclusion: In the context of the French health system, the model of the study showed that the blue light cystoscopy during TURB was associated with increased QALYs and reduced health spending. This kind of result is rare in oncology. This health economic analysis confirms the interest of hexylaminolevulinate acid in initial management of NMIBC, according to studies conducted in United Kingdom, Italy and Poland. abstract_id: PUBMED:26305667 Budget impact of incorporating one instillation of hexaminolevulinate hydrochloride blue-light cytoscopy in transurethral bladder tumour resection for patients with non-muscle-invasive bladder cancer in Sweden. Objectives: To explore the cost impact on Swedish healthcare of incorporating one instillation of hexaminolevulinate hydrochloride (HAL) blue-light cystoscopy into transurethral resection of bladder tumour (TURBT) in patients with suspected new or recurrent non-muscle-invasive bladder cancer (NMIBC). Materials And Methods: A decision tree model was built based on European Association of Urology guidelines for the treatment and management of NMIBC. Input data were compiled from two recent studies comparing recurrence rates of bladder cancer in patients undergoing TURBT with either the current standard of care (SOC) of white-light cystoscopy, or with the SOC and HAL blue-light cystoscopy. Using these published data with clinical cost data for surgical and outpatient procedures and pharmaceutical costs, the model reported on the clinical and economic differences associated with the two treatment options. Results: This model demonstrates the significant clinical benefits likely to be observed through the incorporation of HAL blue-light cystoscopy for TURBT in terms of reductions in recurrences of bladder cancer. Analysis of economic outputs of the model found that the use of one instillation of HAL for TURBT in all Swedish patients with NMIBC is likely to be cost-neutral or cost-saving over 5 years relative to the current SOC of white-light cystoscopy. Conclusions: The results of this analysis provide additional health economic rationale for the incorporation of a single instillation of HAL blue-light cystoscopy for TURBT in the treatment of patients with NMIBC in Sweden. abstract_id: PUBMED:31976152 Optimizing Diagnosis of Urothelial Bladder Cancer with Blue Light Cystoscopy via Recognition of False-Positive Lesions. Introduction: Blue light cystoscopy (BLC) using hexaminolevulinate (Cysview®) improves the detection of nonmuscle invasive bladder cancer (NMIBC).1-3 BLC results in lower recurrence rate and a better recurrence-free survival, as well as a progression benefit.4 However, false-positive (FP) fluorescence can occur for various reasons and can vary among different series. Studies have shown that FP rates are not significantly different from white light (WL) cystoscopy. We evaluated different scenarios producing FP in BLC. Methods: Under institutional review board approval, we prospectively enrolled consecutive patients undergoing transurethral resection of bladder lesions into a BLC registry between April 2014 and December 2016. Several cases are highlighted in the video demonstrating cystoscopic view under WL and blue light in specific circumstances increasing the chance of detecting an FP lesion. Results: BLC with Cysview is demonstrated in several challenging cases for the detection of NMIBC. Possible FP scenarios include tangential views of the bladder neck or side walls (1) trigone, trabeculations, or diverticula; (2) in setting of inflammation like cystitis; (3) postintravesical therapy, that is, <6 weeks interval from prior bacillus Calmette-Guérin (BCG); (4) prior resection within 6 weeks; (5) bright tiny spots; and (6) site of ureterectomy/bladder cuff resection, early fading lesions (after irrigation). Unnecessary biopsy of these lesions can be avoided through simple techniques such as changing the angle of the cystoscopic view, several rounds of irrigation, and avoiding BLC too early after BCG instillation or prior resection. Conclusions: Use of BLC with Cysview can help with the detection of NMIBC as well as carcinoma in situ in patients undergoing transurethral resection of bladder tumor for bladder cancer. The reported FP rates of BLC will decrease with experience and recognition of the mentioned scenarios. Prior presentation: None. No competing financial interests exist. Runtime of video: 7 mins 16 secs. abstract_id: PUBMED:24053153 Prospective randomized trial of hexylaminolevulinate photodynamic-assisted transurethral resection of bladder tumour (TURBT) plus single-shot intravesical mitomycin C vs conventional white-light TURBT plus mitomycin C in newly presenting non-muscle-invasive bladder cancer. Objective: To determine if photodynamic 'blue-light'-assisted resection leads to lower recurrence rates in newly presenting non-muscle-invasive bladder cancer (NMIBC). Patients And Methods: We conducted a prospective randomized trial of hexylaminolevulinate (HAL) photodynamic diagnosis (PDD)-assisted transurethral resection of bladder tumour (TURBT) plus single-shot intravesical mitomycin C vs standard white-light-assisted TURBT plus single-shot intravesical mitomycin C. A total of 249 patients with newly presenting suspected NMIBC enrolled at Guy's Hospital between March 2005 and April 2010. Patients with a history of bladder cancer were excluded. The surgery was performed by specialist bladder cancer surgical teams. Of the eligible patients, 90% agreed to be randomized. Results: Of the 249 patients, 209 (84%) had cancer and in 185 patients (89%) the cancer was diagnosed as NMIBC. There were no adverse events related to HAL in any of the patients randomized to the intravesical HAL-PDD arm. Single-shot intravesical mitomycin C was administered to 61/97 patients (63%) in the HAL-PDD arm compared with 68/88 patients (77%) in the white-light arm (P = 0.04) Intravesical HAL was an effective diagnostic tool for occult carcinoma in situ (CIS). Secondary CIS was identified in 25/97 patients (26%) in the HAL-PDD arm compared with 12/88 patients (14%) in the white-light arm ((P = 0.04) There was no significant difference in recurrence between the two arms at 3 or 12 months: in the HAL-PDD and the white-light arms recurrence was found in 17/86 and 14/82 patients (20 vs 17%), respectively ((P = 0.7) at 3 months, and in 10/63 and 15/67 patients (16 vs 22%), respectively ((P = 0.4) at 12 months. Conclusions: Despite HAL-PDD offering a more accurate diagnostic assessment of a bladder tumour, in this trial we did not show that this led to lower recurrence rates of newly presenting NMIBC compared with the best current standard of care. abstract_id: PUBMED:33626963 Primary Complete Transurethral Resection of Bladder Tumor Using Photodynamic Diagnosis for High-Risk Nonmuscle Invasive Bladder Cancer: Is a Restaging Photodynamic Transurethral Resection Really Necessary? Objectives: To evaluate the risk of residual tumor and tumor upstaging during a second resection after primary complete transurethral resection of bladder tumor (TURBT) using photodynamic diagnosis (PDD) for high-risk nonmuscle invasive bladder cancer (NMIBC). Patients and Methods: From January 2014 to March 2020, a single-institutional study was conducted including consecutive patients with high-risk NMIBC (T1 and/or cis and/or high grade) who underwent a restaging transurethral resection (reTUR) within 12 weeks after a primary complete resection. Each TURBT was performed using blue light after intravesical instillation of hexaminolevulinate. The primary endpoint was detection of residual tumor at reTUR, proved with positive pathology report. Results: A total of 109 consecutive patients with high-risk NMIBC underwent reTUR after a primary complete blue light resection. Pathologic evaluation of the surgical specimens of the primary TURBT revealed stage T1 and high-grade tumors in 69 (68.3%) and 108 (99%) patients, respectively, and concomitant carcinoma in situ was found in 45 patients (41.3%). The median time to reTUR was 8 (6-10) weeks. Residual tumor was detected histopathologically in 64 of 109 patients (58.7%) at the second TURBT with PDD. In five of these patients (4.5%), initial T1 tumors were upstaged to T2 tumors. Conclusions: We examined a contemporary series of patients undergoing reTUR with PDD as management of high-risk NMIBC proven at the first blue light resection. We reported a 54.2% risk of disease persistence and a 4.5% risk of understaging in T1 tumors. These findings support that reTUR is still necessary after initial complete TURBT with PDD. Further studies are needed to assess the long-term oncologic outcomes of reTUR with PDD. abstract_id: PUBMED:19473068 Hexaminolevulinate fluorescence cystoscopy and transurethral resection of the bladder in noninvasive bladder tumors. Purpose: We compared hexaminolevulinate (HAL) blue light cystoscopy (BLC) with standard white light cystoscopy (WLC) in the diagnosis of noninvasive bladder tumors (NIBT) and analyzed the efficiency of blue light transurethral resection of the bladder (BL-TURB). Materials And Methods: WLC and BLC were performed in 64 cases. BL-TURB was applied for lesions that were detected only by BLC. Patients who received a diagnosis of NIBT were followed up after 18 weeks by WLC and BLC. The control group included the same number of consecutive patients with NIBT, diagnosed only by WLC and treated by WL-TURB. Results: WLC correctly diagnosed 66.4%, with a 9.7% rate of false-positive results, while BLC diagnosed 95.7%, but had a 13.5% rate of false-positive results. The study group had a recurrence rate of 5.3% vs 23.2% in the control group. Conclusions: HAL fluorescence cystoscopy is a valuable diagnostic method, with considerably improved accuracy. BL-TURB represents a superior treatment modality, with significant impact on the recurrence rate. abstract_id: PUBMED:37636209 Blue light transurethral resection and biopsy of bladder cancer with hexaminolevulinate: Histopathological characteristics and recurrence rates in a single UK centre study. Introduction: Blue light cystoscopy with hexaminolevulinate (HAL) during transurethral resection of bladder cancer (TURBT) has been shown to improve detection, thereby reducing bladder cancer recurrence compared with white light cystoscopy. Methods: Single-centred UK (United Kingdom) study on 101 patients who underwent blue light cystoscopy between July 2017 and November 2020, performed by a single surgeon. Our study was divided into two arms; the primary arm had no prior diagnosis of bladder malignancy (N = 41), whereas secondary re-resection arm had (N = 57). Three patients with non-urothelial bladder cancer were excluded. Patients were followed up for 24 months. Data were collected on biopsy quality, histopathological characteristics and recurrence. The end points of the study were recurrence rate at 24 months in both arms and detection of CIS in patients who undergo TURBT or biopsy after initial white light study in the secondary, re-resection arm. This was analysed with Fisher's exact test. Results: Of 98 patients, 39 had malignancy in their first blue light TURBT/biopsy: primary arm (10/41, 24.4%) and secondary arm (29/57, 50.9%), with detrusor present in 80.5% and 80.7%, respectively. In the secondary arm, blue light re-resection TURBT detected significantly more CIS (20.7% vs 51.7%, p = 0.0277) compared with white light with 3.4% upstaged to muscle invasive bladder cancer (G3pT2). Median time to re-resection was 3.06 months. Recurrence rate was 33.3% in the primary arm and 37.5% in the secondary arm after 24 months of follow-up. Conclusion: Our data confirm that blue light TURBT with HAL provides superior detection and diagnosis of CIS in patients with previous white light cystoscopy. Answer: Yes, hexaminolevulinate (HAL) induced fluorescence during transurethral resection of noninvasive bladder tumor does reduce recurrences. Studies have shown that the use of HAL during resection results in a significant reduction in tumor recurrence rates compared to resection under conventional white light. For instance, one study reported that cystoscopy at 3 months revealed tumor recurrence in only 2.4% of patients in the HAL group compared to 13.3% in the white light group, with recurrence-free rates at 12 and 18 months being significantly higher in the HAL group (91% and 82.5%, respectively) compared to the white light group (56.3% and 50.6%, respectively) (PUBMED:22857752). A systematic review and meta-analysis also found that the rate of progression was significantly lower in patients treated with HAL- versus white-light-based transurethral bladder tumor resection (TURBT), supporting the initiation of randomized trials on HAL with progression as the primary endpoint (PUBMED:27500197). Additionally, the use of HAL during the initial TURBT significantly impacted the rate of residual tumor at the second-look resection and could modify the therapeutic strategy of non-muscle invasive bladder cancer (NMIBC) (PUBMED:31104952). Furthermore, health economic analyses have shown that the use of HAL during TURBT is associated with increased quality-adjusted life years (QALYs) and reduced health spending, making it a cost-effective strategy in the treatment of NMIBC (PUBMED:25605343, PUBMED:26305667). However, it is important to note that while HAL significantly aids in the resection of non-muscle-invasive bladder cancer and reduces tumor recurrence rates, there are scenarios where false-positive fluorescence can occur, and recognition of these scenarios can help decrease false-positive rates (PUBMED:33626963). Despite these considerations, the overall evidence supports the use of HAL-induced fluorescence cystoscopy as an effective method to reduce recurrences in the treatment of noninvasive bladder tumors.
Instruction: Does mode of hysterectomy influence micturition and defecation? Abstracts: abstract_id: PUBMED:11580740 Does mode of hysterectomy influence micturition and defecation? Objective: Hysterectomy may affect bladder and bowel function. A retrospective study was performed to compare the prevalence of micturition and defecation symptoms between different modes of hysterectomy. Methods: All pre-operatively asymptomatic patients, with uteral size < or =10 cm, who underwent hysterectomy between 1988 and 1997 were interviewed about the prevalence of micturition and defecation symptoms and the experienced physical and emotional limitations of these symptoms. Using logistic regression analysis, odds ratios (OR) were calculated for all symptoms of which the prevalence between modes of hysterectomy differed more than 10%. These odds ratios were adjusted for differences in other prognostic factors. Results: Vaginal hysterectomy was performed on 68 patients, total abdominal hysterectomy on 109 patients and subtotal abdominal hysterectomy on 50 patients. An increased prevalence of urge incontinence (adjusted OR 1.5 (95% CI 0.8-3.1)) and feeling of incomplete evacuation (adjusted OR 1.9 (95% CI 1.0-4.0)) was observed among patients who had undergone vaginal hysterectomy as compared to patients who had undergone total abdominal hysterectomy. The prevalence of urge incontinence (adjusted OR 1.8 (95% CI 0.8-4.2)) and difficulty emptying the rectum (adjusted OR 1.8 (95% CI 0.7-4.4)) was higher among patients who had undergone vaginal hysterectomy than among patients who had undergone subtotal abdominal hysterectomy. Statistically significant odds ratios were not observed. Relevant differences in physical and emotional limitations related to micturition and defecation symptoms were not observed between groups. Conclusion: Our results suggest that technique of hysterectomy may influence the prevalence of micturition and defecation symptoms following hysterectomy. abstract_id: PUBMED:21864324 A long-term prospective study to compare the effects of vaginal and abdominal hysterectomy on micturition and defecation. Objective: We compared micturition and defecation symptoms with the route of hysterectomy over a period of 10 years. Design: Prospective multicentre observational study. Setting: Thirteen teaching hospitals in the Netherlands. Population: A total of 430 women who underwent vaginal or abdominal hysterectomy for benign conditions (pelvic organ prolapse excluded). Methods: The presence and discomfort experienced as a result of micturition and defecation symptoms were assessed using validated questionnaires before surgery, and at 6 weeks, 6 months, 1 year, 3 years and 10 years after surgery. Statistically significant differences in symptoms between vaginal and abdominal hysterectomy were adjusted for pre-operative differences in uterine descent, uterine size, parity and indication for hysterectomy. Main Outcome Measure: Presence of bothersome micturition and defecation symptoms, as assessed by the Urogenital Distress Inventory (UDI) and Defecation Distress Inventory (DDI). Results: Ten years after hysterectomy the response rate was 73%. Preoperatively, no differences were observed in the prevalence of micturition symptoms between patients who underwent vaginal and abdominal hysterectomy. However, 10 years after vaginal hysterectomy, significantly more women had been treated for micturition symptoms (18 versus 8%; P = 0.02; adjusted OR 3.8, 95% CI 1.2-11.6). Defecation symptoms also seemed more common after vaginal hysterectomy (58 versus 46%; P = 0.08). After adjustment, no statistically significant differences in defecation symptoms were found. Conclusions: Despite the same incidence of micturition symptoms before surgery, patients undergoing vaginal hysterectomy are more likely to seek medical help for micturition symptoms. Defecation symptoms were also more common after vaginal hysterectomy; however, this difference was not statistically significant. abstract_id: PUBMED:25796770 Defecation after colic transmural migration of a gauze pad left behind during abdomino-pelvic surgery We report a case of complete transmural migration through the colic wall, followed by spontaneous defecation, of a surgical sponge left behind in the abdominal cavity (gossypiboma) during a vaginal hysterectomy with oophorectomy. The process results from encapsulation of the surgical sponge with fistulization through the colic wall. No functional damage followed this defecation because either the fistula closed early, or the sponge remained temporarily entrapped in a colic fold. The diagnosis was based on echography and computed tomography. abstract_id: PUBMED:18340532 Obstructive defecation in middle-aged women. Obstructive defecation, a significant contributor to constipation, is frequently reported in middle-aged women, yet few population-based studies have established prevalence in this group. We analyzed data from the Reproductive Risks for Incontinence Study at Kaiser, a population-based cohort of racially diverse women, 40-69 years old, to describe the prevalence of obstructive defecation and identify associated risk factors. The Reproductive Risks for Incontinence Study at Kaiser is a randomly selected cohort of 2,109 women in the Kaiser Medical System. Obstructive defecation, determined by self-report, was defined as difficulty in passing stool, hard stool, straining for more than 15 min, or incomplete evacuation, occurring at least weekly. Age, race, income, education, drinking, health status, parity, pelvic organ prolapse, urinary incontinence, number of medications, hysterectomy, surgery for pelvic organ prolapse, colectomy, irritable bowel syndrome, and body mass index were assessed for both their univariate and multivariate association with obstructive defecation. Multivariate logistic regression was used to determine the independent association between associated factors and the primary outcome of obstructive defecation. Obstructive defecation that occurred at least weekly was reported by 12.3% of women. Significant independent risk factors included irritable bowel syndrome [odds ratio 1.78, (95% confidence interval 1.21-2.60)], vaginal or laparoscopic hysterectomy [2.01 (1.15-3.54)], unemployment [2.33 (1.39-3.92)], using three or more medications [1.81 (1.36-2.42)], symptomatic pelvic organ prolapse [2.34 (1.47-3.71)], urinary incontinence surgery [2.52 (1.29-4.90)], and other pelvic surgery [1.35 (1.03-1.78)]. We concluded that obstructive defecation is common in middle-aged women, especially those with a history of treatment for pelvic floor conditions. Women who had undergone laparoscopic/vaginal hysterectomies or surgery for pelvic organ prolapse or urinary incontinence had a nearly two times greater risk of weekly obstructive defecation. Demographic factors, with the exception of employment status, were not significant, indicating that obstructive defecation, although widespread, does not affect any particular group of women. abstract_id: PUBMED:17583093 Defecation complaints after hysterectomy because of a benign condition are rare; a prospective study Objective: To determine the frequency of defecation symptoms after hysterectomy. Design: Prospective multicentre study, 3 year follow-up. Methods: 404 patients from 13 teaching or non-teaching hospitals in The Netherlands, who underwent hysterectomy for benign disease other than symptomatic uterine prolapse or known endometriosis, were asked to complete the defecation distress inventory before and 3 years after surgery. The defecation distress inventory is a validated disease specific quality of life questionnaire for assessment of the presence and experienced inconvenience of defecation symptoms. Results: A response rate of 328/372 (88%) of 404 patients whose address could be recovered after 3 years was found. De novo constipation occurred in 2% of the patients following hysterectomy. Sensation of anal blockage and incomplete defecation occurred in more than 15% of the patients. Defecation symptoms reported before surgery had persisted in about half of the patients at 3 years after hysterectomy. A feeling of incomplete defecation at 3 years after hysterectomy was more common after subtotal hysterectomy than after total or vaginal hysterectomy (adjusted odds ratio: 2.1 (95% CI: 1.1-3.8) and 1.4 (95% CI: 0.7-2.7), respectively). Conclusion: Defecation symptoms seldom developed after hysterectomy for benign disease. The type of operation did not play a role. abstract_id: PUBMED:19609740 An unusual cause of obstructed defecation. Obstructed defecation is one subtype of constipation, and may be due to functional or mechanical causes. Here, we report an unusual cause, never described before, of obstructed defecation due to a large uterine myoma that reverted to normal bowel habits after surgery. The importance of an accurate evaluation of the causes of constipation is highlighted, to recognize potential curable factors. abstract_id: PUBMED:35572488 Reappraising the Role of Enterocele in the Obstructed Defecation Syndrome: Is Radiological Impaired Rectal Emptying Significant in Enterocele? Objectives: The role of enterocele in the obstructed defecation syndrome (ODS) has remained to be controversial, as patients with enterocele frequently exhibit multiple risk factors, including aging, parity, concomitant different abnormalities, previous histories of pelvic surgery, and incomplete emptying of the rectum. Thus, in this study, we aimed to investigate the association between enterocele and ODS using multivariate analysis. Methods: Between June 2013 and June 2021, 336 women underwent defecography as they had symptoms of ODS. Of those, 293 women (87%) who had anatomical abnormalities were included in this study. Results: Enterocele was detected in 104 (36%) patients. More women with enterocele had histories of hysterectomy compared to those without enterocele (29% vs. 10%, P < 0.0001). The frequency of radiological incomplete emptying was found to be significantly lower in women with enterocele (36%) than in those without enterocele (50%), whereas the mean (95% confidence interval) ODS scores in women with enterocele were significantly higher than those without enterocele [12.1 (11.0-13.3) versus 10.8 (10.5-11.5), P = 0.023]. As per the results of our multivariate analysis, it was determined that the presence of enterocele was associated with higher ODS scores (P = 0.028). However, the small differences in the mean score (1.3) would be clinically negligible. The specific radiological type of enterocele which compressed the rectal ampulla at the beginning of defecation was not associated with the increased ODS scores. Conclusions: The presence of enterocele may not be a primary cause of ODS. Other anatomical abnormalities combined with enterocele, or the hernia itself, may have a role in causing ODS. abstract_id: PUBMED:17714680 Symptoms of anal incontinence and difficult defecation among women with prolapse and a matched control cohort. Objective: The purpose of this study was to quantify the risk for anal incontinence and difficult defecation among women with prolapse by comparing them with women without prolapse of similar age, body mass index, race, and hysterectomy status, and to determine whether there are characteristics or findings in women with prolapse that are associated with greater symptom severity. Study Design: Women with primary pelvic organ prolapse (n = 151) were compared with women without prolapse (n = 135). All subjects underwent pelvic examination and completed symptom questionnaires regarding how frequently anal incontinence and difficult defecation were experienced. Incontinence of flatus was considered to be present if it occurred on "most" or "every" day; difficult defecation was considered to be present if it was experienced with "most" or "every" bowel movement. Symptoms that occurred "on occasion" or "never" were considered to be absent. Results: Incontinence of flatus was reported by 23.1% of cases vs 8.3% of control subjects (P = .006). Incontinence of liquid or solid stool was present in 4.7% and 3.5%, respectively, and was not reported by control subjects (P < .001 and .009, respectively). Difficult defecation, which was characterized by pushing on the vaginal walls to complete defecation, was present in 19.7% vs 4.4% of control subjects (P = .001). Cases that reported symptoms were compared with those that did not report symptoms. Among those reporting difficult defecation, the length of the perineal body length was greater when straining (4.0 vs 3.4 cm; P = .020). Among those reporting incontinence of flatus, mean parity was higher (3.3 vs 2.5; P = .012), and a positive standing cough stress test was more likely (39.3% vs 18.5%; P = .025). Symptoms of anal incontinence and/or difficult defecation were present in 35.3% of subjects (52/147). Conclusion: Women with prolapse are more likely than control subjects to have symptoms of anal incontinence or difficult defecation; approximately one-third of these women will have symptoms. abstract_id: PUBMED:26611759 The 'bother' of obstructed defecation. Objective: To examine the relationship of visual analog scale (VAS) 'bother' scores for obstructed defecation (OD) with demographic data, physical examination and sonographic findings of the posterior compartment. Methods: All patients seen at a urogynecology clinic between January and October 2013 were included. Patients were diagnosed with OD if they had any of the following: incomplete bowel emptying, straining with bowel movement or need for digitation. Patients used a VAS to rate OD bother on a scale of 0-10 (0, no bother; 10, worst imaginable bother). For each patient, a comprehensive history was obtained, the International Continence Society Pelvic Organ Prolapse Quantification was performed and four-dimensional translabial ultrasound volumes were recorded on maximal Valsalva maneuver. Linear and multiple regression models were used to correlate bother VAS scores with demographic, clinical and sonographic findings. Results: Among 265 patients included in the analysis, 61% had OD symptoms with a mean VAS bother score of 5.6. OD bother scores were associated with a history of previous prolapse surgery (P = 0.0001), previous hysterectomy (P = 0.0006), descent of the posterior compartment (Bp; P = 0.004) and hiatal dimensions (Pb and Gh + Pb; P = 0.006 and P = 0.004). OD bother was associated with the following sonographic findings: true rectocele (P = 0.01), depth of rectocele (P = 0.04), descent of rectal ampulla (P = 0.02), enterocele (P = 0.03) and rectal intussusception (P < 0.0001). Conclusions: VAS bother scores are associated with both clinical and sonographic measures of posterior compartment descent. Rectal intussusception was most likely to result in highly bothersome symptoms of OD. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd. abstract_id: PUBMED:22728020 Impact of cervical cancer treatment on micturition and sexual function Context: Cervical cancer is the second most common tumor in women worldwide and due to diagnostic and therapeutic advances, the overall survival rates at 5 years is approaching 70%. Disorders in micturition, defecation, sexuality and quality of life have been described, frequently caused by different treatments. Addressing these comorbidities in the medical follow-up is often limited or nonexistent. Methods: A systematic review of studies to identify the articles related with urogynecological sequels from cervical cancer treatment was carried out. Summary Of Evidence: During radical hysterectomy, disruption of the autonomic nerve fibers which innervate the bladder appears to be the main cause of voiding dysfunction. Up to 36% of women report voiding dysfunction; from 10 to 80%, stress urinary incontinence (SUI), due to the decrease in urethral closure pressure. After radical hysterectomy and/or radiotherapy, vaginal shortening and stenosis after is often observed. Sexual function is altered in these women and those who are sexually active women after the surgery frequently report sexual dysfunction due to lack of lubrication and pain. Conclusions: Voiding dysfunction and urinary incontinence are the most frequent urinary problems that occur in patients treated for cervical cancer. Systemic urogynecologic assessment of the symptoms suggestive of micturition dysfunctions during oncologic follow-up may be useful to detect the cases that can be evaluated and treated in an Urogynecology Unit. Answer: Yes, the mode of hysterectomy does appear to influence micturition and defecation. Studies have shown that different techniques of hysterectomy may have varying impacts on bladder and bowel function post-surgery. For instance, a retrospective study found that patients who underwent vaginal hysterectomy had an increased prevalence of urge incontinence and feeling of incomplete evacuation compared to those who had total abdominal hysterectomy. Similarly, the prevalence of urge incontinence and difficulty emptying the rectum was higher among patients who had undergone vaginal hysterectomy than among patients who had undergone subtotal abdominal hysterectomy, although the observed odds ratios were not statistically significant (PUBMED:11580740). A long-term prospective study also indicated that 10 years after vaginal hysterectomy, significantly more women had been treated for micturition symptoms compared to those who had abdominal hysterectomy. Defecation symptoms also seemed more common after vaginal hysterectomy, but the difference was not statistically significant after adjustment (PUBMED:21864324). Another prospective study found that defecation complaints after hysterectomy for benign conditions were rare and that the type of operation did not play a significant role in the development of defecation symptoms (PUBMED:17583093). In summary, while the evidence suggests that vaginal hysterectomy may be associated with a higher prevalence of certain micturition and defecation symptoms compared to abdominal hysterectomy, the differences are not always statistically significant, and the overall incidence of such symptoms post-hysterectomy is relatively low. It is important to note that individual patient factors and the presence of other pelvic floor disorders may also contribute to these symptoms.
Instruction: The effect of a six-week program of yoga and meditation on brachial artery reactivity: do psychosocial interventions affect vascular tone? Abstracts: abstract_id: PUBMED:17007170 The effect of a six-week program of yoga and meditation on brachial artery reactivity: do psychosocial interventions affect vascular tone? Background: Chronic stress is estimated to increase the risk of cardiovascular (CV) events two-fold. Although stress reduction has been linked to a reduction in CV events, little is known regarding its exact mechanism of benefit. Hypothesis: Yoga and meditation will improve parameters of endothelial function. Methods: We examined the effects of yoga and meditation on hemodynamic and laboratory parameters as well as on endothelial function in a 6-week pilot study. Systolic and diastolic blood pressures, heart rate, body mass index (BMI), fasting glucose, lipids, hs C-reactive protein (CRP), and endothelial function (as assessed by brachial artery reactivity) were all studied at baseline and after 6 weeks of yoga practice. Results: A course in yoga and meditation was given to the subjects for 1.5 h three times weekly for 6 weeks and subjects were instructed to continue their efforts at home. This prospective cohort study included 33 subjects (mean age 55 +/- 11 years) both with (30%) and without (70%) established coronary artery disease (CAD). There were significant reductions in blood pressure, heart rate, and BMI in the total cohort with yoga. None of the laboratory parameters changed significantly with yoga. For the total cohort there was no significant improvement in endothelial-dependent vasodilatation with yoga training and meditation compared with baseline (16.7% relative improvement from 7.2-8.4%; p = 0.3). In the group with CAD, endothelial-dependent vasodilatation improved 69% with yoga training (6.38-10.78%; p = 0.09). Conclusion: Yoga and meditation appear to improve endothelial function in subjects with CAD. abstract_id: PUBMED:36239640 Effect of Practicing Meditation, Pranayama, and Yoga on the Mental Health of Female Undergraduate Medical Students: An Interventional Study. Background Medical education is a rigorous formal education with a vast syllabus. Burnout and stresses are common among medical students and more prominent in females. Female medical students suffer from a higher level of stress than male medical students. For the improvement of physical and mental health, meditation, pranayama, and yoga are some of the ancient techniques. Meditation is a technique of focusing the mind on a target like an object, activity, or any thought. Pranayama is an ancient yogic practice focusing on the breath. Yoga is a combination of physical, mental, and spiritual dimensions that has the potential to improve mental and physical health. Objective This study aimed to find the effect of meditation, pranayama, and yoga on the improvement of mental health among female undergraduate medical students. Materials and methods This was an interventional study. A total of 105 females with a median age of 19 years (first quartile - third quartile: 18-20) first-year undergraduate medical students were recruited for this study. They were randomly allocated to control, meditation, pranayama, and yoga groups. The control group did not practice any form of meditation, pranayama, or yoga. The rest of the group practiced a designated program for their group, six days a week for 12 weeks. The anxiety, depression, anger, and sense of well-being were assessed by a validated self-administered questionnaire developed by the Defence Institute of Physiology and Allied Sciences, New Delhi before starting the study, at six weeks, and at 12 weeks after the intervention. Inter-group levels of anxiety, depression, anger, and well-being were compared by the Kruskal-Wallis test with Dunn's posthoc test. Intra-group parameters at baseline, at six weeks, and at 12 weeks after the intervention was tested by Friedman's test. Result The age (years) (p = 0.07), height (cm) (p = 0.98), and weight (kg) (p = 0.26) of participants among groups were similar. Anxiety, depression, and anger significantly decreased after six weeks in all three intervention groups. A further decrement was seen after 12 weeks of meditation, pranayama, and yoga. The maximum effect was seen in the yoga group. A sense of well-being was improved after practicing all types of interventions. However, meditation was found to increase a sense of well-being to the highest level compared to pranayama and yoga. Conclusion Introduction and sustainment of meditation, pranayama, and yoga programs for first-year female undergraduate medical students may help reduce anxiety, depression, and anger and promote a sense of well-being. Although a six-week program helps to improve mental health, a 12-week program helps in further improvement. A yoga program is more effective for improving the mental health of the students in comparison with pranayama and meditation. abstract_id: PUBMED:35047097 The effects of yoga and mindful meditation on elderly care worker's burnout: a CONSORT-compliant randomized controlled trial. Objectives: This study aims to investigate the effects of mindful meditation and yoga on reducing burnout and stress in care workers who assist elderly individuals. Knowing how to reduce burnout is important because that of care workers is associated with the quality of client care, worker productivity, and job turnover. Patients and Methods: The participants included 44 care workers who worked for elderly care facilities in rural Fukuoka. They were randomly assigned to one of three intervention groups: control, yoga, or mindfulness. In the yoga intervention group, a certified yoga instructor taught a 60-minute yoga session each week for six weeks. In the mindfulness group, an experienced medical doctor instructed a mindful meditation program for the same length. Participants were asked to complete the Japanese Burnout Scale (JBS), and the research team collected the level of α-amylase in saliva using NIPRO: T-110-N pre- and post-interventions. Results: MANOVA was performed with each intervention (control, yoga, mindfulness) as the independent variable on the three subscales of the JBS (emotional exhaustion, depersonalization, and personal achievement) and a biomarker of stress level (α-amylase). The results indicated a significant main effect of interventions, and a follow-up ANOVA showed a significant effect of interventions on emotional exhaustion and personal achievement. Conclusion: The results indicate that practicing mindful meditation or yoga for 60 minutes once a week for six weeks can reduce care workers' burnout. This study was notable because the biomarker of stress also improved. It is strongly recommended and encouraged that institutions caring for the elderly population provide mindful meditation or yoga intervention to reduce burnout, which benefits not only care workers but also their clients. abstract_id: PUBMED:36087391 Methodological issues in conducting yoga- and meditation-based research: A narrative review and research implications. Yoga and meditation-based interventions have been extensively utilized in the field of contemporary complementary and alternative medicine for various physical and mental health conditions. Ongoing COVID-19 pandemic has rekindled the interest of researchers in yoga and meditation for its preventive and therapeutic utilities. However, the available literature in this area has several methodological concerns, limiting formers' clinical utility. A comprehensive literature on this topic would stimulate researchers and guide them to conduct research on this topic with robust methodologies. The current review highlights the methodological issues with the yoga and meditation-based Research (henceforth, MBR), discusses some of the contentious issues, and provides future directions. The PubMed, Medline, and google scholar databases were searched to screen records dealing with the methodological issues on MBR. The search yielded 299 records, upon screening, only 24 articles were found suitable for the current study. Common methodological issues with MBR: lack of the consensus definitions of the yoga and meditations, interventions lacking theoretical framework of meditation; inadequate description of the study design; difficulty with participants recruitment, setting up the control groups, and blinding; difficulty in assessing the baseline characteristics of the participants, and validity issues with the outcome measures. A few research, however, have also highlighted the potential measures to overcome these methodological challenges. Yoga and meditation-based interventions are promising for several health conditions. However, literature suffers from considerable methodological issues, thus, limiting its utility in modern clinical practice. The study findings can stimulate and guide future research on this topic. abstract_id: PUBMED:34040857 A naturalistic study of yoga, meditation, self-perceived stress, self-compassion, and mindfulness in college students. Objective: This study compared the effects of yoga and mindfulness meditation on self-compassion, mindfulness, and perceived stress in college students; and explored mind-body mechanisms and predictors of stress reduction. Participants: Student participants (N = 92) were enrolled in either yoga or mindfulness meditation classes at a college in the southern United States from August through May of 2015-2016. Methods: Students participated in 50-minute classes twice a week for 10 weeks, completing self-report questionnaires during the 1st and 10th week. Results: Multiple-linear regression analysis found change in self-compassion was the strongest predictor of stress reduction. Conclusions: Increasing self-compassion may increase the efficacy of mind-body interventions. Research into mind-body mechanisms is needed to identify intervention components that most improve student well-being. abstract_id: PUBMED:29350580 Development and Evaluation of a Yoga Intervention Program for Parkinson's Disease. Preliminary research indicates that yoga could be a valuable tool for people suffering from Parkinson's disease (PD). However, little has been published about the process by which the yoga interventions were designed and evaluated. This study elaborates on the process of developing and testing a bi-weekly, 12-week yoga program to determine its safety and feasibility for people with PD. The lead yoga teacher used input from a focused literature review to design an initial draft of the intervention program. This draft was reviewed by a group of yoga experts ( n = 6) to develop the final intervention program. This 12-week intervention was implemented in 19 participants with PD (mean age 63 ± 8, range 49-75) via twice-weekly yoga classes. Through this comprehensive development process, a series of 24 individual 1-hour yoga sequences was created. These sequences included yoga postures (asana), breathing techniques (pranayama), and mindfulness meditation principles specifically chosen to address concerns unique to the PD population. The feasibility of the program was supported with excellent attendance: 90% of participants attended > 75% of the classes, with four participants attending 100%. No adverse events were reported. This development process produced a safe and enjoyable yoga program specific for the needs of people with PD. However, this methodology could serve as a template for future studies on how to develop safe and effective yoga interventions for other populations. abstract_id: PUBMED:19657730 Yoga ameliorates performance anxiety and mood disturbance in young professional musicians. Yoga and meditation can alleviate stress, anxiety, mood disturbance, and musculoskeletal problems, and can enhance cognitive and physical performance. Professional musicians experience high levels of stress, performance anxiety, and debilitating performance-related musculoskeletal disorders (PRMDs). The goal of this controlled study was to evaluate the benefits of yoga and meditation for musicians. Young adult professional musicians who volunteered to participate in a 2-month program of yoga and meditation were randomized to a yoga lifestyle intervention group (n = 15) or to a group practicing yoga and meditation only (n = 15). Additional musicians were recruited to a no-practice control group (n = 15). Both yoga groups attended three Kripalu Yoga or meditation classes each week. The yoga lifestyle group also experienced weekly group practice and discussion sessions as part of their more immersive treatment. All participants completed baseline and end-program self-report questionnaires that evaluated music performance anxiety, mood, PRMDs, perceived stress, and sleep quality; many participants later completed a 1-year followup assessment using the same questionnaires. Both yoga groups showed a trend towards less music performance anxiety and significantly less general anxiety/tension, depression, and anger at end-program relative to controls, but showed no changes in PRMDs, stress, or sleep. Similar results in the two yoga groups, despite psychosocial differences in their interventions, suggest that the yoga and meditation techniques themselves may have mediated the improvements. Our results suggest that yoga and meditation techniques can reduce performance anxiety and mood disturbance in young professional musicians. abstract_id: PUBMED:19585761 Effect of Raja yoga meditation on the lipid profile of post-menopausal women. Background: Coronary artery disease is an important cause of death and disability among older women. Modification in lipid profile lowers the risk of coronary artery disease. It is claimed that yoga and transcendental meditation have a cholesterol lowering effect. This study was designed to assess the effect of raja yoga meditation of Brahmakumaris which is very simple to practice, on serum lipids in normal Indian women. Methods And Results: 49 normal female volunteers were the subjects. They were divided into pre-menopausal (n=23) and post-menopausal (n=26) groups. They were further divided into non-meditators (who had never done any kind of meditation), short-term meditators (meditating for 6 months to 5 years) and long-term meditators (meditating for more than 5 years). Lipid profile was assessed using their respective reagent sets. Serum cholesterol, triglyceride and low-density lipoprotein-cholesterol in nonmeditators were significantly more in post-menopausal women as compared to pre-menopausal women. Serum cholesterol and low density lipoprotein-cholesterol were significantly lowered in both short and long term meditators as compared to non-meditators in post-menopausal women. No significant difference was observed in lipid profile in pre-menopausal women. Conclusion: Raja yoga meditation lowered serum cholesterol and low-density lipoprotein-cholesterol in post-menopausal women thus reducing the risk of coronary artery disease in them. abstract_id: PUBMED:19735239 Yoga breathing, meditation, and longevity. Yoga breathing is an important part of health and spiritual practices in Indo-Tibetan traditions. Considered fundamental for the development of physical well-being, meditation, awareness, and enlightenment, it is both a form of meditation in itself and a preparation for deep meditation. Yoga breathing (pranayama) can rapidly bring the mind to the present moment and reduce stress. In this paper, we review data indicating how breath work can affect longevity mechanisms in some ways that overlap with meditation and in other ways that are different from, but that synergistically enhance, the effects of meditation. We also provide clinical evidence for the use of yoga breathing in the treatment of depression, anxiety, post-traumatic stress disorder, and for victims of mass disasters. By inducing stress resilience, breath work enables us to rapidly and compassionately relieve many forms of suffering. abstract_id: PUBMED:37065343 A Systematic Review of Yoga and Meditation for Attention-Deficit/Hyperactivity Disorder in Children. With attention-deficit/hyperactivity disorder (ADHD) being a prevalent disorder in childhood, it is essential to acknowledge the different adverse effects the disorder can have on the quality of life in children. Therefore, this systematic review focuses mainly on children. Medical therapy, especially stimulants, can have many side effects. Our systematic review aims to evaluate the potential of other non-medical treatment options for ADHD, such as yoga or meditation. We used PubMed and Google Scholar as databases for this systematic review. Using different combinations of medical subheadings (MeSH) and key terms, followed by the application of several inclusion/exclusion criteria and filters to narrow down our search. From an initial 51,675 articles, we selected 10 papers that passed our screening process and quality check to analyze in depth. Yoga and meditation positively affect various symptoms in children with ADHD, including attention, hyperactivity, and impulsive behavior. If done in family group sessions, it also benefited the parents and family dynamics, suggesting a potential option for family therapy. Furthermore, other psychological symptoms, such as anxiety or low self-esteem, appeared to be positively impacted by these interventions. Although yoga and meditation positively influenced children with ADHD, a more in-depth research is necessary with a more significant number of participants and over a more extended period of time. The results of the included studies suggest a substantial benefit. Nonetheless, as the number of studies is limited, at present yoga and meditation could be beneficial as supplemental therapy rather than being used alone as a therapy for ADHD. Answer: The study described in abstract PUBMED:17007170 investigated the effects of a six-week program of yoga and meditation on various parameters, including endothelial function as assessed by brachial artery reactivity. The study included 33 subjects, both with and without established coronary artery disease (CAD). The results showed significant reductions in blood pressure, heart rate, and body mass index (BMI) in the total cohort after the yoga program. However, there were no significant changes in laboratory parameters. For the total cohort, there was no significant improvement in endothelial-dependent vasodilation with yoga training and meditation compared with baseline. In contrast, in the group with CAD, endothelial-dependent vasodilation improved by 69% with yoga training, although this finding did not reach statistical significance (p = 0.09). The conclusion drawn from this study is that yoga and meditation appear to improve endothelial function in subjects with CAD, suggesting that psychosocial interventions such as yoga and meditation may have a beneficial effect on vascular tone, particularly in individuals with CAD.
Instruction: Are the Brookhill-Wilk patents impediments to market growth in cybersurgery? Abstracts: abstract_id: PUBMED:18275034 Are the Brookhill-Wilk patents impediments to market growth in cybersurgery? Background: In the past, many surgeons could practise their craft with little or no knowledge of patent law. But in the world of robotic and computerized surgery, this is increasingly a myopic approach, because the principle means of protecting high-tech surgical instruments is through the application of patent law. The issue is: does the Brookhill-Wilk patent, which covers the performance of remote robotic surgery, impede the growth of cybersurgery? Methods: Review of the Brookhill-Wilk patent and relevant law. Results: Patent law, which first took its form in the Middle Ages, attempts to balance the rewarding of innovation with the stifling of market growth. Using US patent law as a model, it would appear that the Brookhill-Wilk patent, a particular example of a medical process patent, could inhibit the growth of cybersurgery, as potential sums of money could be demanded by the patent holder from anyone who practises cybersurgery. However, two recent US Supreme Court cases appear to have seriously undermined the validity of a number of medical process patents, including the Brookhill-Wilk patent. Conclusion: Based on recent changes in patent law, it is not expected that Brookhill-Wilk patent will hinder the growth of cybersurgery. abstract_id: PUBMED:17520646 The legal and economic forces that will shape the international market for cybersurgery. Background: Despite the common use of medical devices most health care providers have little understanding how a device alters medical malpractice litigation. Such knowledge will be increasingly valuable as cybersurgery (i.e. remote robotic surgery) becomes routine. Methods: Review of the laws governing products and telecommunication liability. Results: Litigation after cybersurgery will be complex. In addition to being able to sue physicians and hospitals, patients who sustain an adverse outcome after cybersurgery will have the potential to sue the robotic manufacturer and telecommunication company. Robotics manufacturers can obtain virtual immunity from liability if they elected to place their devices on the market after obtaining [see text]360 per-market approval from the FDA. However, because [see text]360 pre-market approval is expensive and time consuming most medical devices on the market (including the robotic surgical instruments) do not have immunity to products liability. Consequently, after an adverse cybersurgical outcome a manufacturer of a robotic surgical instrument faces liability for failure to warn, design defects, and failure to properly manufacture. As for telecommunication providers, existing law provides them with immunity from liability. Conclusions: Litigation following cybersurgery will involve multiple defendants who are likely to use "finger pointing" defenses. Accordingly, there will be liability traps associated with providing cybersurgery. abstract_id: PUBMED:36637738 Robotics and cybersurgery in ophthalmology: a current perspective. Ophthalmology is one of the most enriched fields, allowing the domain of artificial intelligence to be part of its point of interest in scientific research. The requirement of specialized microscopes and visualization systems presents a challenge to adapting robotics in ocular surgery. Cyber-surgery has been used in other surgical specialties aided by Da Vinci robotic system. This study focuses on the current perspective of using robotics and cyber-surgery in ophthalmology and highlights factors limiting their progression. A review of literature was performed with the aid of Google Scholar, Pubmed, CINAHL, MEDLINE (N.H.S. Evidence), Cochrane, AMed, EMBASE, PsychINFO, SCOPUS, and Web of Science. Keywords: Cybersurgery, Telesurgery, ophthalmology robotics, Da Vinci robotic system, artificial intelligence in ophthalmology, training on robotic surgery, ethics of the use of robots in medicine, legal aspects, and economics of cybersurgery and robotics. 150 abstracts were reviewed for inclusion, and 68 articles focusing on ophthalmology were included for full-text review. Da Vinci Surgical System has been used to perform a pterygium repair in humans and was successful in ex vivo corneal, strabismus, amniotic membrane, and cataract surgery. Gamma Knife enabled effective treatment of uveal melanoma. Robotics used in ophthalmology were: Da Vinci Surgical System, Intraocular Robotic Interventional Surgical System (IRISS), Johns Hopkins Steady-Hand Eye Robot and smart instruments, and Preceyes' B.V. Cybersurgery is an alternative to overcome distance and the shortage of surgeons. However, cost, availability, legislation, and ethics are factors limiting the progression of these fields. Robotic and cybersurgery in ophthalmology are still in their niche. Cost-effective studies are needed to overcome the delay. Technologies, such as 5G and Tactile Internet, are required to help reduce resource scheduling problems in cybersurgery. In addition, prototype development and the integration of artificial intelligence applications could further enhance the safety and precision of ocular surgery. abstract_id: PUBMED:32306833 An overview of patents on therapeutic monoclonal antibodies in Europe: are they a hurdle to biosimilar market entry? As patents on many high-selling biological medicines are expiring, non-innovator versions, such as biosimilars, may enter this multi-billion dollar market. This study aims to map patents and patent applications for innovator as well as biosimilar monoclonal antibodies in Europe, and investigates legal challenges associated with patenting the innovator product and alleged infringing activities, focusing on consequences for biosimilar developers. Via an exploratory literature review in PubMed and a database analysis in Darts-ip, Derwent Innovation, and Espacenet, an overview of basic patents and exclusivity rights for some of the best-selling biologicals is given, supplemented with a detailed analysis of patents taken during the medicine's life cycle via three specific case studies (trastuzumab, bevacizumab, cetuximab). Case law was used to determine which patents were viewed by biosimilar developers as blocking market entry. For the selected monoclonal antibodies, the key protection instruments appeared to be the basic patent and the additional protection provided by a supplementary protection certificate. We observed that additional patents filed after the basic patent are hard to obtain and often insufficient in blocking market entry of biosimilars, but can in some cases be a substantial hurdle for biosimilar developers to overcome in patent litigation cases or to invent around, creating uncertainty on the launch date of a biosimilar on the market. These hurdles, however, seem to be surmountable, given that many cases were won by biosimilar developers. Also, biosimilars can be protected by filing new patents and these mainly pertain to new formulations. abstract_id: PUBMED:31630628 Back to the future of digital price display: Analyzing patents and other archives to understand contemporary market innovations. The article explores reasons for the lack of success of digital electronic shelf labels (ESLs) in US retail settings. It suggests that these reasons can be traced by referring to the triple meaning of 'digital': 'Digital' now means electronic, but the word also long encompassed numerals - a digit is a number - and body parts - digitus is the Latin word for the finger, that is, the index we use to point at things or manipulate them. The current fate of ESLs is linked to a long history that combined these three dimensions. The study unfolds along a twofold narrative. First, it reviews the recent introduction of ESLs in the United States based on the reading of papers and advertisements published in Progressive Grocer, a leading trade press magazine. Then, it goes 'back to the future' by exploring the roots of ESLs over a century. This historical study is based on the analysis of the evolution of US price tag patents (through a network study of patents citations and their evolution); the network analysis is complemented with the history of the US price tag market (through the knowledge gained from Progressive Grocer). The results show that digital price fixing depends on past and present systems and infrastructures, cost constraints and payback schemes, legal frameworks, and social projects. abstract_id: PUBMED:20590431 Patents in genomics and human genetics. Genomics and human genetics are scientifically fundamental and commercially valuable. These fields grew to prominence in an era of growth in government and nonprofit research funding, and of even greater growth of privately funded research and development in biotechnology and pharmaceuticals. Patents on DNA technologies are a central feature of this story, illustrating how patent law adapts-and sometimes fails to adapt-to emerging genomic technologies. In instrumentation and for therapeutic proteins, patents have largely played their traditional role of inducing investment in engineering and product development, including expensive post-discovery clinical research to prove safety and efficacy. Patents on methods and DNA sequences relevant to clinical genetic testing show less evidence of benefits and more evidence of problems and impediments, largely attributable to university exclusive licensing practices. Whole-genome sequencing will confront uncertainty about infringing granted patents, but jurisprudence trends away from upholding the broadest and potentially most troublesome patent claims. abstract_id: PUBMED:30851983 Market and Patent Analyses of Wearables in Medicine. Wearable medical devices (WMDs) will advance point-of-care diagnostics and therapeutics. This article analyses the market and patents for wearable devices. Activity monitors have the largest market share, and the intellectual property landscape is dominated by electronics corporations. However, the majority of these patents have not been realized in commercial products. abstract_id: PUBMED:37681129 Motivators and impediments to mushroom commercialisation in Ghana: Implication for policymakers, development partners and extension agents. Waste (organic waste) management is a major challenge for urban and peri-urban areas all over the world. This is because landfilling raises greenhouse gas emissions, whereas indiscriminate refuse dumping wastes valuable nutrients and energy that are still present in the refuse. Fungi, such as mushrooms, have the unique ability to decompose and transform obstinate organic substances into protein. The government, private sector, and foreign development partners in Ghana's Greater Accra region have invested in mushroom (oyster) production to boost commercialization for economic development. However, commercialization remains low, and significant income gains from mushroom commercialization have yet to be realised. Psychosocial factors are imperative for agricultural commercialization because they create a thriving environment and incentive for agricultural commercialization in general, and mushroom commercialization in specific. To determine the motivations and impediments to mushroom commercialization, we used a questionnaire to survey 153 oyster mushroom producers censused from the Ga East and the Adentan municipalities in the Greater Accra region. The data was analyzed using Principal Component Analysis (PCA), means and standard deviations. Our results showed three [3] motivators: social awareness, economic value, and market incentives, and three [3] impediments: institutional and personal weakness, market uncertainty, and value chain challenges. The major motivators to mushroom commercialization are social awareness (M = 3.94) and economic value (M = 3.93) whereas the major impediments comprised market uncertainty (M = 3.59) and institutional and personal weaknesses (M = 3.29). We recommend that policy effort to create enabling institutional environment and social awareness of mushroom coupled with development partners' and extension agents' effort to train farmers on mushroom production management are key to enhancing commercial mushroom production in Ghana. NB: M=Mean. abstract_id: PUBMED:17915832 Intellectual property policy in the pharmaceutical sciences: the effect of inappropriate patents and market exclusivity extensions on the health care system. Though patents are effective tools for promoting innovation and protecting intellectual property in the pharmaceutical sciences, there has been growing concern about 2 important ways that patents in this field can have a negative effect on patient care and the practice of medicine. First, inventors can seek and receive patents on pharmaceutical products or research tools that stretch the statutory requirements for patenting. Second, patent holders in the pharmaceutical market can use legal loopholes or aspects of the patent registration system to extend exclusivity for inventions beyond what was anticipated by the Patent Act or subsequent legislation. The monopoly control bestowed by such inappropriate patents or manipulation of the patent system can limit options available to patients, increase the cost of health care delivery, and make cooperative research more difficult. In response, several different government and market-based efforts have emerged to promote more equitable patent policy in health care that encourages dissemination of ideas while still supporting the development of innovative products. abstract_id: PUBMED:29298119 Secondary patents in the pharmaceutical industry: missing the wood for the trees? Introduction: The critics of the Innovator pharmaceutical industry allege that secondary patents are trivial modifications over the primary patent, which extend its term and delay the entry of the generics in the market place. The protagonists regard secondary patents a result of continuous research and development (R&D), which help them introduce and protect new, differentiated products. Areas Covered: The areas covered are Product life cycle management (PLCM), Drug approval process, Orange book (OB) listed patents, US patent data. Expert Opinion: Our analysis of the patents and products of four innovators viz., AstraZeneca, Takeda, Eisai and Wyeth in the field of proton pump inhibitors (PPI's) and Merck and Pfizer in the field of Statins shows that secondary patents help innovators sustain competition against other innovators in the specific product segment. The number of secondary patents listed in OB per NCE depends on the innovators interest in exploiting the NCE, the success of R & D effort and product lifecycle management strategy in the wake of market competition. Market entry decisions of innovators are strategic rather than a mere fallout of the secondary patents granted. Entry of another innovator is more unpredictable and hurts the first entrant more vis a vis the entry of generics who can enter the market when the patents protecting a product are no more enforceable, and hence more predictable. Generic entry in the field of PPI's shows that the term of the primary patent is not extended by the secondary patents. Answer: Based on the review of the Brookhill-Wilk patent and relevant law, it is concluded that the Brookhill-Wilk patent, which covers the performance of remote robotic surgery, could potentially inhibit the growth of cybersurgery by allowing the patent holder to demand sums of money from anyone who practices cybersurgery. However, recent changes in US patent law, particularly two recent US Supreme Court cases, appear to have undermined the validity of a number of medical process patents, including the Brookhill-Wilk patent. Therefore, it is not expected that the Brookhill-Wilk patent will hinder the growth of cybersurgery (PUBMED:18275034).
Instruction: Node-positive left-sided breast cancer: does VMAT improve treatment plan quality with respect to IMRT? Abstracts: abstract_id: PUBMED:23525513 Node-positive left-sided breast cancer: does VMAT improve treatment plan quality with respect to IMRT? Purpose: The aim of the present work was to explore plan quality and dosimetric accuracy of intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) for lymph node-positive left-sided breast cancer. Methods: VMAT and IMRT plans were generated with the Pinnacle(3) V9.0 treatment planning system for 10 lymph node-positive left-sided breast cancer patients. VMAT plans were created using a single arc and IMRT was performed with 4 beams using 6, 10, and 15 MV photon energy, respectively. Plans were evaluated both manually and automatically using ArtiView™. Dosimetric plan verification was performed with a 2D ionization chamber array placed in a full scatter phantom. Results: Photon energy had no significant influence on plan quality for both VMAT and IMRT. Large variability in low doses to the heart was found due to patient anatomy (range V(5 Gy) 26.5-95 %). Slightly more normal tissue dose was found for VMAT (e.g., V(Tissue30%) = 22 %) than in IMRT (V(Tissue30%) = 18 %). The manual and ArtiView™ plan evaluation coincided very accurately for most dose metrics (difference < 1 %). In VMAT, 96.7 % of detector points passed the 3 %/3 mm gamma criterion; marginally better accuracy was found in IMRT (98.3 %). Conclusion: VMAT for node-positive left-sided breast cancer retains target homogeneity and coverage when compared to IMRT and allows maximum doses to organs at risk to be reduced. ArtiView™ enables fast and accurate plan evaluation. abstract_id: PUBMED:34367986 Auxiliary Structures-Assisted Radiotherapy Improvement for Advanced Left Breast Cancer. Background: To improve the quality of plan for the radiation treatment of advanced left breast cancer by introducing the auxiliary structures (ASs) which are used to spare the regions with no intact delineated structures adjacent to the target volume. Methods: CT data from 20 patients with left-sided advanced breast cancer were selected. An AS designated as A1 was created to spare the regions of the aorta, pulmonary artery, superior vena ava, and contralateral tissue of the upper chest and neck, and another, designated as A2, was created in the regions of the cardia and fundus of the stomach, left liver lobe, and splenic flexure of the colon. IMRT and VMAT plans were created for cases with and without the use of the AS dose constraints in plan optimization. Dosimetric parameters of the target and organs at risk (OARs) were compared between the separated groups. Results: With the use of AS dose constraints, both the IMRT and VMAT plans were clinically acceptable and deliverable, even showing a slight improvement in dose distribution of both the target and OARs compared with the AS-unused plans. The ASs significantly realized the dose sparing for the regions and brought a better conformity index (p < 0.05) and homogeneity index (p < 0.05) in VMAT plans. In addition, the volume receiving at least 20 Gy (V20) for the heart (p < 0.05), V40 for the left lung (p < 0.05), and V40 for the axillary-lateral thoracic vessel juncture region (p < 0.05) were all lower in VMAT plans. Conclusion: The use of the defined AS dose constraints in plan optimization was effective in sparing the indicated regions, improving the target dose distribution, and sparing OARs for advanced left breast cancer radiotherapy, especially those that utilize VMAT plans. abstract_id: PUBMED:36196407 Integrated scoring approach to assess radiotherapy plan quality for breast cancer treatment. Background: Proposal of an integrated scoring approach assessing the quality of different treatment techniques in a radiotherapy planning comparison. This scoring method incorporates all dosimetric indices of planning target volumes (PTVs) as well as organs at risk (OARs) and provides a single quantitative measure to select an ideal plan. Materials And Methods: The radiotherapy planning techniques compared were field-in-field (FinF), intensity modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT), hybrid IMRT (H-IMRT), and hybrid VMAT (H-VMAT). These plans were generated for twenty-five locally advanced left-sided breast cancer patients. The PTVs were prescribed a hypofractionation dose of 40.5 Gy in 15 fractions. The integrated score for each planning technique was calculated using the proposed formula. Results: An integrated score value that is close to zero indicates a superior plan. The integrated score that incorporates all dosimetric indices (PTVs and OARs) were 1.37, 1.64, 1.72, 1.18, and 1.24 for FinF, IMRT, VMAT, H-IMRT, and H-VMAT plans, respectively. Conclusion: The proposed integrated scoring approach is scientific to select a better plan and flexible to incorporate the patient-specific clinical demands. This simple tool is useful to quantify the treatment techniques and able to differentiate the acceptable and unacceptable plans. abstract_id: PUBMED:35494897 A Dosimetric Study Comparing 3D-CRT vs. IMRT vs. VMAT in Left-Sided Breast Cancer Patients After Mastectomy at a Tertiary Care Centre in Eastern India. Introduction Post-mastectomy radiation in left-sided breast cancer in women continues to pose a significant risk to the underlying lungs and heart. This study analyzed the difference in planning target volume (PTV) coverage and dose to the organs at risk (OAR) by using three different planning methods for the same patient - three-dimensional conformal radiotherapy (3D-CRT), intensity-modulated radiotherapy (IMRT), and volumetric-modulated arc therapy (VMAT). Material and methods Thirty-five left-sided breast cancer patients' post-mastectomy were included in this study, and three different plans for adjuvant radiation were created using 3D-CRT, IMRT, and VMAT. The prescribed dose was 50Gy in 25 fractions. Kruskal-Wallis analysis of variance (ANOVA) was done, followed by a pairwise t-test to establish a hierarchy of plan quality and dosimetric benefits. The plans were compared with PTV95, homogeneity index (HI), conformity index (CI), hotspot (V107%), left lung V20Gy, mean lung dose, heart V25Gy, mean heart dose, and integral dose (ID) to the body. Results Both VMAT and IMRT led to improved PTV95% coverage (95.63±1.82%, p=0.000 in VMAT; 93.70±2.16 %, p=0.000; 81.40±6.27% in 3D-CRT arm) and improved CI (0.91±0.06 in IMRT [p<0.05] and 0.96±0.02 for VMAT plans [p<0.05]) as compared to 3D-CRT (0.66±0.11), which was statistically significant on pairwise analysis. In contrast, the difference in HI and reduction in hotspots were not significantly different. Left lung V20 was statistically very different between the three arms with the highest values in IMRT (36.64±4.45) followed by 3D-CRT (34.80±2.24) and the most negligible value in VMAT (33.03±4.20). Mean lung dose was also statistically different between the three arms. There was a statistically significant difference in mean heart dose between the three arms on pairwise analysis. Both the inverse planning methods led to a statistically significant increase in low dose volume (V5 and V10) of the ipsilateral lung, opposite lung, and heart, and increased ID to the body excluding the PTV. Conclusion While both the inverse planning modalities led to increased coverage, better CI, and better HI and decreased high dose volumes in OARs, there was increased low volume irradiation of heart, lungs, and body with VMAT faring marginally better than IMRT in coverage and decreasing lung irradiation with comparable heart irradiation. abstract_id: PUBMED:27838084 Evaluation of 3D-CRT, IMRT and VMAT radiotherapy plans for left breast cancer based on clinical dosimetric study. Objective: This paper aims to compare dosimetric differences based on three types of radiotherapy plans for postoperative left breast cancer. In particular, based on a clinical dosimetric study, the three-dimensional conformal radiotherapy (3D-CRT), intensity- modulated radiation therapy (IMRT) and VMAT plans were implemented on 15 cases of postoperative patients with left breast cancer with prescription doses of 5000cGy. Methods And Results: Dose volume histogram (DVH) was used to analyze each evaluation index of clinical target volume (CTV) and organs at risk (OARs). Except for homogeneous index (HI), D2, each CTV evaluation index of 3D-CRT plan was inferior to IMRT and VMAT plans (P<0.05). Compared with the VMAT plans, IMRT has a statistical significance only in Dmean, V95 (P<0.05). On the contrary, Dmean pertaining to the VMAT plan is much closer to the prescription dose with a V95 coverage rate as high as 97.44%. For the infected lung, V5, V10 of 3D-CRT were the lowest (P<0.05), while V20, V30 were the highest (P<0.05) among the three types of plans. Here, the V5, V10 of infected lung were slightly higher (P<0.05) for the VMAT and IMRT plans. Each evaluation index of the contralateral lung and heart in 3D-CRT was the lowest (P<0.05). D1 of contralateral breast was lower in both IMRT and VMAT plans, which were 1770.89±121.16cGy and 1839.92±92.77cGy, respectively. While D1 of the spinal cord in IMRT and VMAT plans was higher, which were 1990.12±61.52cGy and 1927.38±43.67cGy, respectively. When the radiation dose of 500-1500cGy was delivered to the normal tissues, 3D-CRT significantly shows the lowest volume, VMAT is relatively higher. Monitor Units (MU) and treatment time (T) of VMAT were the least, only 49.33% and 55.86% of those of IMRT. Conclusion: The three types of plans can meet the clinical dosimetry demands of postoperative radiotherapy for left breast cancer. The target of IMRT and VMAT plans has a better conformity, and the VMAT plan takes the advantages of less MU and treatment time. abstract_id: PUBMED:24376964 Comparison of dose distributions and organs at risk (OAR) doses in conventional tangential technique (CTT) and IMRT plans with different numbers of beam in left-sided breast cancer. Aim: Our aim was to improve dose distribution to the left breast and to determine the dose received by the ipsilateral lung, heart, contralateral lung and contralateral breast during primary left-sided breast irradiation by using intensity modulated radiotherapy (IMRT) techniques compared to conventional tangential techniques (CTT). At the same time, different beams of IMRT plans were compared to each other in respect to CI, HI and organs at risk (OAR) dose. Background: Conventional early breast cancer treatment consists of lumpectomy followed by whole breast radiation therapy. CTT is a traditional method used for whole breast radiotherapy and includes standard wedged tangents (two opposed wedged tangential photon beams). The IMRT technique has been widely used for many treatment sites, allowing both improved sparing of normal tissues and more conformal dose distributions. IMRT is a new technique for whole breast radiotherapy. IMRT is used to improve conformity and homogeneity and used to reduce OAR doses. Materials And Methods: Thirty patients with left-sided breast carcinoma were treated between 2005 and 2008 using 6, 18 or mixed 6/18 MV photons for primary breast irradiation following breast conserving surgery (BCS). The clinical target volume [CTV] was contoured as a target volume and the contralateral breast, ipsilateral lung, contralateral lung and heart tissues as organs at risk (OAR). IMRT with seven beams (IMRT7), nine beams (IMRT9) and 11 beams (IMRT11) plans were developed and compared with CTT and among each other. The conformity index (CI), homogeneity index (HI), and doses to OAR were compared to each other. Results: ALL OF IMRT PLANS SIGNIFICANTLY IMPROVED CI (CTT: 0.76; IMRT7: 0.84; IMRT9: 0.84; IMRT11: 0.85), HI (CTT: 1.16; IMRT7: 1.12; IMRT9: 1.11; IMRT11: 1.11), volume of the ipsilateral lung receiving more than 20 Gy (>V20 Gy) (CTT: 14.6; IMRT7: 9.08; IMRT9: 8.10; IMRT11: 8.60), and volume of the heart receiving more than 30 Gy (>V30 Gy) (CTT: 6.7; IMRT7: 4.04; IMRT9: 2.80; IMRT11: 2.98) compared to CTT. All IMRT plans were found to significantly decrease >V20 Gy and >V30 Gy volumes compared to conformal plans. But IMRT plans increased the volume of OAR receiving low dose radiotherapy: volume of contralateral lung receiving 5 and 10 Gy (CTT: 0.0-0.0; IMRT7: 19.0-0.7; IMRT9: 17.2-0.66; IMRT11: 18.7-0.58, respectively) and volume of contralateral breast receiving 10 Gy (CTT: 0.03; IMRT7: 0.38; IMRT9: 0.60; IMRT11: 0.68). The differences among IMRT plans with increased number of beams were not statistically significant. Conclusion: IMRT significantly improved conformity and homogeneity index for plans. Heart and lung volumes receiving high doses were decreased, but OAR receiving low doses was increased. abstract_id: PUBMED:12443804 Node-positive left-sided breast cancer patients after breast-conserving surgery: potential outcomes of radiotherapy modalities and techniques. Purpose: To determine how much proton and intensity modulated photon radiotherapy (IMRT) can improve treatment results of node-positive left-sided breast cancer compared to conventional radiation qualities (X-rays and electrons) after breast-conserving surgery in terms of lower complication risks for cardiac mortality and radiation pneumonitis. Methods And Material: For each of 11 patient studies, one proton plan, one IMRT, and two conventional (tangential and patched) plans were calculated using a three-dimensional treatment-planning system, Helax-TMS(). The evaluation of the different treatment plans was made by applying the normal tissue complication probability model (NTCP) proposed by Källman (also denoted the relative seriality model) on the dose distributions in terms of dose-volume histograms. The organs at risk are the spinal cord, the left lung, the heart, and the non-critical normal tissues (including the right breast). Results: The comparison demonstrated that the proton treatment plans provide significantly lower NTCP values for the heart and lung when compared to conventional radiation qualities including IMRT for all 11 patients. At a prescribed dose of 50 Gy in the PTV, the calculated mean NTCP value for the patients decreased, on the average, from 14.7 to 0.6% for the lung (radiation pneumonitis) for the proton plans compared with the best plan using conventional radiation qualities. The corresponding figures for the heart (cardiac mortality) were from 2.1 to 0.5%. The figures for cardiac mortality for IMRT, tangential technique and the patched technique were 2.2, 6.7, and 2.1%, respectively. Conclusions: Protons appear to have major advantages in terms of lower complication risks when compared with treatments using conventional radiation qualities for treating node-positive left-sided breast cancer after breast-conserving surgery. abstract_id: PUBMED:32178694 Automated Hypofractionated IMRT treatment planning for early-stage breast Cancer. Background: Hypofractionated whole-breast irradiation is a standard adjuvant therapy for early-stage breast cancer. This study evaluates the plan quality and efficacy of an in-house-developed automated radiotherapy treatment planning algorithm for hypofractionated whole-breast radiotherapy. Methods: A cohort of 99 node-negative left-sided breast cancer patients completed hypofractionated whole-breast irradiation with six-field IMRT for 42.56 Gy in 16 daily fractions from year 2016 to 2018 at a tertiary center were re-planned with an in-house-developed algorithm. The automated plan-generating C#-based program is developed in a Varian ESAPI research mode. The dose-volume histogram (DVH) and other dosimetric parameters of the automated and manual plans were directly compared. Results: The average time for generating an autoplan was 5 to 6 min, while the manual planning time ranged from 1 to 1.5 h. There was only a small difference in both the gantry angles and the collimator angles between the autoplans and the manual plans (ranging from 2.2 to 5.3 degrees). Autoplans and manual plans performed similarly well in hotspot volume and PTV coverage, with the autoplans performing slightly better in the ipsilateral-lung-sparing dose parameters but were inferior in contralateral-breast-sparing. The autoplan dosimetric quality did not vary with different breast sizes, but for manual plans, there was worse ipsilateral-lung-sparing (V4Gy) in larger or medium-sized breasts than in smaller breasts. Autoplans were generally superior than manual plans in CI (1.24 ± 0.06 vs. 1.30 ± 0.09, p < 0.01) and MU (1010 ± 46 vs. 1205 ± 187, p < 0.01). Conclusions: Our study presents a well-designed standardized fully automated planning algorithm for optimized whole-breast radiotherapy treatment plan generation. A large cohort of 99 patients were re-planned and retrospectively analyzed. The automated plans demonstrated similar or even better dosimetric quality and efficacy in comparison with the manual plans. Our result suggested that the autoplanning algorithm has great clinical applicability potential. abstract_id: PUBMED:35455769 Radiotherapy of Breast Cancer in Laterally Tilted Prone vs. Supine Position: What about the Internal Mammary Chain? Background: In the multimodal breast-conserving curative therapy of some high-risk breast cancer patients, extended external beam radiotherapy (EBRT) not only to the breast but also to the supraclavicular fossa and the internal mammary chain (parasternal region (PSR)) is indicated. We report a dosimetric study on the EBRT of the breast ("B") and the breast including PSR ("B + PSR"), comparing the supine and the laterally tilted prone patient positions in free breathing. Methods: The planning CT scans of 20 left- and 20 right-sided patients were analyzed. EBRT plans were calculated with 3D conformal EBRT (3D) and with intensity-modulated EBRT (IMRT) for "B" and "B + PSR" in the prone and supine positions. The mean and threshold doses were computed. The quality of EBRT plans was compared with an overall plan assessment factor (OPAF), comprising three subfactors, homogeneity, conformity, and radiogenic exposure of OAR. Results: In the EBRT of "B", prone positioning significantly reduced the exposure of the OARs "heart" and "ipsilateral lung" and "lymphatic regions". The OPAF was significantly better in the prone position, regardless of the planning technique or the treated breast side. In the EBRT of "B + PSR", supine positioning significantly reduced the OAR "heart" exposure but increased the dose to the OARs "ipsilateral lung" and "lymphatic regions". There were no significant differences for the OPAF, independent of the irradiated breast side. Only the IMRT planning technique increased the chance of a comparatively good EBRT plan. Conclusion: Free breathing prone positioning significantly improves plan quality in the EBRT of the breast but not in the EBRT of the breast + PSR. abstract_id: PUBMED:36727074 Different meaning of the mean heart dose between 3D-CRT and IMRT for breast cancer radiotherapy. Background: Previous studies in 2D and in 3D conformal radiotherapy concludes that the maximal heart distance and the mean heart dose (MHD) are considered predictive of late cardiac toxicities. As the use of inverse-planned intensity modulated radiation therapy (IMRT) is increasing worldwide, we hypothesized that this 3D MHD might not be representative of heart exposure after IMRT for breast cancer (BC). Methods: Patients with left-sided BC and unfavorable cardiac anatomy received IMRT. Their treatment plan was compared to a virtual treatment plan for 3D conformal radiotherapy with similar target volume coverage (study A). Then, a second 3D conformal treatment plan was generated to achieve equivalent individual MHD obtained by IMRT. Then the heart and left anterior descending (LAD) coronary artery exposures were analyzed (study B). Last, the relationship between MHD and the heart volume or LAD coronary artery volume receiving at least 30Gy, 40Gy and 45Gy in function of each additional 1Gy to the MHD was assessed (study C). Results: A significant decrease of heart and LAD coronary artery exposure to high dose was observed with the IMRT compared with the 3D conformal radiotherapy plans that both ensured adequate target coverage (study A). The results of study B and C showed that 3D MHD was not representative of similar heart substructure exposure with IMRT, especially in the case of high dose exposure. Conclusions: The mean heart dose is not a representative dosimetric parameter to assess heart exposure following IMRT. Equivalent MHD values following IMRT and 3DRT BC treatment do not represent the same dose distribution leading to extreme caution when using this parameter for IMRT plan validation. Answer: The comparison between volumetric modulated arc therapy (VMAT) and intensity-modulated radiotherapy (IMRT) for node-positive left-sided breast cancer suggests that VMAT may improve treatment plan quality in certain aspects. According to the studies, VMAT retains target homogeneity and coverage when compared to IMRT and allows maximum doses to organs at risk to be reduced (PUBMED:23525513). Additionally, the use of auxiliary structures (ASs) in plan optimization with VMAT has been shown to significantly improve the dose distribution of both the target and organs at risk (OARs), leading to better conformity and homogeneity indices, as well as lower doses to the heart and left lung (PUBMED:34367986). An integrated scoring approach that assesses the quality of different treatment techniques in radiotherapy planning comparisons has indicated that VMAT plans may have a slightly higher score compared to IMRT, suggesting a superior plan quality (PUBMED:36196407). Furthermore, a dosimetric study comparing 3D-CRT, IMRT, and VMAT in left-sided breast cancer patients post-mastectomy found that VMAT led to improved PTV coverage and conformity index compared to IMRT, with marginally better performance in decreasing lung irradiation and comparable heart irradiation (PUBMED:35494897). Another study evaluating 3D-CRT, IMRT, and VMAT plans for postoperative left breast cancer based on a clinical dosimetric study concluded that VMAT plans have better conformity for the target and require less monitor units and treatment time compared to IMRT (PUBMED:27838084). However, it is important to note that while VMAT can improve conformity and homogeneity and reduce high-dose volumes to OARs, it may also increase the volume of OARs receiving low-dose radiotherapy (PUBMED:24376964). In summary, the evidence suggests that VMAT can improve treatment plan quality for node-positive left-sided breast cancer compared to IMRT in terms of target coverage, homogeneity, conformity, and sparing of OARs. However, the choice between VMAT and IMRT should be made based on individual patient anatomy, clinical objectives, and the specific dosimetric goals of the treatment plan.
Instruction: Nephrotic syndrome in African children: lack of evidence for 'tropical nephrotic syndrome'? Abstracts: abstract_id: PUBMED:16326742 Nephrotic syndrome in African children: lack of evidence for 'tropical nephrotic syndrome'? Background: Infections such as malaria, schistosomiasis, hepatitis B and HIV have been suggested as major causes of the nephrotic syndrome (NS) in African children. We retrospectively analysed the course of the NS in 32 children from Ghana and reviewed the literature on NS from 18 different African countries for the presence of 'the tropical nephrotic syndrome'. Methods: Thirty-two children (22 boys, 10 girls, median age 12 years, range 1-18 years) with NS were treated from 2000-2003 at Battor Hospital, Ghana. Thirteen out of 32 children underwent a renal biopsy which was investigated by light, immune and electron microscopy. All 32 patients were initially treated with oral prednisone (PRED) therapy (29 with standard therapy for 8 weeks and three individually tailored), and steroid-resistant children received also intravenous methylprednisolone pulses (three children) or oral cyclophosphamide (two children). Results: All patients fulfilled the clinical and laboratory criteria of a NS. The initial median serum creatinine was 65 micromol/l (range 44-133 micromol/l). Renal biopsy was performed in 13/32 children and revealed focal and segmental glomerulosclerosis (FSGS) in 10 patients, minimal change disease (MCNs) in two and no conclusive result in one patient. Glomerular immune complex deposition was absent in all biopsies. After treatment with PRED, oedema disappeared in 24/32 patients; however, proteinuria normalized in 16/32 patients only. The NS relapsed in 9/16 steroid-sensitive patients after cessation of PRED therapy, and two children were frequent relapsers. The steroid-resistant NS did not respond to an intensified immunosuppression in 5/16 children receiving methylprednisolone or cyclophosphamide. Five out of 32 children died, all were steroid resistant. Conclusions: There was no evidence for a dominating role of steroid-resistant 'tropical glomerulopathies' in children with a NS in Ghana. Similar to South Africa, focal and segmental glomerulosclerosis (FSGS) and minimal change disease were the most frequent findings on histology. Contrary to Nigeria, membrano-proliferative glomerulonephritis was not found in these patients. We conclude from this data and from the literature that the histological pattern of NS may vary between different African countries. Concerning therapy of NS under tropical conditions, we emphasize that despite the limited therapeutic facilities half of these patients may benefit from corticosteroids; however, steroid resistance and FSGS resulted in a high mortality. abstract_id: PUBMED:31187763 Steroid response in primary childhood nephrotic syndrome in a tropical african environment. Background: Earlier studies on childhood nephrotic syndrome (NS) in tropical Africa showed steroid resistance in the majority. More recent studies show a variable picture, necessitating a re-evaluation. This study was aimed at determining the current pattern of steroid response in childhood NS, in an environment known to be dominated by steroid resistance. Patients And Methods: This prospective study of consecutive children who received steroid therapy for primary NS was carried out at the University College Hospital, Ibadan, Nigeria between 2006 and 2013. The outcomes of interest were steroid sensitivity and death. The recruited patients received a 4-6 weeks' course of prednisolone at 60 mg/m2/day followed by alternate day doses of 40 mg/m2 up to total steroid therapy duration of 6 months in steroid sensitive patients. Statistical analysis was carried out using STATA version 12.0. P value <0.05 was considered significant. Results: Of 109 children that received steroids for at least 8 weeks, whose mean (SD) age was 7.9 (3.7) years, 69 (63.3%) were steroid sensitive. Those aged ≥6 years responded as well as those aged <6 years (P = 0.78). Boys were more likely to be steroid-sensitive than girls, 65.2% versus 34.8% (P = 0.039). There was zero mortality among the patients studied. Conclusion: This study has shown a better steroid sensitivity of 63.3% in children with primary NS compared with the previously reported 36.8-42.9% in patients with highly selective proteinuria. This improved steroid response and zero mortality show a remarkable departure from the past. abstract_id: PUBMED:11863085 Focal segmental glomerulosclerosis in African Americans. Focal segmental glomerulosclerosis (FSGS), the leading glomerular cause of the nephrotic syndrome among African Americans, is typically associated with edema, proteinuria, hypertension, microscopic hematuria, and renal insufficiency. Recent studies suggest that either the incidence of FSGS has increased or an increased number of biopsies of African American patients have made the diagnosis more common. The collapsing variant of FSGS, which occurs more commonly in African Americans than in whites, carries an especially poor prognosis with respect to renal survival. Although the pathogenesis of FSGS is not well understood, the fact that it frequently recurs early after transplantation has led to speculation that patients with FSGS may have a circulating factor that leads to increased glomerular permeability. There are no randomized control trials of treatment regimens for FSGS. Steroids, alkylating agents, and cyclosporin have all been used with variable results to treat FSGS. abstract_id: PUBMED:795103 Aspects of tropical paediatrics. Malnutrition interacting with infectious and parasitic diseases are the main causes of the appalling mortality in childhood in the tropics. The most important single safeguard against these in infancy is breast feeding and the trend now evident to abandon this is a disaster which demands urgent attention. Reasons for this trend are discussed. Efforts to control infectious diseases, other than smallpox, have had little success and the emergence and spread of dengue haemorrhagic fever in S.E. Asia have added new dimensions to the problem. Malaria is still widely prevalent in the tropics and falciparum malaria, holoendemic in much of Africa, remains a major cause of death with its most serious impact on pregnant women and children. The emergence and spread of drug resistant strains of this parasite in parts of the world is a cause for serious concern. Quartan malaria is also an insidious corruptor of health in childhood and commonly causes the nephrotic syndrome. Neonatal jaundice, often associated with G6PD deficiency, is increasing in frequency in urban areas of Africa and now constitutes a significant hazard to the newborn and requires urgent investigation. These problems in tropical paediatrics indicate the need for urgent reappraisal of our role as a profession in the affairs of the tropical developing world. abstract_id: PUBMED:6672231 Absence of 'true' minimal change nephrotic syndrome in African children in South Africa. Minimal change nephrotic syndrome (MCNS) which is a well defined clinical, histopathological and therapeutic entity accounts for the majority of childhood nephrosis throughout the world except Africa. We describe 15 (13%) of 115 biopsy-confirmed African children with nephrotic syndrome who had minimal change on light microscopy. These are compared to 42 biopsied Indian children with typical MCNS. The critical differences between African children and children with classical MCNS were the lack of a predictable response in eight of the former to steroids and cyclophosphamide and in a later peak age of presentation (7-8 years). These patients differed in most respects from minimal change seen in the malarial zones of North Africa except for clinical presentation. Minimal change in West and East Africa more closely approximates typical MCNS. Two of the African patients with minimal change had endemic syphilis and responded clinically to penicillin therapy. The clinical presentation in all the African children, the favourable short-term outcome in 12 of these patients, immunofluorescent (13 patients) and electron microscopic (three patients) findings, and selectivity of proteinuria (eight patients) paralleled that of classical MCNS. The implications of this study are that African children in South Africa with MCNS should not be treated with steroids or cyclophosphamide. There is no clear benefit in the use of these drugs in nephrotic children with obvious glomerular lesions and therefore it follows that such therapy should be avoided in all African children in South Africa with this disease. abstract_id: PUBMED:16708519 Sudden deterioration in the renal function of an African child with cyanotic congenital heart disease. Nephrotic syndrome (NS) and deteriorating renal function are associated with congenital cyanotic heart disease (CCHD). We describe a nine-year-old African male child with CCHD, NS and deteriorating renal function. He presented with two weeks' history of progressive generalized body swelling, oliguria and orthopnea. Oliguria did not improve, and the generalized body swelling did not subside with the administration of diuretics. Dipstick urinalysis remained 3+. He was referred to a facility that offers renal replacement therapy in view of worsening renal status. This is the first description known to us of deteriorating renal function in an African and in a child with CCHD, implying that the complication can develop in children or commences in childhood. abstract_id: PUBMED:482593 Clinico-pathological features of the nephrotic syndrome in South African children. One hundred and thirty children of whom 74 were Africans and 56 Indians with contrasting clinicopathological patterns of the nephrotic syndrome are described. Eighty-six per cent of African children had obvious structural glomerular lesions which were associated with unresponsiveness to steroids while 75 per cent of Indians had minimal change nephrotic syndrome which was steroid responsive. The treatment history of a further 41 unbiopsied children with nephrotic syndrome (nine Africans, 32 Indians) support and emphasize this difference. Extramembranous and a tropical variety of extramembranous (36.5 per cent) together with proliferative (20.2 per cent) lesions accounted for most of the histological types in African children. The disease in Indian children was similar to that in other countries in age of onset, sex distribution, frequency of histological types and steroid responsiveness although there was a preponderance of frequent relapsers (69 per cent). Africans differed from children in other continents in the frequency of histological categories (therefore in steroid responsiveness) and occasionally in clinical behaviour. They also differed from children in tropical Africa in a lower incidence of the disease, male dominance and absence of malarial nephropathy. The aetiology of nephrotic syndrome in nearly all the children remains unidentified. abstract_id: PUBMED:8953119 Focal segmental glomerulosclerosis in adult African Americans. We have previously shown that idiopathic focal segmental glomerulosclerosis (FSGS) is the most common non-proliferative primary glomerulopathy in adult African Americans. In this report we present our experience with treated FSGS in 15 such patients followed over five years. They were all treated with prednisone 60 mg daily for three months, followed by a slow tapering. In addition, two patients later had cyclophosphamide, and five had enalapril. At entry hypertension was present in 73% of the patients, nephrotic syndrome in 87%, and elevated serum creatinine (> or = 1.4 mg/dl) in 40%. Five of the 15 patients (33%) developed end-stage renal failure (ESRF), one of them having a "malignant" course after the advent of pregnancy. Two patients (13%) have chronic renal insufficiency (CRI; serum creatinine > 2.5 mg/dl); three (20%) have mild renal insufficiency (serum creatinine 1.4-2.5 mg/dl), and five patients (33%) have normal renal function. The cumulative renal survival was 93% at five years, but only 26% at eight years. At last follow-up all the ten patients who did not develop ESRF were in partial remission (urinary protein of 1.3 g/day +/- 1.21), but 4 of the 5 patients who did not develop ESRF had no prolonged partial remission of nephrotic syndrome. Neither the initial clinical parameters not the use of enalapril correlated with the renal outcome (univariate analysis). However, 4 of the 5 patients who developed ESRF had elevated serum creatinine at entry, versus only 2 of the 10 not developing ESRF (p = 0.09 by two-sided, and 0.045 by one-sided Fisher's exact test). We conclude that the short-term renal outcome in nephrotic adult African Americans with treated FSGS is comparable to that of the non-African Americans, but their long-term prognosis may be poorer. Patients developing ESRF were more likely to present with elevated serum creatinine. Enalapril did not seem to modify the course of renal disease, but its utility and that of other ACE inhibitors in the treatment of FSGS must await prospective randomized studies. abstract_id: PUBMED:14529227 Clinical diabetic nephropathy in a tropical African population. Diabetic nephropathy is the single most important disorder leading to renal failure in adults in the Western countries and it is among the first three major causes of end stage renal disease in Nigeria. The aim of this study is to show the features of clinical diabetic nephropathy in the Olabisi Onabanjo University Teaching Hospital, Ogun State, Nigeria. The study group consists of 342 consecutive diabetic patients with persistent proteinuria (positive albustix) and diabetic retinopathy, seen from January 2000 to June 2001 in the Ogun State University Teaching Hospital, Sagamu. Clinical and laboratory parameters were recorded. Students' t-test and Spearman correlation coefficient were used in analysis. The frequency of occurrence of clinical diabetic nephropathy is 28.4% with majority already symptomatic despite normal biochemistry. Mean ages of type 1 and type 2 are 26+/-7.9 years and 53.4+/-6.3 years respectively with a sex ratio of 1.2:1. Mean duration of disease is 6.5+/-3.6 years and 9.4+/-4.1 years respectively. Two hundred and seventy-one (79.2%) patients were hypertensive. Nephrotic syndrome is not a common presentation. Diabetic nephropathy is a significant problem in this environment. abstract_id: PUBMED:15954915 Identification of podocin (NPHS2) gene mutations in African Americans with nondiabetic end-stage renal disease. Background: Podocin, encoded by NPHS2 and mapped to 1q25.2, is an integral membrane protein exclusively expressed in glomerular podocytes. Mutations in the NPHS2 gene cause autosomal-recessive nephrotic syndrome and have been associated with proteinuria in several populations. Evidence for linkage of end-stage renal disease (ESRD) to chromosome 1q25-31 in the region of NPHS2 has been identified in a genome-wide scan in African American (AA) siblings. Methods: To investigate the potential role of this gene in ESRD, we sequenced all coding regions and approximately 2 kb of upstream promoter sequence of NPHS2 in 96 unrelated AA nondiabetic ESRD cases and 96 healthy population-based AA controls, and assessed several single nucleotide polymorphisms (SNPs) for association in a larger case-control sample. Results: Fifty-five variants were identified with minor allele frequencies ranging from <1% to 44%. Twenty-three polymorphisms were located in the promoter region, 11 were exonic, 13 were intronic, and 8 were in the 5' and 3'- untranslated regions. Two novel nonsynonymous coding SNPs were identified (A44E and A61V). An insertion polymorphism in intron 3, IVS3+9insA, was detected in 6 ESRD patients and in no controls. This variant, and 4 other common SNPs, were evaluated in a larger sample of 288 AA ESRD cases and 278 AA controls. The overall minor allele frequencies for the insertion allele were 0.018 in cases and 0.002 in controls. Significant evidence of association of IVS3+9insA was observed (P= 0.012), and the haplotype containing the insertion allele in cases was also associated. Conclusion: These results suggest that uncommon variants of the NPHS2 gene may play a role in the development of nondiabetic ESRD in AAs. Answer: The concept of 'tropical nephrotic syndrome' in African children has been questioned by several studies. A retrospective analysis of 32 children with nephrotic syndrome (NS) in Ghana found no evidence for a dominating role of steroid-resistant 'tropical glomerulopathies'. Instead, the most frequent histological findings were focal and segmental glomerulosclerosis (FSGS) and minimal change disease (MCD), similar to patterns seen in South Africa. This study also noted a high mortality rate associated with steroid resistance and FSGS (PUBMED:16326742). In contrast, a study from Nigeria reported improved steroid sensitivity of 63.3% in children with primary NS, which is a significant increase from previously reported rates of 36.8-42.9% in patients with highly selective proteinuria. This study also reported zero mortality among the patients studied, marking a departure from past trends (PUBMED:31187763). Historically, the nephrotic syndrome in African children has been associated with infections such as malaria, which can cause quartan malaria nephropathy, a form of NS. However, the absence of 'true' minimal change nephrotic syndrome (MCNS) has been reported in South African children, where the response to steroids and cyclophosphamide was unpredictable, and the peak age of presentation was later compared to classical MCNS (PUBMED:6672231). Clinico-pathological features of NS in South African children have also shown contrasting patterns between African and Indian children, with a higher percentage of African children having structural glomerular lesions associated with steroid unresponsiveness (PUBMED:482593). In summary, the evidence suggests that the histological pattern of NS may vary between different African countries, and the term 'tropical nephrotic syndrome' may not accurately describe the condition in African children. Improved steroid sensitivity in some regions indicates a shift from previous perceptions of the disease, although steroid resistance and associated mortality remain significant concerns in certain areas (PUBMED:16326742, PUBMED:31187763).
Instruction: CD10-positive mantle cell lymphoma: biologically distinct entity or an aberrant immunophenotype? Abstracts: abstract_id: PUBMED:37373354 Unveiling the Prognostic Significance of BCL6+/CD10+ Mantle Cell Lymphoma: Meta-Analysis of Individual Patients and Systematic Review. Mantle cell lymphoma (MCL) is a type of non-Hodgkin lymphoma (NHL) characterized by a hallmark translocation of t (11; 14). CD10 negativity has been used to differentiate MCL from other NHL types; however, recently, there has been an increase in the number of reported cases of CD10-positive MCL. This warrants further investigation into this rarer immunophenotype and its clinical significance. BCL6, which is a master transcription factor for the regulation of cell proliferation and key oncogene in B cell lymphomagenesis, has been reported to have co-expression with CD10 in MCL. The clinical significance of this aberrant antigen expression remains unknown. We conducted a systematic review by searching four databases and selected five retrospective analyses and five case series. Two survival analyses were conducted to determine if BCL6 positivity conferred a survival difference: 1. BCL6+ vs. BCL6- MCL. 2. BCL6+/CD10+ vs. BCL6-/CD10+ MCL. Correlation analysis was conducted to determine if BCL6 positivity correlated with the Ki67 proliferation index (PI). Overall survival (OS) rates were performed by the Kaplan-Meier method and log-rank test. Our analyses revealed that BCL6+ MCL had significantly shorter overall survival (median OS: 14 months vs. 43 months; p = 0.01), BCL6+/CD10+ MCL had an inferior outcome vs. BCL6+/CD10- MCL (median OS: 20 months vs. 55 months p = 0.1828), BCL6+ MCL had significantly higher percentages of Ki67% (Ki67% difference: 24.29; p = 0.0094), and BCL6 positivity had a positive correlation with CD10+ status with an odds ratio 5.11 (2.49, 10.46; p = 0.0000286). Our analysis showed that BCL6 expression is correlated with CD10 positivity in MCL, and BCL6 expression demonstrated an inferior overall survival. The higher Ki67 PI in BCL6+ MCL compared to BCL6- MCL further supports the idea that the BCL6+ immunophenotype may have prognostic value in MCL. MCL management should consider incorporating prognostic scoring systems adjusted for BCL6 expression. Targeted therapies against BCL6 may offer potential therapeutic options for managing MCL with aberrant immunophenotypes. abstract_id: PUBMED:26124315 CD10-positive mantle cell lymphoma: biologically distinct entity or an aberrant immunophenotype? Insight, through gene expression profile in a unique case series. Background: Mantle cell lymphoma (MCL) is an aggressive disease with genetic heterogeneity and discrete clinical subtypes. MCL is rarely CD10 positive. These cases raise the question whether a subset of MCL may be germinal centre (GC) derived, and have distinct clinicopathological characteristics. Aims And Methods: A series of nine CD10-positive MCL cases is described herein. The clinicopathological and immunophenotypic features, immunoglobulin somatic hypermutation (SHM) status and gene expression profile (GEP) data are detailed. These features were compared with two independent sets (n=20, each) of CD10-negative MCL cases (controls), which were randomly selected from our institutional registry. Results: GEP showed distinct expression of a GC signature in CD10-positive MCL cases with minimal impact on downstream signalling pathways. There were no significant differences in the clinicopathological features or clinical outcome between our CD10-positive and CD10-negative MCL cases. The frequency of SHM was comparable with established data. Conclusions: This study provides convincing evidence that CD10 expression is related to a distinct GC signature in MCL cases, but without clinical or biological implications. abstract_id: PUBMED:38504844 Blastoid Variant of Mantle Cell Lymphoma with Extranodal Presentation and Aberrant CD10 Expression. Mantle cell lymphoma (MCL) constitutes 3%-10% of non-Hodgkin lymphoma and is characterized by t (11:14)(q13;q32). The common presentation is generalized lymphadenopathy with weight loss, infrequently night sweats, and fever. Among histological subtypes of MCL, the blastoid variant of MCL constitutes 10%-15% of all the cases. It is challenging to diagnose the blastoid variant of MCL based on its morphology alone as it mimics large B-cell lymphoma. Hence, the immunophenotyping and molecular studies aid in its correct diagnosis. We report an elderly man diagnosed with blastoid variant MCL. He presented with disseminated soft-tissue and subcutaneous nodules, and showed aberrant CD10 expression. Presentation of the extranodal site and aberrant CD10 expressions carries an overall poor prognosis. CD10-positive MCL can be mistaken for large B-cell lymphoma. abstract_id: PUBMED:18684040 A unique case of mantle cell lymphoma with an aberrant CD5-/CD10+ immunophenotype and typical morphology. Mantle cell lymphoma (MCL) is a non-Hodgkin lymphoma with a poor prognosis that may be confused with less aggressive diseases, such as small lymphocytic lymphoma and follicular lymphoma. In many cases immunophenotyping, particularly analysis of reactivity for CD5 and CD10, is an important adjunct to morphology that usually distinguishes MCL from follicular lymphoma; the former is CD5(+)/CD10(-), whereas follicular lymphoma is the reverse. We report a case of MCL, initially diagnosed as follicular lymphoma, that at presentation expressed neither CD5 nor CD10. At relapse, it was still CD5(-), but CD10 was now detected. Studies for a t(11;14) translocation and CYCLIN D1 protein expression, however, permitted a revised diagnosis of MCL. An MCL with this immunophenotype and classical morphology has not been previously reported. abstract_id: PUBMED:23628035 Immunophenotype analysis of leukemic mantle cell lymphoma Mantle cell lymphoma (MCL) is a kind of mature B-cell neoplasms with significantly poor prognosis and is usually misdiagnosed. With the development of flow cytometry and cytogenetic technique, most patients were at leukemic phase when diagnosed. This study was purposed to investigate the immunophenotypes of MCL, the immunophenotype information of 22 leukemic MCL patients was analyzed retrospectively. All the patients were conformed t(11;14) translocation by fluorescence in situ hybridization. Immunophenotypes were detected by a four-color flow cytometry including CD3, CD4, CD5, CD8, CD10, CD19, CD20, CD22, CD23, CD25, CD38, CD103, CD148, CD200, FMC7, ZAP-70, κ, λ. The results showed that CD19, CD5, CD20 and monoclonal sIg expressed in all 22 patients with CD20 high expression; CD22 expressed weakly in 17 patients; CD23 expressed in 6 patients including 2 cases highly expressed; FMC7 expressed in 12 patients. 5 patients were 4-point score and 17 patients had a score less than 4 according to CLL scoring system. CD148 and CD200 were detected in 18 patients, in which CD200 expressed negatively in 11 patients, CD200 expressed weakly in 7 patients with median fluorescence intensity (MFI) 25.8 (6.6 - 254.26); CD148 expressed positively in all 18 patients with median MFI: 337 (73.4 - 1341.9). It is concluded that the atypical immunophenotype is common in leukemia MCL, thereby the diagnosis of MCL needs comprehensively analyze with morphocytology, immunophenotype and cytogenetic, CD200 and CD148 as new bio-markers can differentiate MCL from chronic B cell lymphoproliferative disease. abstract_id: PUBMED:28109040 Composite mantle cell lymphoma and chronic lymphocytic leukemia/small lymphocytic lymphoma. Background: Composite mantle cell lymphoma (MCL) and chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL) is rare, as less than 20 cases have been reported so far. However, this entity may be under-diagnosed because the two lymphomas are very similar in morphology and immunophenotype. Previous cases were mostly diagnosed with immunohistochemistry, but flow cytometry may play an important role in the detection of two tumors in the same specimen, thus achieving an accurate diagnosis. By definition, a composite lymphoma is two demarcated lymphomas occurring at the same anatomic site. Therefore, immunohistochemistry is still needed to identify the topographic relation of these two tumors. Our reported case illustrates the pitfalls in the diagnostic process and we recommend two standard panels with new markers for an accurate diagnosis of this composite lymphoma. Method: FACSCanto II is used with antibodies, including CD5, CD10, CD19, CD20, CD22, CD23, CD43, CD79b, CD200, kappa, and lambda. Immunohistochemical stains include PAX-5/CD5 dual stain, Cyclin D1, SOX11, and LEF-1. Results: CLL/SLL is positive for CD5, CD19, CD23, CD43, and CD200, with dim expression of CD20, CD22, CD79b, and kappa. MCL is positive for CD5, CD19, CD20, CD22, CD79b, kappa, and negative for CD23, CD43, and CD200. Immunohistochemical stains show that PAX-5/CD5 stains the entire tumor population. Cyclin D1 and SOX11 only stain the central portion that represents MCL and LEF-1 stains the periphery that represents CLL/SLL. Conclusions: We recommend the use of the above panels for flow cytometry and immunohistochemistry, respectively. LEF-1 is specific for CLL/SLL; and CD200 is helpful to distinguish CLL/SLL from MCL. © 2017 International Clinical Cytometry Society. abstract_id: PUBMED:23612032 Primary cutaneous mantle cell lymphoma of the leg with blastoid morphology and aberrant immunophenotype: a diagnostic challenge. Mantle cell lymphoma rarely affects the skin and is usually a secondary involvement. The present case illustrates a primary cutaneous mantle cell lymphoma of the leg, with blastoid morphology and aberrant expression of CD10 and bcl-6, which was misinterpreted at the beginning as diffuse large B-cell lymphoma. A larger panel of immunohistochemical markers, including cyclin-D1, and molecular investigation showing the typical translocation (t11;14), pointed toward the correct diagnosis. Cutaneous diffuse B-cell lymphomas with unusual morphology should be interpreted cautiously, and the diagnosis made on the basis of an appropriate panel of antibodies and molecular studies. abstract_id: PUBMED:15272540 An unusual case of leukemic mantle cell lymphoma with a blastoid component showing loss of CD5 and aberrant expression of CD10. Characteristically, mantle cell lymphoma (MCL) expresses surface immunoglobulin (sIg), CD19, CD20, and CD5 and lacks CD10 and CD23. Rare CD5-MCL variants have been described. This report describes a case of leukemic MCL with morphologically and immunophenotypically distinct classic MCL and blastoid-variant MCL (BV-MCL) components. The classic MCL had typical morphologic features and immunophenotype (kappa sIg light chain-restricted and CD5+; CD10- and CD23-). The BV-MCL had larger nuclei and open chromatin; these cells also were kappa sIg light chain-restricted; however, they were CD10+ and CD5-. Fluorescence in situ hybridization studies demonstrated cyclin D1-immunoglobulin heavy chain gene fusion in both components; the bone marrow biopsy cellularity was replaced by CD10+ and cyclin D1+ and CD5-BV-MCL. This case illustrates the phenotypic heterogeneity of MCL and underscores the need for histopathologic correlation and, in some instances, ancillary genetic studies to accurately classify B-cell lymphomas. abstract_id: PUBMED:28628241 Aberrant expression of CD10 and BCL6 in mantle cell lymphoma. Aims: Mantle cell lymphoma (MCL) is characterized by distinctive histological and molecular features. Aberrant expression of BCL6 and CD10 has been reported occasionally, but the biological features of such cases are largely unknown. This study aimed to define the epidemiological, histological and cytogenetic characteristics of BCL6 and CD10-positive MCLs, also investigating possible biological features. Methods And Results: A total of 165 cases of cyclin D1 and t(11;14)(q13;q34)-positive MCLs were studied for CD10 and BCL6 immunohistochemical expression, which was documented in 26 of 165 (15.8%) cases (BCL6 17 of 165; CD10 11 of 165; BCL6 and CD10 co-expression two of 165). CD10-positivity was significantly more frequent in females (63.3%; P < 0.01). Either expression correlated significantly with higher mean proliferation index and higher prevalence of MUM1 positivity (P < 0.05). Fluorescence in-situ hybridization (FISH) for BCL6 (3q27) gene derangements was performed on the BCL6- and CD10-positive cases and 98 matched controls: amplifications were documented more frequently in BCL6-positive than -negative cases (50.0% versus 19.4% of cases) (P < 0.05). The mutational status of the variable immunoglobulin heavy chain genes (IGVH) was investigated by Sanger sequencing: five of the six successfully tested cases (83.3%) showed no somatic hypermutations. Conclusions: Aberrant CD10 and BCL6 expression defines a subset of MCLs with higher mean Ki-67 index and higher prevalence of MUM1 expression. BCL6 protein positivity correlates with cytogenetic aberrations involving the BCL6 gene. Although examined successfully in few cases, the high prevalence of unmutated IGVH genes also points at a pregerminal cell origin for these phenotypically aberrant cases. abstract_id: PUBMED:7670110 Cyclin D1 protein analysis in the diagnosis of mantle cell lymphoma. Mantle cell lymphoma (MCL) is a clinicopathologic entity that is difficult to diagnose on histopathologic criteria. Approximately 50% to 70% of MCL contain a t(11;14)(q13;q32) translocation involving the cyclin D1 gene. Irrespective of this rearrangement, almost all MCL show overexpression of the cyclin D1 gene at the mRNA level. Other B-cell non-Hodgkin's lymphomas (NHL) do not show this rearrangement or overexpression of cyclin D1. We developed an immunohistochemical assay to detect overexpression of the cyclin D1 protein on conventional formalin-fixed, paraffin-embedded biopsies using the well-defined monoclonal antibody DCS-6. Expression in tumor cells was compared with expression of cyclin D1 in endothelial cells and fibroblasts. An exclusively nuclear staining pattern was observed. Moreover, expression was directly compared with the expression observed by immunoblot analysis with the same antibody, as well as with mRNA expression and with the occurrence of genomic rearrangements within the BCL-1 locus. Of 13 MCL that were analyzed by immunohistochemistry and immunoblot, 12 showed overexpression with both techniques, whereas no overexpression was observed in 39 other NHL. Of 13 additional MCL studied either by immunohistochemistry or immunoblot, 11 also showed overexpression. Two lymphomas morphologically indistinguishable from MCL but with an aberrant immunophenotype (CD5 negative, CD10 positive) both lacked overexpression of cyclin D1. These results underscore the significance of overexpression of the cyclin D1 protein as a specific marker for MCL. Detection of cyclin D1 overexpression on formalin-fixed, paraffin-embedded tissues using the DCS-6 monoclonal antibody can be applied for routine diagnostic purposes. Answer: The question of whether CD10 positive mantle cell lymphoma (MCL) represents a biologically distinct entity or an aberrant immunophenotype has been explored in several studies. Some research suggests that CD10 expression in MCL may be related to a distinct germinal center (GC) signature, but without significant clinical or biological implications. For instance, a study with nine CD10-positive MCL cases showed a distinct expression of a GC signature in gene expression profile (GEP) data, but no significant differences in clinicopathological features or clinical outcome compared to CD10-negative MCL cases (PUBMED:26124315). This suggests that while CD10 expression may indicate a certain biological profile, it does not necessarily translate into a different clinical behavior or outcome. However, other studies have found that CD10 positivity in MCL can be associated with a worse prognosis. A meta-analysis of individual patients revealed that BCL6+/CD10+ MCL had an inferior outcome compared to BCL6+/CD10- MCL, and BCL6+ MCL had significantly shorter overall survival (PUBMED:37373354). This indicates that CD10 positivity, especially when co-expressed with BCL6, may have prognostic significance in MCL. Additionally, aberrant CD10 expression has been reported in various case studies and series, sometimes in association with atypical presentations or morphologies, such as the blastoid variant of MCL (PUBMED:38504844, PUBMED:23628035, PUBMED:18684040, PUBMED:23612032). These cases can pose diagnostic challenges, as CD10-positive MCL can be mistaken for other types of lymphomas, such as large B-cell lymphoma. In summary, while CD10 expression in MCL is associated with a distinct GC signature, the clinical significance of this expression varies across studies. Some evidence points to CD10 positivity as a marker of worse prognosis, especially when co-expressed with BCL6, suggesting that it may not merely be an aberrant immunophenotype but could also indicate a biologically distinct subset of MCL with prognostic relevance.
Instruction: Does the neurologist contribute to the care of patients with chronic back pain? Abstracts: abstract_id: PUBMED:12186994 Does the neurologist contribute to the care of patients with chronic back pain? Objective: To review the disposition and outcome of patients with back pain seen in a neurology clinic, and determine the value of the neurologist's input. Background: Back pain is a common cause of referral to the neurology clinic, but it is not clear how much the neurologist contributes to the care of such patients. Design/methods: We reviewed all new patients and new consultations seen for the following reasons: low back pain; 'sciatica'; 'back and leg pain', and 'spinal stenosis'. Patients were evaluated by a single attending neurologist at a general neurology clinic over a 2-year period (January 1997 through December 1998). Results: During the 2-year period, there were 97 patients seen for the above reasons. The chief complaint was low back pain alone in 73, while it was combined with another symptom in 24 (leg pain in 18, buttock or hip pain in 6). Thirty-three patients were self-referred, 56 were referred by a healthcare professional, 3 were referred by attorneys, and the referral source was unclear in 5. Median age was (range 25-90) years. Forty-eight were males. Based on the clinical data (history and examination), clinical diagnoses were radiculopathy in 28, myelopathy in 2, uncomplicated spondylosis in 27, and other non-neurologic conditions in 45. Tests performed after the visit included X-rays in 31, CT in 1, MRI in 16, and EMG in 17. 'Final' diagnoses were radiculopathy in 29 patients, myelopathy in 2 (total neurologic n = 30), uncomplicated spondylosis in 27, and other non-neurologic diagnoses in 41 (total non-neurologic n = 67). The non-neurologic diagnoses included periarticular pain, bursitis, muscle strain, soft tissue injury, and vertebral compression fractures. Final dispositions in the non-neurologic group (n = 67) were: nonsteroidal anti-inflammatories (NSAIDs) in 34 patients; analgesics or muscle relaxants in 37; physical therapy for 19, and none (including rest and observation) for 22. Four patients were referred for epidural blocks. (Many patients had more than one final disposition.) Final dispositions in the neurologic group (n = 39) were: physical therapy (n = 9); NSAIDs (n = 12); analgesics or muscle relaxants (n = 12), and epidural blocks (n = 3). Three patients were referred for surgery, but only 1 had an operation. Conclusions: Of patients with back pain and related symptoms seen in a neurology clinic, about two thirds have non-neurologic conditions. These patients are usually treated symptomatically with medications, rest, and physical therapy, all of which could be managed by primary care physicians. In the neurologic group, the vast majority is treated in the same way as the non-neurologic group. When more specific measures are needed, such as surgery or pain management procedures (e.g., epidural blocks), then the patients could be evaluated directly by the proper specialist (pain management or spine surgery) rather than the neurologist. The neurologist's input does not significantly affect the diagnosis or the management, so that the neurologist appears to have no useful role in the management of such patients. abstract_id: PUBMED:29035178 Patient Perspectives on Communication with Primary Care Physicians about Chronic Low Back Pain. Objectives: Chronic low back pain (CLBP) is a common health problem with challenges for providing satisfactory care. This study was undertaken to identify opportunities to improve key aspects of physicians' communications with CLBP-affected patients. Methods: A series of 3 focus groups, each with 7 to 11 patients with CLBP, were recruited from primary care settings and grouped by risk level of reduced function resulting from back pain, to elicit perspectives about interactions with their primary care physicians. Analysis of focus group transcripts used an iterative process based on a thematic approach and a priori concepts. Results: A total of 28 patients participated in the focus groups. Patient comments about communicating with physicians around CLBP fit into themes of listening and empathy, validating pain experiences, conducting effective CLBP assessment, providing clear diagnosis and information, and collaboratively working on treatment. Patients shared that physicians can foster positive interactions with CLBP-affected patients by sharing personal experiences of chronic pain, being truthful about not having all the answers and being clear about how patients can benefit from referrals, reviewing the patient's previous treatments before beginning conversations about treatment options, providing follow-up instructions, giving patients a diagnosis beyond "chronic pain," and explaining the role of imaging in their care. Conclusion: This study provides specific steps that physicians in the US can take to improve physician-patient interactions during primary care visits pertaining to CLBP. The findings could inform physician training, development of educational materials for patients, and future research. abstract_id: PUBMED:15086657 The quality of medical care provided to vulnerable older patients with chronic pain. Objectives: To assess the quality of chronic pain care provided to vulnerable older persons. Design: Observational study evaluating 11 process-of-care quality indicators using medical records and interviews with patients or proxies covering care received from July 1998 through July 1999. Setting: Two senior managed care plans. Participants: A total of 372 older patients at increased risk of functional decline or death identified by interview of a random sample of community dwellers aged 65 and older enrolled in these managed-care plans. Measurements: Percentage of quality indicators satisfied for patients with chronic pain. Results: Fewer than 40% of vulnerable patients reported having been screened for pain over a 2-year period. One hundred twenty-three patients (33%) had medical record documentation of a new episode of chronic pain during a 13-month period, including 18 presentations for headache, 66 for back pain, and 68 for joint pain. Two or more history elements relevant to the presenting pain complaint were documented for 39% of patients, and at least one relevant physical examination element was documented for 68% of patients. Treatment was offered to 86% of patients, but follow-up occurred in only 66%. Eleven of 18 patients prescribed opioids reported being offered a bowel regimen, and 10% of patients prescribed noncyclooxygenase-selective nonsteroidal antiinflammatory medications received appropriate attention to potential gastrointestinal toxicity. Conclusion: Chronic pain management in older vulnerable patients is inadequate. Improvement is needed in screening, clinical evaluation, follow-up, and attention to potential toxicities of therapy. abstract_id: PUBMED:29212489 Health-care utilization of patients with chronic back pain before and after rehabilitation. Background: Patients with chronic back pain show an increased use of health-care services leading to high direct costs. Multidisciplinary rehabilitation reduces pain intensity, depression, disability and work inability. The study aims to investigate whether health-care utilization in patients with chronic back pain is lower after rehabilitation than before rehabilitation and if, in addition to sociodemographic, medical and psychological characteristics, changes in these characteristics immediately after rehabilitation can predict health-care utilization. Methods: N = 688 patients with chronic back pain were asked about their overall health-care services use and the use of general practitioners, specialists, physiotherapy, psychotherapy, complementary therapist, massages, and admission to hospital both 6 months before and 6 months after rehabilitation. In addition, various sociodemographic, medical and psychological variables were assessed. To measure changes due to rehabilitation, differences in pain intensity, disability, impairment and coping, quality of life, and days on sick leave before and after rehabilitation were calculated. Dependent t-tests and hierarchical regression analyses were used to analyse the data. Results: Health-care utilization 6 months after rehabilitation was, except for physiotherapy and psychotherapy, significantly lower than before. The effect sizes were rather small (Cohens'd =. 01-.34). After rehabilitation between 15.2% and 39.9% of the variance of health-care utilization could be explained. The baseline values of health-care utilization explained between 3.2% and 15.9% of the incremental variances. The changes in psychological impairment and coping as well as in sick leave after rehabilitation could explain between 0.8% and 2.9% of the variance of health-care utilization after rehabilitation. Its influence was significant for the general use of health-care services, general practitioners and specialists. Conclusions: The results indicate that use of health-care services after rehabilitation in the present study is slightly lower than before, which has an impact on direct costs due to chronic back pain in Germany. The predictors show the importance in terms of health-care utilization of improving work ability and psychological impairment. abstract_id: PUBMED:31367430 Struggling to be seen and understood as a person - Chronic back pain patients' experiences of encounters in health care: An interview study. Aim: The aim of this study was to describe how patients with chronic back pain experience encounters with health care. Persons with chronic back pain are a stigmatized group often treated based on stereotypes, which may lead to misunderstandings and create frustrated patients and healthcare personnel. Few studies have examined the generic aspects of quality of care in this context. Design: A descriptive design with a qualitative approach was used. Methods: Nine individual interviews were conducted with chronic back pain patients after admission to an orthopaedic hospital ward. Data were analysed using content analysis. Results: The patients' experiences of healthcare encounters can be described by the theme "Struggling to be seen and understood as a person," comprising the categories "Lack of access and trust to care," "A desire to be taken care of and listened to" and "Own strength to handle healthcare situations." abstract_id: PUBMED:23566128 Ethical quandaries in caring for primary-care patients with chronic pain. In the past decade, more and more behavioral health providers have begun consultation practices in primary-care settings. Their availability makes multidisciplinary care a reality and the possibility of improved outcomes for patients with chronic pain more feasible. However, behavioral health providers encounter new ethical quandaries in providing services to patients with chronic pain and to the primary-care providers who plan their treatment. This article presents two cases to illustrate the questions that arise in delivery of primary-care behavioral health services to patients with chronic pain. Relevant professional ethical guidelines for psychologists, social workers, and physicians are examined and recommendations for addressing the gaps in extant guides are offered. abstract_id: PUBMED:17007290 Masked (atypical) depression in patients with back pain syndrome in outpatient neurological care Unlabelled: Affective disorders, mainly depression is fast growing problem in highly civilised communities. The literature reports that 30-40% of patients seeking medical advice from specialists (not psychiatrists) in outpatient departments, suffer from these disorders. Many authors mention also diagnostic difficulties in dealing with atypical depression, and draw attention to the fact that this process is often "masked" by chronic pain. In neurological care we find a remarkable group of outpatients with back pain syndrome, mainly cervical and lumbar, with no corresponding lesions found on examination. This enabled us to assume that in some patients presenting with somatic symptoms and therefore referred to a neurologist, the reason of complaints can be depressive disorders. The aim of the study was to verify this hypothesis and to estimate the frequency of masked depression in neurological care. Material And Method: 100 patients were examined in the neurological outpatient department, with the use of the two--Beck and Hamilton--scales of depression evaluation. Results: A statistical analysis of the data thus obtained, showed that: back pain syndrome, as a symptom of depression, occurred in 14% of all patients referred to a neurologist, whereas in 37%--depressive disorders, not diagnosed before, were detected; in 88,2% they were of mild and moderate type. Conclusion: In some patients presenting with somatic symptoms and therefore referred to a neurologist, the reason of complaints are depresive disorders. abstract_id: PUBMED:26004570 Healthcare costs associated with elderly chronic pain patients in primary care. Objective: This study aimed to estimate the total healthcare costs associated with elderly chronic pain (CP) patients, define cost-related factors in this population, and examine cost evolution over two years. Method: This is an ancillary study from the CP S.AGE subcohort, including non-institutionalized patients aged over 65 suffering from CP. 1190, 1108, 1042, and 950 patients were reviewed with available healthcare data at follow-up visits at 6, 12, 18, and 24 months, respectively. Healthcare components included medical and paramedical visits, medication prescription, and hospitalization. Result: The mean total cost in the first semester was estimated at <euro>2548 ± <euro>8885 per patient. Hospitalization represented the largest cost component (50%) followed by paramedical care (24%), medications (21%), and medical visits (5%). Significant cost-associated factors were comorbidity (OR 1.49, 95% CI 1.35-1.64), dependency in daily activities (OR 1.85, 95% CI 1.39-2.47), probable depression (OR 1.71, 95% CI 1.09-2.69), permanent pain (OR 1.48, 95% CI 1.18-1.86), neuropathic pain (OR 1.94, 95% CI 1.38-2.73), living alone (OR 1.45, 95% CI 1.16-1.82), chronic back pain (OR 1.35, 95% CI 1.07-1.71), and vertebral fracture/compression (OR 1.47, 95% CI 1.08-2.01). Healthcare costs increased significantly by 48% (p < 0.0001) during follow-up namely due to hospitalizations. Elevated costs were associated with a higher risk of future hospitalization (OR 1.95, CI 95% 1.33-2.87). Conclusion: Healthcare costs increased rapidly over time, largely due to hospitalization. Prevention strategies to limit hospitalizations in elderly appear to be the most useful in order to achieve cost savings in the future. abstract_id: PUBMED:28463868 Application of Primary Care Guideline for Chronic Low Back Pain in the Emergency Department. Chronic low back pain (CLBP) is a primary care condition that overflows into the emergency department (ED). No ED-specific practice guideline exists for the management of patients with CLBP in the ED setting. Back pain is a common chief complaint, with cases of CLBP making up to 50% of the patients seen with back pain in an urban, freestanding ED affiliated with a multicampus health system in the Midwest where 25% of patients live below the poverty line and 21.8% do not have primary care. Kolcaba's Theory of Comfort guides the implementation of an evidence-based practice application project. The Guideline for the Evidence-Informed Primary Care Management of Low Back Pain from the National Guideline Clearinghouse is introduced. Interprofessional collaboration for immediate referrals was implemented. The plan-do-study-act model was applied with descriptive analysis of the data. Reflective of established specific aims of the project, there was increased (1) support for the use of evidence-based practice guideline among the ED providers and staff, (2) use of evidence-based practices, (3) connection of patients to community resources, (4) evidence-based education for patients, and (5) overall satisfaction with pain management. The effectiveness of the application of this primary care guideline in the ED opens the way both for region-wide application of the guidelines and for the introduction of other primary care practice guidelines for patients who present to the ED with other chronic conditions. abstract_id: PUBMED:27405752 Study protocol for a cluster randomized controlled trial to evaluate a referral strategy for axial spondyloarthritis in young primary care patients with chronic low back pain; an impact study. Background: Axial spondyloarthritis (axSpA) is a disabling inflammatory joint disease with chronic low back pain (CLBP) as leading symptom. Recognizing axSpA in the large amount of CLBP patients is difficult for general practioners (GP). This evaluation aims to assess the effect of a referral strategy for axSpA in young primary care patients with CLBP by comparing the use of the strategy with usual care. The effect is measured at three different levels; by patient reported outcomes (the clinical effect), process and costs evaluation. Methods/design: This study design is a cluster randomized controlled trial with GP as clusters. GPs throughout the Netherlands are invited to participate and randomized to either the intervention or the control group. Patients from participating GPs are invited to participate if they have ever been registered with low back pain, without radiation (ICPC L03) and aged 18-45 years. To be included in the study, patients need to have current low back pain and chronic low back pain (>12 weeks). In the intervention arm a referral strategy for axSpA will be applied in CLBP patients, in the control arm care as usual will be provided for CLBP patients. The referral strategy consists of four easy to use variables. All are questions about the back pain complaints of the patients. Data is prospectively collected in an online database at baseline (T0), 4 months (T1), 12 months (T2) and 24 months (T3). After time point T1 (4 months) patients from the control group will also receive the intervention i.e. the application of a referral strategy for axSpA. The effect of the referral strategy is measured at three different levels, by patient outcomes (e.g. pain scores, quality of life), process measures (e.g. number of axSpA diagnoses by rheumatologists) and by costs (work productivity and health care resources use). Our primary outcome is the Roland Morris Disability Questionnaire after 4 months, secondary outcomes are pain and quality of life. Costs will be assessed before and after the use of the referral strategy, to estimate if the use of the strategy will lead to a reduction in health care costs and improvement in work participation. Discussion: It is anticipated that using the axSpA referral strategy for primary care CLBP patients will increase the quality of life of CLBP patients, will result in more (correct) diagnoses of axSpA by the rheumatologists, and will be cost-effective. Ultimately, the results of this study may contribute to the startup of a national implementation of the axSpA referral strategy to identify timely CLBP patients with axSpA. Trial Registration: NCT01944163 , date of registration; September 6, 2013 (Clinicaltrials.gov). Answer: Based on the information provided in the abstracts, it appears that the role of the neurologist in the care of patients with chronic back pain may be limited. A study reviewed the disposition and outcome of patients with back pain seen in a neurology clinic and found that about two-thirds of these patients had non-neurologic conditions. These patients were typically treated with medications, rest, and physical therapy, which could be managed by primary care physicians. In the neurologic group, the majority were treated similarly to the non-neurologic group. When specific measures such as surgery or pain management procedures were needed, patients could be evaluated directly by the appropriate specialist rather than the neurologist. The study concluded that the neurologist's input did not significantly affect the diagnosis or management, suggesting that the neurologist may not have a useful role in the management of such patients (PUBMED:12186994). Additionally, other studies emphasize the importance of communication, assessment, and treatment in primary care settings for chronic low back pain (CLBP) patients (PUBMED:29035178), the inadequacy of chronic pain management in older vulnerable patients (PUBMED:15086657), and the slight decrease in healthcare utilization after rehabilitation for chronic back pain (PUBMED:29212489). Furthermore, patients with chronic back pain have expressed a desire to be seen and understood as a person during healthcare encounters (PUBMED:31367430), and ethical quandaries have been noted in caring for primary-care patients with chronic pain (PUBMED:23566128). There is also mention of masked depression in patients with back pain syndrome in outpatient neurological care (PUBMED:17007290) and the significant healthcare costs associated with elderly chronic pain patients (PUBMED:26004570). Lastly, the application of primary care guidelines for CLBP in the emergency department (PUBMED:28463868) and a study protocol for evaluating a referral strategy for axial spondyloarthritis in young primary care patients with CLBP (PUBMED:27405752) are discussed, which further supports the notion that primary care plays a critical role in the management of chronic back pain. In summary, while neurologists may see patients with chronic back pain, their role in the long-term management of these patients may be limited, with primary care physicians and other specialists potentially playing a more central role in the care of these patients.
Instruction: Detection of single-vessel coronary artery disease by dipyridamole stress echocardiography: no longer a problem? Abstracts: abstract_id: PUBMED:19222128 Detection of single-vessel coronary artery disease by dipyridamole stress echocardiography: no longer a problem? Unlabelled: We aimed to evaluate whether addition of myocardial contrast echocardiography (MCE) perfusion data improves the sensitivity of stress echocardiography for detection of single-vessel coronary artery disease (svCAD) and to compare the diagnostic value of MCE and single-photon emission computed tomography (SPECT) for detection of svCAD. Methods: One hundred and three patients with suspected or known stable CAD underwent dipyridamole (0.84 mg kg(-1) intravenously over 4 min)-atropine (up to 1 mg intravenously) stress echocardiography combined with MCE. Wall motion abnormalities (WMA) and perfusion defects were assessed visually. Presence of CAD was detected by coronary angiography. Results: Single-vessel coronary artery disease defined as >or=70% stenosis was detected in 30% of patients, whereas 26% of patients had svCAD defined as >or=50% stenosis. Presence of inducible WMA had 35% and 26% sensitivity for detection of svCAD defined as >or=70% and >or=50% stenosis, respectively. Concomitant evaluation of MCE increased the sensitivity to 74% (P = 0.005) and 56% (P = 0.053), respectively, using any inducible abnormality (WMA or perfusion defects) as a criterion. Presence of any (inducible or fixed) WMA had 77% and 59% sensitivity for detection of svCAD defined as >or=70% and >or=50% stenosis, respectively. In case of such criterion for stress test positivity, the improvement in sensitivity provided by addition of MCE (to 94% and 78%, respectively) did not reach statistical significance. Conclusions: Addition of MCE perfusion analysis during stress echocardiographical examination based on evaluation of inducible abnormalities improves the test sensitivity for detection of svCAD. This benefit is less apparent when fixed WMA and perfusion defects are incorporated into the stress test positivity criterion. abstract_id: PUBMED:11882424 Non-invasive diagnostic and prognostic assessment of single-vessel coronary artery disease: focus on stress echocardiography. Aims: Revascularization procedures are increasingly applied in patients with single-vessel coronary artery disease in spite of the fact that a prognostic benefit has been proved only for soft end-points. This review summarizes the results of stress echocardiography in the diagnostic and prognostic assessment of these patients. Methods And Results: The diagnostic and prognostic assessment of patients with single-vessel disease using stress (exercise, dobutamine, adenosine and dipyridamole) echocardiography are focused upon in the light of pathophysiological considerations and the results of clinical studies. Factors affecting test accuracy are individually addressed and comparisons made with different stress testing modalities, including exercise electrocardiography and nuclear techniques. Finally, therapeutic options are discussed and the superior accuracy of the physiological assessment of coronary stenosis as compared to the simple anatomic evaluation emphasized. Conclusions: Patients with single-vessel disease represent an anatomically heterogeneous group. Although the suboptimal performance of any technique in their evaluation has to be acknowledged, stress echocardiography can effectively contribute to selection of the management strategy. abstract_id: PUBMED:9362012 Inotropic stress with arbutamine is superior to vasodilator stress with dipyridamole for the detection of reversible ischemia with Tc-99m sestamibi single-photon emission computed tomography. Background: There is a paucity of data comparing the relative merits of inotropic and vasodilator stress Tc-99m sestamibi single-photon emission computed tomography (SPECT) for the detection of coronary artery disease and reversible ischemia. Methods And Results: Twenty-seven patients referred for diagnostic coronary arteriography underwent separate day dipyridamole and arbutamine Tc-99m sestamibi SPECT imaging with simultaneous two-dimensional echocardiography. The sensitivity of arbutamine and dipyridamole Tc-99m sestamibi for the detection of coronary artery disease was 100% (21 of 21) and 90% (19 of 21), respectively, with a specificity of 66% (4 of 6) for both. Coronary artery disease was detected in all six patients with single vessel disease by both stress modalities. The sensitivity for prediction of multivessel disease was 66% (10 of 15) for arbutamine and 46% (7 of 15) for dipyridamole stress. Arbutamine stress induced a greater extent and severity of perfusion abnormality at peak stress (peak perfusion score 25 +/- 6.2 and 21 +/- 5.9 for arbutamine and dipyridamole, respectively, p = 0.001) and reversible perfusion defects (difference between peak stress and rest scores 8.8 +/- 5.5 and 5.2 +/- 4.4 for arbutamine and dipyridamole, respectively, p = 0.001). Furthermore a significantly higher percentage of reversible defects induced by arbutamine stress was associated with wall thickening abnormality on simultaneous echocardiography, which is a more specific marker of myocardial ischemia (88% and 24% for arbutamine and dipyridamole, respectively, p = 0.002). Conclusion: Inotropic stress may be superior to vasodilators for the determination of the extent and severity of myocardial involvement and reversible ischemia by Tc-99m sestamibi SPECT. abstract_id: PUBMED:26165447 Longitudinal Strain by Automated Function Imaging Detects Single-Vessel Coronary Artery Disease in Patients Undergoing Dipyridamole Stress Echocardiography. Background: The aim of this study was to investigate the incremental value of global longitudinal strain (GLS) by automated function imaging in respect to wall motion (WM) for the detection of coronary artery disease (CAD) during dipyridamole stress echocardiography. Methods: Fifty-two patients (mean age, 65.3 ± 8.7 years; 22 men) underwent dipyridamole stress echocardiography followed by coronary angiography within 1 week. Diagnostic accuracy for the identification of single-vessel CAD was evaluated for WM and GLS. The study population was divided into two groups according to coronary angiographic findings: those with CAD (n = 38; mean age, 67.2 ± 5.9 years; 19 men) and those without CAD (n = 14; mean age, 63.3 ± 6.4 years; three men). Results: A trend toward lower resting GLS values was found in patients with CAD than in those without (-18.7 ± 2.2% vs -20 ± 2.8%, P = .061). In patients without CAD, GLS progressively increased up to peak dose (from -20 ± 2.8% at rest to -20.7 ± 1.9% at low dose, P = .045; from -20.7 ± 1.9% at low dose to -21.5 ± 3.1% at peak dose, P = .032), whereas in patients with CAD, an increase of GLS from rest to low dose (from -18.7 ± 2.2% to -19.2 ± 3.9%, P = .046) followed by a decrease from low to peak dose (from -19.2 ± 3.9% to -17.5 ± 2.4%, P = .007) was observed. In addition, with regard to diagnostic accuracy in detecting CAD, WM yielded sensitivity of 44%, specificity of 55%, positive predictive value of 73%, and negative predictive value of 26%, whereas GLS, alternatively evaluated as the difference between peak dose and resting values or between peak and low-dose values, provided sensitivity of 61%, specificity of 90%, positive predictive value of 94%, and negative predictive value of 47% (respectively, P = .020, P = .001, P = .023, and P = .031, all vs WM) and sensitivity of 84%, specificity of 92%, positive predictive value of 96%, and negative predictive value of 68% (respectively, P < .001, P < .001, P = .001, P < .001, all vs WM). Conclusions: GLS analysis, particularly performed by comparing peak-dose with low-dose values, improves the accuracy of dipyridamole stress echocardiography in the detection of single-vessel CAD compared with the sole assessment of WM changes. abstract_id: PUBMED:8993941 Comparative diagnostic value of dobutamine stress echocardiography and stress thallium-201 single-photon-emission computed tomography for detecting coronary artery disease in women. Background: The non-invasive diagnosis of coronary artery disease in women remains a difficult clinical problem. The aim of this study was to investigate the comparative diagnostic accuracy of dobutamine stress echocardiography and stress thallium-201 single-photon-emission computed tomography (SPECT) in women who were suspected to have coronary artery disease. Methods: We studied 70 consecutive women who had not had a previous myocardial infarction (mean age 65 years) who underwent dobutamine stress echocardiography and quantitative coronary angiography. Sixty-one of the 70 women were also subjected to thallium-201 SPECT, with exercise or dipyridamole administration as the stressor, during the same period. A positive diagnosis was based on the detection of inducible wall motion abnormality and the presence of stress-induced abnormalities of perfusion. Results: No significant complication occurred in any patient during dobutamine stress echocardiography (peak dose 31 +/- 9 micrograms/kg per min, atropine administered to 13 patients, peak heart rate 120 +/- 19 beats/min). Coronary artery disease (> or = 50% diameter stenosis) was present in 20 (29%) patients, nine with one-vessel and 11 with multivessel disease. The sensitivity, specificity and diagnostic accuracy of dobutamine stress echocardiography for detecting coronary artery disease were 75, 92 and 87% respectively. In 61 patients who underwent both stress tests, the sensitivity, specificity and accuracy of dobutamine stress echocardiography and stress thallium-201 SPECT were 72 versus 78%, 91 versus 70% (P < 0.05) and 85 versus 72%, respectively. Conclusions: Dobutamine stress echocardiography appears to be of comparable sensitivity and higher specificity with stress thallium-201 SPECT for noninvasive detection of coronary artery disease in women. abstract_id: PUBMED:11028482 Dobutamine-atropine stress echocardiography and dipyridamole sestamibi scintigraphy for the detection of coronary artery disease: limitations and concordance. Objectives: We sought to compare dobutamine-atropine stress echocardiography (DASE) and dipyridamole Technetium 99-m (Tc-99m) sestamibi single photon emission computed tomography (SPECT) scintigraphy (DMIBI) for detecting coronary artery disease (CAD). Background: Both DASE and DMIBI are effective for evaluating patients for CAD, but their concordance and limitations have not been directly compared. Methods: To investigate these aims, patients underwent multistage DASE, DMIBI and coronary angiography within three months. Dobutamine-atropine stress echocardiography and stress-rest DMIBI were performed according to standard techniques and analyzed for their accuracy in predicting the extent of CAD. Segments were assigned to vascular territories according to standard models. Angiography was performed using the Judkin's technique. Results: The 183 patients (mean age: 60 +/- 11 years, including 50 women) consisted of 64 patients with no coronary disease and 61 with single-, 40 with two- and 18 with three-vessel coronary disease. Dobutamine-atropine stress echocardiography and DMIBI were similarly sensitive (87%, 104/119 and 80%, 95/119, respectively) for the detection of CAD, but DASE was more specific (91%, 58/64 vs. 73%, 47/64, p < 0.01). Sensitivity was similar for the detection of CAD in patients with single-vessel disease (84%, 51/61 vs. 74%, 45/61, respectively) and multivessel disease (91%, 53/58 vs. 86%, 50/58, respectively). Multiple wall motion abnormalities and perfusion defects were similarly sensitive for multivessel disease (72%, 42/58 vs. 66%, 38/53, respectively), but, again, DASE was more specific than DMIBI (95%, 119/125 vs. 76%, 95/125, respectively, p < 0.01). Dobutamine-atropine stress echocardiography and DMIBI were moderately concordant for the detection and extent of CAD (Kappa 0.47, p < 0.0001) but were only fairly (Kappa 0.35, p < 0.001) concordant for the type of abnormalities (normal, fixed, ischemia or mixed). Conclusions: Dobutamine-atropine stress echocardiography and DMIBI were comparable tests for the detection of CAD. Both were very sensitive for the detection of CAD and moderately sensitive for the extent of disease. The only advantage of DASE was greater specificity, especially for multivessel disease. Dobutamine-atropine stress echocardiography may be advantageous in patients with lower probabilities of CAD. abstract_id: PUBMED:11797508 Comparison of dobutamine-stress magnetic resonance imaging and dipyridamole-TL-201-SPECT as alternative strategies for the detection of coronary artery dis in patients no suitable for stress echocardiography Aim: During the past decade stress-echocardiography has gained increasing popularity for detection of myocardial ischemia in patients with coronary artery disease. However, about 10% to 15% of the patients submitted for stress-echocardiography do not have an adequate acoustic window. The purpose of this study was to compare high-dose dobutamine-stress magnetic resonance imaging (dobutamine-MRI) with dipyridamole-Tl-201-SPECT (dipyridamole-SPECT) as alternative strategies for detection of myocardial ischemia in patients with inadequate image quality by stress-echocardiography. Patients And Methods: Of 296 patients which were consecutively submitted to stress-echocardiography 45 patients (15%) had two or more segments that could not be evaluated according to the 16-segment-model of the American Society of Echocardiography. They underwent dobutamine-MRI and dipyridamole-SPECT studies, which were evaluated using a 28-segment modell. Myocardial segments were attributed to perfusion territories of the coronary arteries. The results of ischemia detection were compared to the results of coronary angiography (stenosis > or = 50%). Results: In comparison to coronary angiography dobutamine-MRI yielded a sensitivity of 87%, a specificity of 86%, a positive predictive value of 93%, a negative predictive value of 75% and a diagnostic accuracy of 86%. For dipyridamole-SPECT results were 90%, 86%, 93%, 80% and 89%, respectively. These values were not significantly different. Conclusions: In patients not suitable for stress-echocardiography, both dobutamine-MRI and dipyridamole-SPECT are reliable strategies for detection of myocardial ischemia. Selection is dependent on the patient criteria, technical considerations, local logistics and experience of the observer. abstract_id: PUBMED:12767419 Comparison of usefulness of dipyridamole stress myocardial contrast echocardiography to technetium-99m sestamibi single-photon emission computed tomography for detection of coronary artery disease (PB127 Multicenter Phase 2 Trial results). We hypothesized that assessment of hyperemic myocardial blood flow (MBF) velocity using myocardial contrast echocardiography (MCE) can detect coronary artery disease (CAD). We also postulated that only a single MCE study during stress is required for the detection of CAD in patients with normal function at rest. Patients with known or suspected CAD referred for dipyridamole stress technetium-99m sestamibi single-photon emission computed tomographic (SPECT) studies were enrolled. MCE was performed concurrently with SPECT using continuous infusions of PB127 during intermittent harmonic power Doppler imaging at multiple pulsing intervals. MCE and SPECT were compared in 43 of 54 patients who had adequate studies using both techniques. In 15 of the 43 patients, coronary angiography was performed within 30 days of the MCE/SPECT tests. Overall concordance for classification of patients as normal versus abnormal was 84% (kappa = 0.63) between the 2 tests. When false-negative SPECT scans were corrected for results of angiography, concordance increased to 93% (kappa = 0.82). For territorial analysis, concordance between MCE and SPECT for location of perfusion defects was 65% (kappa = 0.41) and 74% (kappa = 0.61) after SPECT was corrected by angiography. In patients with normal function at rest, a single stress MCE perfusion study allowed identification of CAD with the same concordance as rest/stress perfusion studies. In conclusion, visual assessment of regional differences in MBF velocity using PB127 allows detection of CAD with good concordance compared with technetium-99m sestamibi SPECT. In patients with normal left ventricular function at rest, a single stress PB127 MCE perfusion study is adequate for the detection of CAD. abstract_id: PUBMED:7985586 Pharmacologic stress dual-isotope myocardial perfusion single-photon emission computed tomography. Separate-acquisition rest thallium-201/exercise technetium-99m sestamibi (sestamibi) dual-isotope single-photon emission computed tomography (SPECT) has been shown to be effective for assessment of myocardial perfusion and viability. The present study was designed to validate the dual-isotope approach when used in conjunction with pharmacologic stress. All patients had rest 201TI SPECT followed immediately by adenosine (n = 82) or dipyridamole (n = 50) infusion and sestamibi injection. Sestamibi SPECT was performed 1 hour later. The entire study lasted < 2.5 hours. The patient population was categorized into three groups: 51 consecutive patients with coronary angiography and no previous myocardial infarction (group I), 58 consecutive patients with a low prescintigraphic test likelihood of coronary artery disease (group II), and 23 consecutive catheterized patients with remote Q-wave myocardial infarction (group III). For group I patients, the sensitivity and specificity for dual-isotope SPECT were 92% (35 of 38) and 85% (11 of 13), respectively, when > or = 50% coronary artery narrowing was considered significant and were 97% (34 of 35) and 81% (13 of 16) respectively, when > or = 70% narrowing was considered significant. The normalcy rate among the 58 patients of group II was 96%. Comparisons for pattern of stress-defect reversibility demonstrated that of the 97 stress defects within the infarct zones (group III), 15% were reversible and 85% were nonreversible. In contrast, of the 227 stress defects within the diseased (> or = 50% stenosis) vessel zones of the group I patients, 93% were reversible and 7% were noreversible (p < 0.001 vs group III). In conclusion, separate acquisition rest 201-TI/pharmacologic stress sestamibi dual-isotope SPECT is an efficient myocardial perfusion imaging protocol with high accuracy for detection and assessment of angiographically significant coronary artery disease. abstract_id: PUBMED:9890491 How well does radionuclide dipyridamole stress testing detect three-vessel coronary artery disease and ischemia in the region supplied by the most stenotic vessel? Purpose: This study was done to evaluate the accuracy of radionuclide dipyridamole stress imaging to detect multivessel disease and ischemia in segments of myocardium supplied by the most stenotic vessel. Methods: A retrospective analysis of consecutive patients with known triple-vessel disease of at least 50% stenosis in each of the three major epicardial coronary arteries who had exercise (n=44) or dipyridamole (n=86) stress testing, or both, within 6 months of coronary angiography. Results: The accuracy of dipyridamole stress testing to detect three-vessel disease was 52% and ischemia was detected in the region supplied by the most stenotic vessel in 67% of patients. The sensitivity and specificity rates of radionuclide imaging to detect ischemia in the region supplied by the vessel of tightest stenosis were 69% and 74% for the left anterior descending coronary artery (LAD), 61% and 78% for the right coronary artery (RCA), and 61% and 57% for the left circumflex coronary artery (LCX). Based on these values, in 39% of patients in whom the RCA or LCX was the most stenotic vessel and in 31% of patients in whom the LAD was the most stenotic vessel, perfusion defects were not present on their nuclear scans. Conclusions: These results have important implications for interventional cardiologists who perform angioplasty on the most stenotic vessel, because the regions supplied by these vessels may not be the most ischemic. Furthermore, dipyridamole stress imaging may significantly underestimate the number of patients with substantial three-vessel coronary artery disease when qualitative imaging is done. Answer: The detection of single-vessel coronary artery disease (svCAD) by dipyridamole stress echocardiography has been a challenge, but recent studies suggest that the addition of certain techniques may improve its sensitivity. The addition of myocardial contrast echocardiography (MCE) perfusion data to stress echocardiography has been shown to improve the sensitivity for detecting svCAD. Specifically, the presence of inducible wall motion abnormalities (WMA) alone had a sensitivity of 35% for detecting svCAD defined as ≥70% stenosis, but this increased to 74% with the addition of MCE (PUBMED:19222128). This suggests that incorporating MCE into the stress echocardiography protocol can significantly enhance the detection of svCAD. Moreover, the use of global longitudinal strain (GLS) by automated function imaging in addition to wall motion (WM) analysis during dipyridamole stress echocardiography has also been found to improve diagnostic accuracy. GLS analysis, particularly when comparing peak-dose with low-dose values, provided higher sensitivity and specificity compared to WM changes alone (PUBMED:26165447). In comparison with other modalities, dobutamine stress echocardiography has been shown to have comparable sensitivity and higher specificity than stress thallium-201 SPECT for detecting coronary artery disease in women (PUBMED:8993941). Additionally, dobutamine-atropine stress echocardiography (DASE) and dipyridamole Technetium 99-m (Tc-99m) sestamibi SPECT scintigraphy (DMIBI) were found to be comparable tests for the detection of CAD, with DASE having greater specificity, especially for multivessel disease (PUBMED:11028482). Overall, while the detection of svCAD using dipyridamole stress echocardiography has been less than optimal in the past, the addition of MCE perfusion analysis and GLS assessment has improved its sensitivity and accuracy. Therefore, with these enhancements, detecting svCAD by dipyridamole stress echocardiography may no longer be as problematic as it once was.
Instruction: Is GERD a risk factor for laryngeal cancer? Abstracts: abstract_id: PUBMED:15744163 Is GERD a risk factor for laryngeal cancer? Objective/hypothesis: The significance of gastroesophageal reflux disease (GERD) in laryngeal cancer is controversial due to disparate studies. To evaluate the overall strength of the association of GERD with laryngeal cancer, we performed meta-analysis of the original studies in literature. Study Design: Meta-analysis. Methods: All studies cited on Ovid Medline (1966-June 2004), EMBASE (1980-June 2004), and Cochrane database describing GERD and laryngeal cancer were eligible for inclusion. The inclusion criteria for the study included original controlled study design and a clear documentation of the reflux prevalence in cases and controls. Statistical analysis was performed by NCSS software. Results: Fifteen original studies were identified. Eight studies did not have control groups, two studies did not clearly document GERD prevalence in controls, and two studies were published using the same data, one of which was included in this meta-analysis. Thus, four studies qualified for inclusion for the meta-analysis. The methodologic quality in the studies was heterogeneous, not only in the evaluation of confounding risk factors such as smoking and alcohol but also in the mode of GERD diagnosis. There was also significant heterogeneity of effect of reflux among the studies (P = .001). The pooled odds ratio on the basis of fixed-effects model was 2.86 (95% CI, 2.73-2.99), and on the basis of random-effects model was 2.37 (95% CI 1.38-4.08). Conclusion: Our meta-analysis suggests that GERD may be a significant risk factor for laryngeal cancer. However, given the heterogeneity of the published data, future prospective controlled studies are needed. abstract_id: PUBMED:32854119 GERD and adjuvant radio-chemotherapy predespose to recurrent voice prosthesis leakage After total laryngectomy, regaining ability to speech is a keystone in regards of life quality. Voice prostheses have been shown to be a sufficient tool for satisfying communication, although frequent replacements of prostheses are burdening certain patients. Therefore, a more accurate understanding of mechanisms of prosthetic leakage is urgently needed. Methods: We performed a retrospective analysis of 58 Patients after laryngectomy. Additionally, we analyzed pre- and post-therapeutic CT-scans of 22 Patients regarding pharyngeal stenosis. Results: In 40 Patients, at least one replacement of voice prosthesis was documented during observation period, median device life was 235 days. Patients treated with adjuvant radiotherapy (RT) showed a significantly longer device life than patients with adjuvant radio-chemotherapy (RCT, p = 0.002). Furthermore, patients suffering of gastroesophageal reflux disease (GERD) showed a significantly shortened device life (p = 0.04).17 patients (42.5 %) suffered of clinically relevant stenosis of the neopharynx, which was treated with dilatation in 14 patients (82 %) and did not affect prosthesis device life. Conclusion: GERD is a risk factor for shortened voice prosthesis' device life and therefore should be treated effectively after laryngectomy. Also, adjuvant RCT predisposes a shortened device life.Stenosis is observed frequently after laryngectomy but does not affect device life when effectively treated. abstract_id: PUBMED:11467626 Gastroesophageal reflux disease is a risk factor for laryngeal and pharyngeal cancer. Objective: Gastroesophageal reflux disease (GERD) is a proposed risk factor for developing laryngeal and pharyngeal cancers. No controlled study has examined this association. Methods: A case-control-study was performed using the computerized hospitalization and outpatient databases of the US Department of Veterans Affairs. All patients, who were veterans, had been identified as being hospitalized with laryngeal or pharyngeal during 1991 to 1997. In addition, all persons diagnosed with laryngeal or pharyngeal cancer in 1999 in the outpatient files were identified. From the same patient populations, four nonmatched control subjects were randomly assigned for each case. The medical history for cases and controls was retrospectively searched for GERD diagnoses, tobacco use, and alcohol dependence. Multivariable logistic regression analyses were performed to assess the risk factors for laryngeal and pharyngeal cancers. Results: A total of 8,228 hospitalized patients with laryngeal cancers and 1,912 with pharyngeal cancers were compared to 32,912 and 7,648 hospitalized controls, while 9,292 outpatients with laryngeal cancer and 2,769 outpatients with pharyngeal cancer were compared with 37,168 and 11,076 outpatient controls without cancer. Among hospitalized persons, the prevalence of GERD was higher among patients with laryngeal cancer (8.9 vs 4.0%, p < 0.0001) and pharyngeal cancer (6.2 vs 3.8%, p < 0.0001). In a multivariable logistic regression analysis that was controlled for age, gender, ethnicity, smoking, and alcohol, GERD was associated with an adjusted odds ratio (OR) of 2.40 for laryngeal cancer among hospitalized patients (95% CI 2.15-2.69, p < 0.0001) and an adjusted OR of 2.38 (95% CI 1.87-3.02, p < 0.0001) for pharyngeal cancer. For outpatients, GERD was associated with an adjusted OR = 2.31 (95% CI 2.10-2.53) for laryngeal cancer and adjusted OR = 1.92 (95% CI 1.72-2.15). Conclusions: Among US veterans, the risk for laryngeal or pharyngeal cancers is modestly increased in the presence of GERD. This effect seems to be independent of age, gender, smoking, and alcohol intake. abstract_id: PUBMED:21046549 Reevaluation of gastroesophageal reflux disease as a risk factor for laryngeal cancer. Objectives/hypothesis: The relationship between gastroesophageal reflux disease (GERD) and laryngeal cancer has not been fully elucidated. This case-control study investigates whether GERD increases the odds of developing these malignancies. Study Design: Case-control study. Methods: Rates of GERD among cases of laryngeal cancer identified in the Veterans Health Administration outpatient care files (year 2000-2006) were compared with controls. Cases (N = 14,449) were frequency matched 1:1 with controls. Multivariate logistic regression was used to determine the association between GERD and cancer. Results: After adjusting for tobacco and/or alcohol use, there was no association between GERD and laryngeal cancer (adjusted odds ratio, 1.01; 95% confidence interval, 0.92-1.12, P =.780). Although an association was found when time from GERD diagnosis to malignancy was less than 3 months, it disappeared when this period was extended further. Conclusions: In this population, there was no increased risk of laryngeal cancer among patients with GERD. However, in subsite analysis, a possible relationship between GERD and glottic cancer was observed. Reverse causality must be considered in future studies assessing the relationship between reflux and laryngeal cancer to limit misclassification bias. abstract_id: PUBMED:32841763 Gastroesophageal reflux disease and head and neck cancers: A systematic review and meta-analysis. Purpose: Gastroesophageal reflux disease (GERD) has been associated with head and neck cancer (HNC), including laryngeal and pharyngeal anatomical sites. A systematic review and meta-analysis was performed to examine these associations. Methods: Articles were retrieved from the Medline, Web of Science, Scopus, and Embase databases using keywords "gastroesophageal reflux disease", "laryngopharyngeal reflux", "head and neck cancer", and associated variants. Inclusion criteria were English language publications, human subjects, and controlled studies with described development of head and neck cancer among individuals with GERD. 13 studies with a total of 39,824 patients were included. Results: Overall, presence of GERD was associated with an increased risk of developing HNC (OR = 1.86, 95% Confidence Interval [CI] = 1.27-2.74). This association remained significant with laryngeal cancers (OR = 1.95, 95% CI = 1.33-2.86), but not pharyngeal cancers (OR = 1.56, 95% CI = 0.86-2.83). Subgroup analyses of hypopharyngeal (OR = 2.26, 95% CI = 0.67-7.68) and oropharyngeal subsites (OR = 1.39, 95% CI = 0.51-3.84) were not statistically significant. Meta-analysis of studies that objectively assessed reflux, such as pH monitor placement, showed statistical significance (OR = 2.81, 95% CI = 1.36-5.81), while studies that used subjective reporting or chart review of GERD were not significant (OR = 1.46, 95% CI = 0.89-2.40). Association between H. pylori infection and head and neck cancers was not statistically significant (OR = 2.66, 95% CI = 0.59-11.97). Conclusion: A diagnosis of GERD is associated with a later diagnosis of HNC, but this association is not significant for pharyngeal cancers. Associations of GERD with HNC may be specific to laryngeal cancers. Level Of Evidence: Systematic review and meta-analysis of case control studies (3a). abstract_id: PUBMED:25998850 Acid reflux and head and neck cancer risk: A nationwide registry over 13 years. Objective: Gastro-esophageal reflux disease (GERD) is a highly prevalent disorder; however, important questions remain regarding the link between GERD and extra-esophageal cancers. This nationwide cohort study investigated the risk of developing head and neck cancers (HNCs) among patients with GERD. Methods: Newly diagnosed GERD patients aged ≥20 years without antecedent cancer were included. Case data were obtained from the National Health Insurance Research Database covering period from 1998 to 2010. We compared the standardized incidence ratios (SIRs) of cancer among GERD patients with those of the general population. Results: A total of 98 cancers were identified among the 39,845 GERD patients in the study, representing 128,361 person-years. The SIR for all cases of cancer was 1.59 (95% CI 1.29-1.93). GERD patients exhibited significantly higher SIRs for oropharyngeal (SIR 3.58, 95% CI 1.85-6.25) and hypopharyngeal (SIR 3.96, 95% CI 2.35-6.26) cancers. Male patients had a significantly higher risk of HNCs (SIR 1.70, 95% CI 1.36-2.10), particularly oropharyngeal (SIR 4.01, 95% CI 2.00-7.17) and hypopharyngeal (SIR 3.91, 95% CI 2.28-6.26) cancers. Following adjustment for age and co-morbidities, the hazard ratio was 9.06 (95% CI 4.70-17.44) for males compared to females. Conclusion: There may be a potential association between GERD and risk of HNCs, which however merits further studies to confirm the causal relationship. Our observations indicate a need for careful extra-esophageal examination of patients with acid reflux. Our findings also underline the importance of raising awareness among clinicians regarding the possibility of concurrent HNCs in GERD patients with refractory laryngo-pharyngeal symptoms. abstract_id: PUBMED:15344539 Differential diagnosis of GERD It was widely accepted that the prevalence of GERD is lower in Oriental countries compared to Western countries. But the incidence of GERD has recently increased in Japan. The most commonly recognized manifestation of GERD is heartburn or a substernal burning sensation in the chest. Most patients with reflux esophagitis complain of typical symptoms such as heartburn, regurgitation or dysphagia. However, some patients complain of atypical symptoms such as hoarse voice, chronic cough, adult-onset asthma or vocal cord polyps. It is not always easy to diagnose atypical symptomatic patients as GERD. If patients who complain of these atypical symptoms have not improved with common medical treatment, GERD should be the consideration in its differential diagnosis. abstract_id: PUBMED:37676071 Gastroesophageal reflux disease and risk of cancer: Findings from the Korean National Health Screening Cohort. Aim: Little is known about the association of cancers other than esophageal adenocarcinoma with gastroesophageal reflux disease (GERD). This study aimed to examine the association between GERD and the risk of different types of cancer. Methods: A cohort study was conducted using data from the National Health Screening Cohort. We included 10,261 GERD patients and 30,783 non-GERD individuals who were matched in a 1:3 ratio by age and sex. All participants were followed-up until cancer diagnosis, death, or end of the study (December 31, 2015). Hazard ratios were calculated using the Cox proportional hazards model, adjusting for smoking and alcohol consumption, physical activity, body mass index, income, area, and Charlson Comorbidity Index. Results: The median follow-up time was 9.9 years. GERD was associated with an increased risk of esophageal (adjusted hazard ratios [aHR] = 3.20 [1.89-5.41]), laryngeal (aHR = 5.42 [2.68-10.96]), and thyroid cancers (aHR = 1.91 [1.55-2.34]) after controlling for all covariates. The results were consistent when examining GERD with esophagitis (K210) and without esophagitis (K219) separately. For thyroid cancer, the results were insignificant after controlling for having ever-received thyroid biopsy procedures. A dose-response relationship was found between GERD and esophageal cancer as well as laryngeal cancer, with patients with a longer duration of GERD treatment showing a stronger effect. In contrast, GERD was associated with a reduced risk of colorectal (aHR = 0.73 [0.59-0.90]), liver (aHR = 0.67 [0.51-0.89]), and pancreatic cancers (aHR = 0.43 [0.24-0.76]), which might have resulted from differences in healthcare utilization between GERD and non-GERD groups. Conclusion: GERD was associated with an increased risk of esophageal and laryngeal cancers. Additionally, early detection and treatment of precancerous lesions among the GERD group could lead to a lower risk of colorectal, liver, and pancreatic cancers. abstract_id: PUBMED:25131391 Gastroesophageal reflux and carcinoma of larynx or pharynx: a meta-analysis. Conclusion: The meta-analysis supported the proposition that the prevalence of gastroesophageal reflux disease (GERD) was associated with laryngeal cancer, particularly in the hospital-based control group and diagnosed by esophagogastroduodenoscopy (EGD) or esophageal pH monitoring. However, no significant association was found between GERD and pharyngeal carcinoma. Objectives: A number of studies have been conducted to investigate the relationship between gastroesophageal reflux and laryngeal or pharyngeal carcinoma. The conclusions are still debated. Methods: We conducted a systematic review of studies associated with the prevalence of GERD in laryngeal or pharyngeal cancer, published up to November 2013.Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) were estimated using random effects models or fixed effects models, according to heterogeneity I(2). Results: Ten studies were included in this meta-analysis. On average, GERD was significantly higher in the tumor tissue of the study group compared with normal tissue of the control group (OR = 2.17, 95% CI = 1.50, 3.14; random effects analysis). The pooled ORs for laryngeal carcinoma were 2.21 (95% CI = 1.53-3.19; I(2) = 97, random effects model) and 3.76 (95% CI = 0.21-67.48; I(2) = 94, random effects model) for pharyngeal carcinoma. abstract_id: PUBMED:12894422 Gastro-oesophageal reflux disease presenting as laryngeal pathology--epidemiology, symptoms, diagnosis. Preliminary report Unlabelled: Symptoms from the ears, throat, nose and larynx--the laryngeal mask's manifestations of gastroesophageal reflux disease (GERD) are important clinical problem of significant occurrence. Aim: (1) Determination of GERD's frequency of occurrence, especially within the group of patients with the pathology of the posterior laryngeal commissure. (2) Evaluation of the effective algorithms for detecting and management of patients demonstrating symptoms of laryngeal mask, which develop on the ground of gastroesophageal reflux disease. The investigation included 9 patients who were hospitalized in the Department of Otolaryngology of Karol Marcinkowski University of Medical Sciences in Poznań. Every patient was carefully interviewed by means of 3 questionnaires: the first one being Karlsson's general questionnaire, the second one--author's questionnaire designed in the Department of Otolaryngology of Medical Academy in Poznań which emphasises laryngeal symptoms; and the third one being the inquiry including serious past diseases, regularly taken medications, and stimulants. The objective condition of the hypopharynx and larynx was estimated on the basis of indirect laryngoscopy, followed by manometric examination of the oesophagus and 24-hour pH-metry carried out in Motility Laboratory of Surgical Department. 8 patients (88.9%) out of 9 were diagnosed as having GERD on the ground of the questionnaires as well as on the base of the results from the 24-hour pH-metry, and among 7 patients (87.5%) out of the 8 motility disfunction of the oesophagus was detected. Within the group of 8 patients with diagnosed GERD on the ground of additional tests 3 patients (37.5%) were diagnosed as having posterior laryngitis, 2 patients (25%) were diagnosed as having chronic laryngitis and the last 1 (12.5%) suffering from dysphagia. Among 3 patients (37.5%) with tumors of the larynx of rare location within the posterior laryngeal commissure coexistence of gastroesophageal reflux disease (GERD) was proved. The remaining 1 patient (11.1%) with the larynx cancer had no confirmation of GERD. The connection between chronic laryngitis and gastroesophageal reflux disease and/or motility disfunction of the oesophagus described in the literature was confirmed within the group of examined patients. Awareness of GERD and diagnostics carried out in that direction is needed in the group of patients with typical symptoms of the laryngeal mask. Significant frequency of coexistence of gastroesophageal reflux disease and pathology of the posterior laryngeal commissure requires effective algorithms for management of these patients. Answer: Yes, GERD is considered a risk factor for laryngeal cancer. Several studies have investigated the association between gastroesophageal reflux disease (GERD) and laryngeal cancer, with varying results. A meta-analysis of original studies suggested that GERD may be a significant risk factor for laryngeal cancer, with a pooled odds ratio indicating an increased risk, although the heterogeneity of the published data indicates that future prospective controlled studies are needed (PUBMED:15744163). Another study found that GERD was associated with an increased risk of laryngeal cancer among US veterans, independent of age, gender, smoking, and alcohol intake (PUBMED:11467626). A systematic review and meta-analysis also supported the association of GERD with an increased risk of developing head and neck cancer (HNC), specifically laryngeal cancers, but not pharyngeal cancers (PUBMED:32841763). Additionally, a nationwide registry study over 13 years indicated a potential association between GERD and the risk of head and neck cancers (PUBMED:25998850). A study using data from the Korean National Health Screening Cohort found that GERD was associated with an increased risk of esophageal and laryngeal cancers (PUBMED:37676071). Another meta-analysis supported the proposition that the prevalence of GERD was associated with laryngeal cancer, particularly when diagnosed by esophagogastroduodenoscopy (EGD) or esophageal pH monitoring (PUBMED:25131391). However, it is important to note that not all studies have found a positive association. One study found no association between GERD and laryngeal cancer after adjusting for tobacco and/or alcohol use, suggesting that the relationship may not be as clear-cut and that reverse causality must be considered in future studies (PUBMED:21046549). In summary, while there is evidence to suggest that GERD is a risk factor for laryngeal cancer, the strength of this association varies across studies, and further research is needed to fully understand the relationship.
Instruction: Home medication cabinets and self-medication: a source of potential health threats? Abstracts: abstract_id: PUBMED:18364405 Home medication cabinets and self-medication: a source of potential health threats? Background: Data regarding the contents of home medication cabinets (HMCs), the management of leftover medications, and the inclination of patients toward self-initiated treatment using nonprescription drugs are scarce. Objective: To evaluate the nature and safety of medication storage and intended self-medication in a general population. Methods: A cross-sectional study was conducted in 72 Belgian community pharmacies. Pharmacy customers (N = 288, aged 18-80 y) were visited in their homes by pharmacy students. The HMCs were inventoried and the participants were interviewed. Results: A mean of 31 +/- 17 (range 6-136) drug packages were identified per household; in one-third of the cases, the packages were not stored safely. Prescription drugs accounted for 34% of the total. The most frequently encountered categories of registered medicines were nonopioid analgesics (7.2%), nonsteroidal antiinflammatory drugs (NSAIDs) (6.9%), nasal decongestants (3.5%) and antinausea agents (3.2%). Despite their high prevalence, NSAIDs and non-opioid analgesics did not predominate among the most frequently used drugs, whereas food supplements were used daily in 23.3% of households. Twenty-one percent of the drugs were expired, 9% were not stored in the original container, and the package insert was missing for 18%. Self-initiated treatment was considered for 56% of all drugs (over-the-counter drugs, 74%; prescription drugs, 21%). Indication, dosage, or treatment duration was misjudged by only 5.2% of the participants, but consulting the package insert was allowed. The tendency toward self-treatment decreased with age and with increasing number of medications taken daily (p = 0.002). Conclusions: We found large amounts of drugs per household, with a high prevalence of analgesics and NSAIDs. Self-medication, although generally acceptable in terms of indication and dosage, was commonly practiced, also with prescription drugs. Taking into account that younger people showed a significantly higher intention of self-medication, a sustained awareness of the risks of self-medication is warranted for the future. abstract_id: PUBMED:33627223 Medication self-management: Considerations and decisions by older people living at home. Background: Medication self-management is complicated for older people. Little is known about older persons' considerations and decisions concerning medication therapy at home. Objective: (s): To explore how older people living at home self-manage their medication and what considerations and decisions underpin their medication self-management behavior. Methods: Semi-structured interviews with consenting participants (living at home, aged ≥65, ≥5 different prescription medications daily) were recorded and transcribed with supporting photographs. Content was analyzed with a directed approach and presented according to three phases of medication self-management (initiation, execution, and discontinuation). Results: Sixty people were interviewed. In the initiation phase, participants used different techniques to inform healthcare professionals and to fill and check prescriptions. Over-the-counter medication was seldom discussed, and potential interactions were unknown to the participants. Some participants decided to not start treatment after reading the patient information leaflets for fear of side effects. In the execution phase, participants had various methods for integrating the use of new and chronic medication in daily life. Usage problems were discussed with healthcare professionals, but side effects were not discussed, since the participants were not aware that the signs and symptoms of side effects could be medication-related. Furthermore, participants stored medication in various (sometimes incorrect) ways and devised their own systems for ordering and filling repeat prescriptions. In the discontinuation phase, some participants decided to stop or change doses by themselves (because of side effects, therapeutic effects, or a lack of effect). They also mentioned different considerations regarding medication disposal and disposed their medication (in)correctly, stored it for future use, or distributed it to others. Conclusions: Participants' considerations and decisions led to the following: problems in organizing medication intake, inadequate discussion of medication-related information with healthcare professionals, and incorrect and undesirable medication storage and disposal. There is a need for medication self-management observation, monitoring, and assistance by healthcare professionals. abstract_id: PUBMED:19443087 Self-medication and the elderly. The reality of the home medicine cabinet Objective: To estimate the prevalence of self-medication in the elderly, to describe the characteristics of self-medication and home medicine cabinets. Design: Cross-sectional descriptive study. Setting: An urban primary health care (PHC) centre in Spain. Participants: A total of 240 patients 75 years of age or older. Main Measurements: A structured questionnaire filled in by home care nurses from April to June 2006. Self-medication was classified into 4 sub-groups: pharmacological or herbal, exclusively pharmacological, exclusively herbal and pharmacological and herbal. The study variables were: age, sex, living alone, number of chronic diseases, number of chronic prescriptions and medicine cabinet characteristics (drugs accumulation, expired drugs, chronic drugs out of prescription, location of medicine chest and periodic review). Other variables were: drug types, source, and acute/chronic reason for self-medication. Results: Self-medication frequency was 31,2% (95% CI, 26-36), with 22.9% (95% CI, 17.5-27.5) being pharmacological and 15.4% (95% CI, 23.1-17.3), herbal. The drugs most used in self-medication were analgesics (30.9%) and cold remedies (27.2%). The pharmacy was the most usual source (49.3%). A total of 41.6% contained drugs that were out of chronic prescription, and 34.4% accumulated more than three boxes of the same medication. Conclusions: There is a significant prevalence of self-medication among the elderly. Accumulating drugs in homes is a very extensive practice. Pharmacists and PHC professionals need to be trained in the responsible use of chronic and self-medication. Although self-care has to be encouraged, PHC professionals must make their patients aware of the harmful effects of self-medication. abstract_id: PUBMED:26034822 Optimizing medication safety in the home. Medication safety among community-dwelling older adults in the United States is an ongoing health issue impacting health outcomes, chronic disease management, and aging in place at home. This article describes a medication safety improvement project that aimed to: (1) Increase the ability of participants to manage medications, (2) Identify and make necessary medication changes, (3) Create an accurate up-to-date medication list to be available in the home, and (4) Provide communication between the primary care provider, participant, and case manager. An in-home medication assessment was completed for 25 participants using an evidence-based medication management software system. This process was used to review medications; identify medication-related problems; create a shared medication list; and convey this information to the primary care provider, case manager, and client while addressing needed medication changes. Educational interventions on management and understanding of medications were provided to participants to emphasize the correct use of medications and use of a personal medication record. Outcome improvements included provision of an accurate medication list, early identification of medication-related problems, identification of drug duplication, and identification of medication self-management challenges that can be useful for optimizing medication safety-related home healthcare and inform future interventions. abstract_id: PUBMED:31349262 Nurses Acceptance of Automated Medication Dispensing Cabinets. The use of automated dispensing cabinets (ADCs) to enhance medication processes in hospitals has been increasing recently. Studies evaluated the effects of this technology on patient safety, workflow efficiency and cost reduction. To evaluate factors affecting nurses' attitudes and acceptance of using ADCs, an online survey, including seven categories with closed-ended questions and one open-ended question, was developed based on technology acceptance model and instruments used in previous studies. Response rate was 29.4% of 1,062 nurses at King Faisal Specialist Hospital and Research Centre, Jeddah, Saudi Arabia. Perceived usefulness, perceived ease of use, perceived usefulness to enhance control systems and training have positive effects on improving nurses' attitudes and increasing acceptance of using ADCs. Perceived risks had negative effects. The qualitative analysis of the open-ended responses supported these results and helped to identify many areas for improvement, especially in addressing perceived risks associated with the use of this technology. abstract_id: PUBMED:19391434 Home medication cabinets and self-medication: a source of potential health threats N/A abstract_id: PUBMED:37064318 Self-assessed medication risk factors as part of comprehensive health screening in home-dwelling older adults. Background: Poor medication management may negatively impact the health and functional capacity of older adults. This cross-sectional study aimed to identify medication-related risk factors in home-dwelling residents using a validated self-assessment as part of comprehensive health screening. Methods: The data were derived from comprehensive health screening (PORI75) for older adults of 75 years living in Western Finland in 2020 and 2021. One of 30 validated measures in health screening focused on identifying medication-related risk factors (LOTTA Checklist). The Checklist items were divided into (1) systemic risk factors (10 items) and (2) potentially drug-induced symptoms (10 items). Polypharmacy was categorized according to the number of used drugs: (1) no polypharmacy (<5 drugs), (2) polypharmacy (≥5 and <10), and (3) excessive polypharmacy (≥10). The linearity across these three polypharmacy groups was evaluated using the Cochran-Armitage test. Results: Altogether, 1024 out of 1094 residents who participated in the health screening consented to this study (n = 569 in 2020 and n = 459 in 2021). The mean number of all drugs in use was 7.0 (range 0-26; SD 4.1), with 71% of the residents using >5 drugs, that is, having polypharmacy. Of the systemic risk factors most common was that the resident had more than one physician responsible for the treatment (48% of the residents), followed by missing drug list (43%), missing regular monitoring (35%), and unclear durations of the medication (35%). The most experienced potentially drug-induced symptoms were self-reported constipation (21%), urinating problems (20%), and unusual tiredness (17%). An increasing number of drugs in use, particularly excessive polypharmacy, was associated with various medication-related risk factors. Conclusion: As a part of comprehensive health screening the LOTTA Checklist provides useful information to prevent medication-related risk factors in home-dwelling older adults. The Checklist could be used to guide planning and implementing health services in the future. abstract_id: PUBMED:16267190 Medication management at home: medication-related risk factors associated with poor health outcomes. Background: Some patients may have medication-related risk factors only identified by home visits, but the extent to which those risk factors are associated with poor health outcomes remains unclear. Objective: To determine the association between medication-related risk factors and poor patient health outcomes from observations in the patients' homes. Design: Cross-sectional study. Setting: Patients' homes. Subjects: 204 general practice patients living in their own homes and at risk of medication-related poor health outcomes. Methods: Medications and medication-related risk factors were identified in the patients' homes by community pharmacists and general practitioners (GPs). The medication-related risk factors were examined as determinants of patients' self-reported health related quality of life (SF-36) and their medication use, as well as physicians' impression of patient adverse drug events and health status. Results: Key medication-related risk factors associated with poor health outcomes included: Lack of any medication administration routine, therapeutic duplication, hoarding, confusion between generic and trade names, multiple prescribers, discontinued medication repeats retained and multiple storage locations. Older age and female gender were associated with some poorer health outcomes. In addition, expired medication and poor adherence were also associated with poor health outcomes, however, not independently. Conclusion: The findings support the theory that polypharmacy and medication-related risk factors as a result of polypharmacy are correlated to poor health outcomes. abstract_id: PUBMED:15000104 Preventing medication errors in home care. This practice brief highlights the results of two home health care studies on medication errors. The first study determined how often medication errors occur in home health care. The second study tested a strategy to reduce these errors. Although nearly one third of home care patients are at risk for potential medication errors, adding a simple, practical program can reduce the potential for errors. These results should encourage home care agencies to be more vigilant in monitoring medication errors and to institute programs that help prevent errors from occurring. abstract_id: PUBMED:29785094 Association of health literacy and medication self-efficacy with medication adherence and diabetes control. Introduction: The exact pathway linking health literacy, self-efficacy, medication adherence, and glycemic control for type 2 diabetes remains unclear. Understanding the relationship between patient factors, medication adherence, and lower glycated hemoglobin (HbA1c) may help patients better manage their disease. This study examined the association of health literacy and medication self-efficacy with self-reported diabetes medication adherence, and the association of health literacy, medication self-efficacy, and self-reported diabetes medication adherence with HbA1c of patients with type 2 diabetes. Methods: This cross-sectional study utilized a face-to-face questionnaire at two family medicine clinics in a Midwestern state among 174 patients; subjects enrolled were at least 20 years old with diagnosed type 2 diabetes, prescribed at least one oral diabetes medicine, and understood English. Questionnaires were administered to assess the participants': health literacy, using the Newest Vital Sign six-item questionnaire (NVS); self-efficacy for medication use, using the 13-item Self-Efficacy for Appropriate Medication Use Scale; and self-report medication adherence, using the eight-item Morisky Medication Adherence Scale. HbA1c values were obtained from participants' electronic medical records. Multiple linear regressions were used to explore the association of health literacy and medication self-efficacy with both medication adherence and HbA1c level after controlling for all other covariates. Results: Self-reported health status (β = 0.17, p = 0.015) and medication self-efficacy (β = 0.53, p < 0.001) were positively associated with diabetes medication adherence. Health literacy was neither associated with diabetes medication adherence (β = -0.04, p = 0.586) nor HbA1c (β = -0.06, p = 0.542). Lower diabetes medication adherence (β = -0.26, p = 0.008) and higher number of prescribed medications (β = 0.28, p = 0.009) were correlated with higher HbA1c. Conclusion: Health literacy, as measured by the NVS, does not correlate with medication adherence or glycemic control among patients with type 2 diabetes. Interventions to improve patients' self-efficacy of medication use may improve diabetes medication adherence. Answer: The contents and management of home medication cabinets (HMCs) and the practice of self-medication have been identified as potential sources of health threats. A study conducted in Belgian community pharmacies found that an average household contained 31 drug packages, with one-third not stored safely. Prescription drugs accounted for 34% of the total, with nonopioid analgesics and nonsteroidal anti-inflammatory drugs (NSAIDs) being the most common. Despite the prevalence of these drugs, they were not the most frequently used, as food supplements were used daily in 23.3% of households. Notably, 21% of the drugs were expired, 9% were not stored in the original container, and the package insert was missing for 18%. Self-medication was considered for 56% of all drugs, with a higher tendency among younger people, indicating a need for sustained awareness of the risks associated with self-medication (PUBMED:18364405). Another study highlighted that older people living at home often self-manage their medication with various considerations and decisions that can lead to problems such as inadequate discussion of medication-related information with healthcare professionals, incorrect and undesirable medication storage and disposal, and inadequate organization of medication intake. The study suggested a need for observation, monitoring, and assistance by healthcare professionals in medication self-management (PUBMED:33627223). In Spain, a study found a significant prevalence of self-medication among the elderly, with 31.2% engaging in the practice. The study revealed that accumulating drugs at home is a common practice, and there is a need for training in the responsible use of chronic and self-medication among pharmacists and primary healthcare professionals (PUBMED:19443087). In the United States, a medication safety improvement project aimed to increase participants' ability to manage medications, identify and make necessary medication changes, create an accurate up-to-date medication list, and provide communication between the primary care provider, participant, and case manager. The project resulted in the provision of an accurate medication list, early identification of medication-related problems, and identification of drug duplication and medication self-management challenges (PUBMED:26034822). Overall, these studies indicate that home medication cabinets and self-medication practices can indeed be sources of potential health threats, particularly when there is a lack of safe storage, expired medications, and inadequate communication with healthcare professionals. There is a clear need for better management and education regarding medication use, especially among the elderly and those who self-medicate.
Instruction: Is It Time to Revisit the Role of Prostate-specific Antigen Kinetics in Active Surveillance for Prostate Cancer? Abstracts: abstract_id: PUBMED:27130265 Is It Time to Revisit the Role of Prostate-specific Antigen Kinetics in Active Surveillance for Prostate Cancer? Objective: To identify factors that are not available at the time of prostate cancer diagnosis and are associated with the risk of biopsy progression in active surveillance (AS) patients. Materials And Methods: The study included 314 AS patients who had at least 1 repeat biopsy. We used logistic regression to analyze the association between prostate-specific antigen (PSA) and its derivatives, including PSA density, PSA velocity (PSAV) and doubling time (PSADT); presence of bilateral disease and number of previous successive negative surveillance biopsies; and the risk of progression on the surveillance biopsies first through fourth. Results: Over a median follow-up of 3.1 years, patients had a mean of 2.4 biopsies. The median time from diagnosis to the last biopsy was 2.3 years. The biopsies were performed at fairly equal intervals. For surveillance biopsies 1 through 3, none of the studied factors was adding significant prognostic information to the baseline characteristics. PSAV and PSADT were associated with the risk of progression on the fourth biopsy; this association was independent of baseline characteristics. No progression on the fourth biopsy was noted in 23 patients with negative PSAV. Among 54 patients with PSADT of more than 3 years only, 2 progressed whereas 6 out of 9 patients with a PSADT less than 3 years had biopsy progression on the fourth surveillance biopsy. Conclusion: PSA kinetics may be helpful in defining the indications for prostate biopsy in AS patients who are followed with regular biopsies for more than 3-4 years. abstract_id: PUBMED:28847461 Active surveillance of prostate cancer Several prospective studies have demonstrated the safety of active surveillance as a first treatment of prostate cancer. It spares many patients of a useless treatment, with its potential sequelae. Patients with a low-risk cancer are all candidates for this approach, as recommended by the American Society of Clinical Oncology (ASCO). Some patients with an intermediate risk could be also concerned by active surveillance, but this is still being discussed. Currently, the presence of grade 4 lesions on biopsy is a contra-indication. Modalities included a repeated prostate specific antigen test and systematic rebiopsy during the first year after diagnosis. MRI is now proposed to better select patients at inclusion and also during surveillance. No life style changes or drugs are significantly associated with a longer duration of surveillance. abstract_id: PUBMED:37331345 Value of Prostate-Specific Antigen Kinetics in Patients with Low-Risk Prostate Cancer under Active Surveillance. Introduction: This study analyzes the value of PSA kinetics, PSA speed (vPSA), and PSA doubling time (PSAdt), in patients with low-risk prostate cancer who are in an active surveillance (AS) program. Methods: An observational, retrospective, and longitudinal study of a sample of 86 patients included in AS program between January 2014 and October 2021 was conducted. A review of their medical records was performed, and PSA kinetics were calculated, analyzing the causes of discontinuation of the AS program and its relationship with PSA kinetics. Results: The mean age was 63.39 years, and the median follow-up was 62.55 months. The mean PSA at diagnosis was 8.27 ng/mL. A median of PSAdt of 62.55 months and 1.3 ng/mL/year for vPSA was obtained. 35 patients left the program, with a higher percentage of patients leaving with a PSAdt less than 36 months (73.7 vs. 31.1%) and a vPSA greater than 2 ng/mL/year (68.2 vs. 31.3%). The probability of permanence and the time of permanence in AS were statistically significantly higher for those patients with favorable kinetic parameters. Conclusion: PSA kinetics is a parameter to take into account when making decisions to keep patients in an AS program. abstract_id: PUBMED:26763088 Prediagnostic prostate-specific antigen kinetics and the risk of biopsy progression in active surveillance patients. Objective: To analyze the association between prediagnostic prostate-specific antigen kinetics and the risk of biopsy progression in prostate cancer patients on active surveillance, and to study the effect of prediagnostic prostate-specific antigen values on the predictive performance of prostate-specific antigen velocity and prostate-specific antigen doubling time. Methods: The study included 137 active surveillance patients with two or more prediagnostic prostate-specific antigen levels measured over a period of at least 3 months. Two sets of analyses were carried out. First, the association between prostate-specific antigen kinetics calculated using only the prediagnostic prostate-specific antigen values and the risk of biopsy progression was studied. Second, using the same cohort of patients, the predictive value of prostate-specific antigen kinetics calculated using only post-diagnostic prostate-specific antigens and compared with that of prostate-specific antigen kinetics based on both pre- and post-diagnostic prostate-specific antigen levels was analyzed. Results: Of 137 patients included in the analysis, 37 (27%) had biopsy progression over a median follow-up period of 3.2 years. Prediagnostic prostate-specific antigen velocity of more than 2 ng/mL/year and 3 ng/mL/year was statistically significantly associated with the risk of future biopsy progression. However, after adjustment for baseline prostate-specific antigen density, these associations were no longer significant. None of the tested prostate-specific antigen kinetics based on combined pre- and post-diagnostic prostate-specific antigen values were statistically significantly associated with the risk of biopsy progression. Conclusions: Historical prediagnostic prostate-specific antigens seems to be not clinically useful in patients diagnosed with low-risk prostate cancer on active surveillance. abstract_id: PUBMED:29433975 Refined Analysis of Prostate-specific Antigen Kinetics to Predict Prostate Cancer Active Surveillance Outcomes. Background: For men on active surveillance for prostate cancer, utility of prostate-specific antigen (PSA) kinetics (PSAk) in predicting pathologic reclassification remains controversial. Objective: To develop prediction methods for utilizing serial PSA and evaluate frequency of collection. Design, Setting, And Participants: Data were collected from men enrolled in the multicenter Canary Prostate Active Surveillance Study, for whom PSA data were measured and biopsies performed on prespecified schedules. We developed a PSAk parameter based on a linear mixed-effect model (LMEM) that accounted for serial PSA levels. Outcome Measurements And Statistical Analysis: The association of diagnostic PSA and/or PSAk with time to reclassification (increase in cancer grade and/or volume) was evaluated using multivariable Cox proportional hazards models. Results And Limitations: A total of 851 men met the study criteria; 255 (30%) had a reclassification event within 5 yr. Median follow-up was 3.7 yr. After adjusting for prostate size, time since diagnosis, biopsy parameters, and diagnostic PSA, PSAk was a significant predictor of reclassification (hazard ratio for each 0.10 increase in PSAk=1.6 [95% confidence interval 1.2-2.1, p<0.001]). The PSAk model improved stratification of risk prediction for the top and bottom deciles of risk over a model without PSAk. Model performance was essentially identical using PSA data measured every 6 mo to those measured every 3 mo. The major limitation is the reliability of reclassification as an end point, although it drives most treatment decisions. Conclusions: PSAk calculated using an LMEM statistically significantly predicts biopsy reclassification. Models that use repeat PSA measurements outperform a model incorporating only diagnostic PSA. Model performance is similar using PSA assessed every 3 or 6 mo. If validated, these results should inform optimal incorporation of PSA trends into active surveillance protocols and risk calculators. Patient Summary: In this report, we looked at whether repeat prostate-specific antigen (PSA) measurements, or PSA kinetics, improve prediction of biopsy outcomes in men using active surveillance to manage localized prostate cancer. We found that in a large multicenter active surveillance cohort, PSA kinetics improves the prediction of surveillance biopsy outcome. abstract_id: PUBMED:35243397 Magnetic Resonance Imaging-guided Active Surveillance of Prostate Cancer: Time to Say Goodbye to Protocol-based Biopsies. Traditional protocols for active surveillance (AS) are commonly based on digital rectal examination, prostate-specific antigen (PSA), and standard transrectal biopsy, meaning that initial classification errors and inaccurate lesion monitoring can occur. Protocol-based biopsies are performed to assess changes in cancer grade and extent at prespecified intervals, but this approach represents a barrier to AS adherence and tolerability. There is evidence to support the use of magnetic resonance imaging (MRI) during AS, as this technique (associated with favourable PSA kinetics) offers an opportunity to follow patients on AS without the need for routine, protocol-based biopsies in the absence of signs of radiological progression provided that image quality, interpretation, and reporting of serial imaging are of the highest standards. Patient Summary: In this report we looked at the role of magnetic resonance imaging (MRI) scans in avoiding unnecessary prostate biopsies for patients being monitored for low- or intermediate-risk prostate cancer. We conclude that patients on active surveillance can be monitored with MRI scans over time and that biopsies could be used only when there are changes on MRI or a rising prostate-specific antigen (PSA) not explained by an increase in prostate size. abstract_id: PUBMED:25374902 Active surveillance in men with low-risk prostate cancer: current and future challenges. Introduction: The implementation of prostate-specific antigen (PSA) screening has coincided with a decrease in mortality rate from prostate cancer at the cost of overtreatment. Active surveillance has thus emerged to address the concern for over-treatment in men with low-risk prostate cancer. Methods: A contemporary review of literature with respect to low-risk prostate cancer and active surveillance was conducted. The premise of active surveillance, ideal candidates, follow-up practices, treatment triggers, and the observed outcomes of delayed interventions are reviewed. Various institutional protocols are compared and contrasted. Results: Eligibility criteria from various institutions share similar principles. Candidates are followed with PSA kinetics and/or repeat biopsies to identify those who require intervention. Various triggers for intervention have been recognized achieving overall and cancer-specific survival rates > 90% in most protocols. New biomarkers, imaging modalities and genetic tests are also currently being investigated to enhance the efficacy of active surveillance programs. Conclusion: Active surveillance has been shown to be safe and effective in managing men with low-risk prostate cancer. Although as high as 30% of men on surveillance will eventually need intervention, survival rates with delayed intervention remain reassuring. Long-term studies are needed for further validation of current active surveillance protocols. abstract_id: PUBMED:32482513 Predictive factors of active surveillance interruption for prostate cancer after 5years of follow-up Introduction And Purpose: The objective of this work was to identify the risk factors (RFs) of active surveillance (AS) interruption in a prostate cancer (PCa) single-center retrospective cohort of patients. Material And Method: All patients in AS between January 2011 and October 2019 were retrospectively included in a computerized database. The group of patients who had an AS interruption was compared to the one still under AS, in order to identify potential risk factors for the interruption of the surveillance protocol. Results: Two hundred and two patients have been included in the AS cohort with a median follow-up of 32months. At the time of analysis, 72 patients (36%) were not under the AS protocol anymore, 118 (58%) were still under AS and 12 (6%) were lost of follow-up. Sixty-six patients (92%) had left SA due to PCa progression, 4 (5%) by personal choice and 2 (3%) switched to watchful waiting. A PSA doubling Time<3years (PSADT<3years) has been identified as the only statistically significant RF for AS interruption, both in the unvaried (P<0.001) and multivariate (OR=5.403, P<0.01) analysis. It was also the only RF of AS interruption in the early analysis in the first three years of AS, in the unvaried analysis (P=0.021) and the multivariate analysis (OR=3.612, P=0.018). Conclusion: PSADT was the only RF of AS early and late interruption in our study. It represents a major inclusion criterion in AS protocol during the initial assessment. Level Of Evidence: 3. abstract_id: PUBMED:20620962 Active surveillance for localized prostate cancer The widespread application of PSA screening has led to an important increase of the small and well-differentiated prostate cancer. Despite natural history of prostate cancer has not been completely elucidated; it has been proved that the evolution of low grade tumours was favorable and that some of them remain indolent. In these cases, curative therapies and their associated morbidities might be considered as overtreatment. Active surveillance should be an option to limit this overtreatment. It is obvious that the initial risk stratification used for active surveillance wasn't enough restrictive. From now on, it seems that a PSA<10 ng/ml, a Gleason score<7 and more than 10 prostate biopsies are the good criteria to propose for the selection of eligible patients. However, the debate about adequate and accurate criteria is still ongoing between several teams worldwide involved in active surveillance. International prospective studies are in progress and are necessary to establish selections criteria and modalities of surveillance and predictors of active treatment. We need to wait for conclusion from prospective studies results. However, it appears that active surveillance offers yet the possibility to delay active treatment and its complications in selected cases. abstract_id: PUBMED:25202685 Prostate cancer: a review of active surveillance. The objective of this paper is to review the current recommendations for active surveillance in prostate cancer from the present prospective studies. Worldwide, there are increasing numbers of men with prostate cancer. It is now accepted as standard care that a number of men with favorable-risk disease can be followed with active surveillance. In 1995, the first prospective studies were initiated to assess the feasibility of active surveillance, in which the decision to intervene was determined by prostate-specific antigen and/or histological progression. The strategy was to provide therapy individualized to the biological behavior of the cancer. Clinical trials assessing active surveillance have usually included patients younger than 70 years of age, although the guidelines have changed over time for Gleason score and prostate-specific antigen, eg, doubling time, thereby changing the indication for active treatment. The present review focuses on patient selection, prospective studies reported in the literature, and future directions. Answer: Based on the information provided in the abstracts, it appears that the role of prostate-specific antigen (PSA) kinetics in active surveillance (AS) for prostate cancer is indeed an area worth revisiting. Several studies have highlighted the potential value of PSA kinetics, including PSA velocity (PSAV) and PSA doubling time (PSADT), in managing patients with low-risk prostate cancer who are under AS programs. For instance, one study found that PSA kinetics may be helpful in defining indications for prostate biopsy in AS patients who are followed with regular biopsies for more than 3-4 years (PUBMED:27130265). Another study concluded that PSA kinetics is a parameter to consider when making decisions to keep patients in an AS program, as favorable kinetic parameters were associated with a higher probability and longer time of remaining in AS (PUBMED:37331345). Moreover, a refined analysis of PSA kinetics suggested that incorporating serial PSA measurements into AS protocols could improve the prediction of biopsy outcomes, indicating that PSA kinetics calculated using a linear mixed-effect model significantly predicts biopsy reclassification (PUBMED:29433975). However, it is also noted that the predictive value of prediagnostic PSA kinetics was not significant after adjusting for baseline PSA density, suggesting that historical prediagnostic PSA values may not be clinically useful in patients diagnosed with low-risk prostate cancer on AS (PUBMED:26763088). Additionally, the use of magnetic resonance imaging (MRI) in conjunction with favorable PSA kinetics has been proposed as a means to follow patients on AS without the need for routine protocol-based biopsies, provided that the quality and interpretation of serial imaging are maintained at high standards (PUBMED:35243397). In summary, the evidence suggests that PSA kinetics can be a valuable tool in the management of patients on AS for prostate cancer, potentially improving decision-making and reducing the need for unnecessary biopsies. However, the optimal use of PSA kinetics in AS protocols may require further refinement and validation in clinical practice.
Instruction: Is it time for a new TNM classification in esophageal carcinoma? Abstracts: abstract_id: PUBMED:31029372 The 8th edition of the AJCC-TNM classification: New contributions to the staging of esophagogastric junction cancer. The new 8th edition of the TNM classification system for esophageal and cardia or esophagogastric junction cancer provides important innovations in the TNM stages. Two classifications are presented, updated by stages, clinical (cTNM) and pathological (pTNM) methods, together with another pathological classification applicable to cases receiving neoadjuvant treatment (ypTNM). There is a notable increase in complexity compared to previous versions, but it is still early to determine whether the current modifications will result in a clear improvement in the prognostic discrimination of survival among the patient groups (which is their main objective), although the initial expectations are favorable. abstract_id: PUBMED:32519741 Updates in the 8th edition of the TNM staging system for esophagus and esophagogastric junction cancer. The tumor-node metastasis (TNM) classification, originally developed in 1943 and subsequently adopted by the Union for International Cancer Control and the American Joint Committee on Cancer, is regularly updated based on new information and developments. The TNM classification system is the main tool used for both clinical and pathological staging of cancers worldwide. The 8th edition of the TNM classification for esophageal and esophagogastric junction (EGJ) cancer, released in 2017, was updated from the 7th edition based on additional data supplied by the Worldwide Esophageal Cancer Collaboration group. We summarize the main changes between the 7th and 8th editions of this TNM classification. Notable changes included separate clinical, pathological and pathological prognostic staging for adenocarcinomas and squamous cell carcinomas. Pathological prognostic staging was also improved by updating the T- and N-factors regarding histopathological differentiation and tumor location, respectively. The definition of EGJ cancer was changed from tumors centered within 5 cm to tumors within 2 cm of the EGJ. These updates to the TNM classification will help to improve the personalized management and treatment of patients with esophageal and EGJ cancers. abstract_id: PUBMED:30345104 Clinical implication of the innovations of the 8th edition of the TNM classification for esophageal and esophago-gastric cancer. Epidemiology of esophageal cancer and esophagogastric junction (EGJ) has deeply changed for the past two decades with a dramatically increase of adenocarcinoma whereas squamous cell carcinoma (SCC) has slowly decreased. Moreover, the two histological types differ in a number of features including risks factors, tumor location, tumor biology and outcomes. In acknowledgement of these differences, the newest 8th edition of the American Joint Committee on Cancer (AJCC) tumor, node and metastasis (TNM) staging classification of epithelial cancers of the esophagus and EGJ has refined this histology-specific disease stage with incorporation of new anatomic and non-anatomic categories. Based on data-driven of patients collected through the Worldwide Esophageal Cancer Collaboration (WECC) group, the 8th edition database encompasses a six-continent cohort of 22,654 patients among 33 institutions including patients treated with surgery alone and, for the first time, patients treated after neoadjuvant treatment. Anatomic categories include T descriptors (tumor invasion), N descriptors (regional lymph node invasion) and M descriptors (distant site). Non anatomic categories include grade differentiation (G descriptors) and tumor location (L descriptors). Category descriptors are currently assessed by endoscopy with biopsy, by endoscopy ultrasound fine-needle aspiration (EUS-FNA), by thoracic-abdominal-pelvic computed tomography (CT) and whole body flurodeoxyglucose positron emission tomography (FDG-PET) fused with CT. The new 8th edition considers separate and temporally related cancer classification based on the treatment strategy: clinical cTNM (before any treatment), pathologic pTNM (after surgery alone) and postneoadjuvant pathologic ypTNM (after neoadjuvant treatment followed by surgery). The 8th edition permits a more robust and reliable random forest-based machine learning analysis. Refinement of each T, N, M categories and subcategories makes the 8th edition more accurate and more adaptable to the current practice including neoadjuvant regimen. The main objective of this review is to examine the current staging of esophageal cancer and the new aspects of the 8th edition with its applications to clinical practice. abstract_id: PUBMED:26379858 Turning left or right? A comparative analysis in adenocarcinomas of the esophagogastric junction according to the seventh AJCC TNM classification for cancers of the esophagus and stomach: experience in a Chinese single institution. The seventh AJCC TNM classification defines rules for classifying adenocarcinomas of esophagogastric junction (AEG II and III) as a part of esophageal cancer. But there are still many controversies over the classification system. The study aims to evaluate and compare whether AEG should be classified as cancers of esophagus or stomach. A single-center cohort of patients with AEG or proximal third gastric adenocarcinoma underwent surgical resection with curative intent in Shanghai from November 2004 to July 2011. We compared the clinicopathologic features between AEG (n=291) and proximal third gastric adenocarcinoma (n=176) and analyzed overall survival probabilities of AEG using the latest seventh AJCC TNM classification for cancers. Patients with AEG not only show more advanced diseases, but also have a significantly worse 5-year survival rate than those with proximal third gastric adenocarcinoma (P=0.027). In 291 patients with AEG, the gastric T classification is monotone but indistinct except for pT2 versus pT3 (P=0.001) and pT4a versus pT4b (P=0.012). The esophageal T classification is neither monotone nor distinct. For the N classification, both schemes are monotone and distinct. The gastric scheme is indistinctive for stages IA versus IB (P=0.428), for IIA versus IIB (P=0.376), for IIB versus IIIA (P=0.086), for IIIA versus IIIB (P=0.087), and for IIIC versus IV (P=0.928). The esophageal scheme is indistinct only except for IIIB versus IIIC (P=0.002). The gastric scheme includes one heterogeneous stage group (stage IIIC, P<0.001), whereas all stage groups are homogeneous in the esophageal scheme. Although AEG shows different clinicopathological features and surgical outcomes of patients, the current seventh AJCC TNM classification which stages the AEG in the esophageal scheme does not demonstrate the advantages in the assessment of the patient prognosis. We propose a revised staging system to clarify the AEG with esophageal invasion. abstract_id: PUBMED:4094095 A proposal for a new TNM classification of esophageal carcinoma. Japanese Committee for Registration of Esophageal Carcinoma. The TNM classification of malignant tumors is known to reflect the prognosis well and is being used widely in many countries. For TNM classification of the esophagus, we present a new one which has been modified according to the Japanese Guidelines for Esophageal Carcinoma. The new T classification is based on the depth of the invasion of the tumor. The new N contains metastasis not only to the mediastinal nodes but also to the perigastric nodes. We collected data on 3,211 patients with carcinoma of the thoracic esophagus during the period 1969 to 1978 in Japan. These patients were examined according to the present and new TNM classifications. The results of the study show that the new classification has a better correlation with the survival rates, and new T and new N are easily estimated by clinical examination. abstract_id: PUBMED:9170533 TNM classification of carcinoma of the esophagus TNM classification of esophageal carcinoma was first described in the supplement to the first edition of the TNM classification in 1973. In the second edition, the classification was changed based on the data of 1,000 cases from the Task Force on Esophagus of American Joint Committee. In this edition, only the clinical classification was described, but the third edition included both clinical and post-surgical histopathological classification. But the criteria for T and pT classification differed. Before the fourth edition, specialists from Japan and the United States met in Hawaii in 1984. Data of the Japanese Nationwide Registration, including 7,742 patients from 1969 to 1978, were presented. After discussion based on these data, T was classified according to the depth of invasion, and perigastric lymph nodes were included in Regional Nodes in the fourth edition. Then, the TNM Research Committee of ISDE collected patient data of esophageal carcinoma from seven countries, and they were studied according to many factors. Based on these data, two proposals were made to the UICC TNM Committee. First, T1 should be divided into two categories: T1a, Tumor invasion of lamina propria; and T1b, Tumor invasion of submucosa. Second, metastases to distant lymph nodes should be grouped into the N classification instead of M classification. The first was accepted in the TNM Supplement of 1993, and the second will be accepted in the Fifth Edition, which will appear in 1997. It is important to accumulate data on many patients using the uniform registration form and to follow these patients very closely in the discussion of revisions to the TNM classification. abstract_id: PUBMED:25842811 Esophagogastric junction cancer in the TNM classification The current 7th version of the TNM classification newly defines esophagogastric junction cancer as cancer invading the junction, of which the epicenter is located within 5cm proximal or distal to the junction. Progression of tumors is newly staged in this version of the TNM classification by separating adenocarcinoma and squamous cell carcinoma. The background to this revision was to resolve the confusion that occurred when classification of cancer of the esophagus or stomach was left to the judgment of the physician in the 6th version, to acknowledge that the prognosis of gastric cancer patients differs depending on tumor location, and to reflect the rapid increase in the incidence of adenocarcinoma of the cardia and lower esophagus in the USA. Siewert classification is reflected in the definition of esophagogastric junction cancer. The new classification appears appropriate for esophageal squamous cell carcinoma and Siewert type I adenocarcinoma, but there is doubt about type II and type III adenocarcinomas. German, Korean, and Japanese surgeons have investigated which classification of esophageal or stomach junction cancer was better for predicting patient prognosis, and the Japanese report demonstrated that its classification as stomach cancer was more appropriate. abstract_id: PUBMED:25561808 Evaluation of the 7th edition of the TNM classification in patients with resected esophageal squamous cell carcinoma. Aim: To evaluate the prognostic factors and tumor stages of the 7(th) edition TNM classification for esophageal cancer. Methods: In total, 1033 patients with esophageal squamous cell carcinoma (ESCC) who underwent surgical resection with or without (neo)adjuvant therapy between January 2003 and June 2012 at the Thoracic Surgery Department II of the Beijing Cancer Hospital, Beijing, China were included in this study. The following eligibility criteria were applied: (1) squamous cell carcinoma of the esophagus or gastroesophageal junction identified by histopathological examination; (2) treatment with esophagectomy plus lymphadenectomy with curative intent; and (3) complete pathologic reports and follow-up data. Patients who underwent non-curative (R1) resection and patients who died in hospital were excluded. Patients who received (neo)adjuvant therapy were also included in this analysis. All patients were restaged using the 7(th) edition of the Union for International Cancer Control and the American Joint Committee on Cancer TNM staging systems. Univariate and multivariate analyses were performed to identify the prognostic factors for survival. Survival curves were plotted using the Kaplan-Meier method, and the log-rank test was used to evaluate differences between the subgroups. Results: Of the 1033 patients, 273 patients received (neo)adjuvant therapy, and 760 patients were treated with surgery alone. The median follow-up time was 51.6 mo (range: 5-112 mo) and the overall 5-year survival rate was 36.4%. Gender, "pT" and "pN" descriptors, (neo)adjuvant therapy, and the 7(th) edition TNM stage grouping were independent prognostic factors in the univariate and multivariate analyses. However, neither histologic grade nor cancer location were independent prognostic factors in the univariate and multivariate analyses. The 5-year stage-based survival rates were as follows: IA, 84.9%; IB, 70.9%; IIA, 56.2%; IIB, 43.3%; IIIA, 37.9%; IIIB, 23.3%; IIIC,12.9% and IV, 3.4%. There were significant differences between each adjacent staging classification. Moreover, there were significant differences between each adjacent pN and pM subgroup. According to the pT descriptor, there were significant differences between each adjacent subgroup except between pT3 and pT4 (P = 0.405). However, there was no significant difference between each adjacent histologic grade subgroup and between each adjacent cancer location subgroup. Conclusion: The 7(th) edition is considered to be valid for patients with resected ESCC. However, the histologic grade and cancer location were not prognostic factors for ESCC. abstract_id: PUBMED:33434341 The prognostic value of tumor deposits and the impact on the TNM classification system in esophageal cancer patients. Objective: To explore the prognostic significance of tumor deposits (TDs), isolated tumor foci lacking residual lymph nodes, in esophageal cancer (EC). Methods: A retrospective review of patients with EC undergoing esophagectomy between 2005 and 2017 was conducted. The prognostic value of TD was evaluated using a Cox regression model. Patients from different sources and periods were split into discovery and validation sets. A propensity score matching model was used in the validation set to reduce the confounding bias. The impact of TD on the TNM classification system was evaluated. Results: The discovery and validation sets included 179 and 2875 patients, respectively. Propensity-matched patients with and without TDs were constructed in the validation set with 132 patients in each group. Overall survival (p < .001 and p = .004, respectively) and disease-free survival (p < .001 and p = .019, respectively) were both decreased in TD positive patients in the discovery set and propensity-matched groups of validation set. Classifying patients with TDs into pN3 stage improved the discriminative power of the current TNM staging system. Conclusions: TD is an independent prognostic factor for EC. The inclusion of TD in the TNM staging system may upstage appropriate patients to help guide therapy, and future studies are warranted. abstract_id: PUBMED:3783965 Problem of the general rules and the TNM classification According to the TNM classification stage of disease progression, intrathoracic and lower esophagus carcinoma without good prognosis were stage II by T3. On the other hand, cardiac and cervical lymph node metastases with good prognosis were categorised as M1 and correspond to stage IV, so that the meaning of stage grouping is disappeared each other. A new classification of esophagus carcinoma based on the depth of invasion is proposed by the Esophagus Carcinoma Registration Committee. Answer: The TNM classification system is a critical tool for staging cancers, including esophageal carcinoma, and it is periodically updated to reflect new research and clinical findings. The 8th edition of the TNM classification for esophageal and esophagogastric junction (EGJ) cancer, released in 2017, introduced significant changes from the 7th edition, such as separate clinical, pathological, and pathological prognostic staging for adenocarcinomas and squamous cell carcinomas, as well as updates to the T- and N-factors regarding histopathological differentiation and tumor location, respectively (PUBMED:32519741). These updates aimed to improve personalized management and treatment of patients with esophageal and EGJ cancers. The 8th edition also refined the staging system by incorporating new anatomic and non-anatomic categories based on data from the Worldwide Esophageal Cancer Collaboration group, which included a large cohort of patients treated with surgery alone and those treated after neoadjuvant treatment (PUBMED:30345104). The definition of EGJ cancer was changed from tumors centered within 5 cm to tumors within 2 cm of the EGJ, and the classification now considers separate and temporally related cancer classifications based on the treatment strategy (PUBMED:31029372). Despite these updates, there is ongoing discussion about the adequacy of the current TNM classification for esophageal carcinoma. For instance, a study from a Chinese single institution suggested that the 7th edition TNM classification did not demonstrate advantages in assessing patient prognosis for adenocarcinomas of the esophagogastric junction (AEG) and proposed a revised staging system to clarify AEG with esophageal invasion (PUBMED:26379858). Additionally, there have been proposals for further refinement of the TNM classification, such as the inclusion of tumor deposits (TDs) in the staging system, which has been shown to be an independent prognostic factor and may improve the discriminative power of the current TNM staging (PUBMED:33434341). Given the complexity of esophageal carcinoma and the continuous evolution of our understanding of its prognosis and treatment, it is likely that further updates to the TNM classification will be necessary in the future. The periodic revisions are essential to ensure that the classification system remains relevant and accurately reflects the prognosis and guides the treatment of patients with esophageal carcinoma. Therefore, while the 8th edition represents a significant step forward, the question of whether it is time for a new TNM classification in esophageal carcinoma remains open, with the expectation that future editions will continue to refine and improve upon the current system.
Instruction: Are the outcomes of transobturator tape procedure for female stress urinary incontinence durable in long-term follow-up? Abstracts: abstract_id: PUBMED:32015908 Efficacy of urodynamic studies in predicting long-term outcomes of the transobturator tape: do they augment clinical assessment? Introduction: Some controversy exists regarding necessity for urodynamic evaluation prior to surgical management of stress urinary incontinence (SUI). We aimed to interrogate the role of pre and post-operative urodynamic studies versus clinical assessment in predicting long-term patient reported outcomes of transobturator tape (TOT) placement. Material And Methods: A 100 patient cohort of women post TOT insertion for stress/mixed urinary incontinence 2005-2010, under a single surgeon, was identified. Results of pre and post-operative clinical assessment and urodynamic studies were retrospectively evaluated. Long-term patient reported outcome measures (PROMs) were assessed using the International Consultation on Incontinence Questionnaire (ICIQ) Short Form, Patient Global Impression of Severity (PGI-S) and Patient Global Impression of Improvement (PGI-I) questionnaires. The role of urodynamic studies in predicting postoperative voiding dysfunction, and long-term procedure outcomes was analysed. Statistical correlations were performed using SPSS. Results: Questionnaire response rate was 76/100 (76%) at mean follow-up 9.4 years (7.25-12.75). Mean ICIQ score was 6.32 (1-20). No significant correlations between preoperative pDet QMax and postoperative uroflow/duration of intermittent self catheterisation (ISC), or between preoperative leak-point pressures and outcome were observed. Postoperative urodynamic tests did not reliably predict long-term success in SUI cure. Preoperative clinical urgency was a more reliable predictor of long-term clinical urgency than urodynamic detrusor overactivity. Whilst patients with mixed urinary incontinence at long-term follow-up tended to have the highest (worst) overall ICIQ-SF and ICIQ quality of life score, no studied variables on preoperative CMG were significantly correlated with long-term PROMs. Conclusions: Whilst urodynamic studies provide important baseline bladder function data, prior to mid-urethral sling placement, this study finds no specific value of either pre or postoperative urodynamics in predicting long-term patient reported outcomes of transobturator tape placement. abstract_id: PUBMED:32603568 Long-term Outcomes of 100 Transobturator Tape Procedures. Aim Satisfactory short-term outcomes of transobturator tapes (TOTs) are recognized, yet a lack of long-term data exists. We investigated long-term patient-reported outcomes of TOTs. Methods A retrospective review was performed of 100 female patients post TOT insertion by a single surgeon (2005-2010). Results and postoperative complications were identified. At long-term follow-up, patients completed ICIQ-Short Form, PGI-S and PGI-I questionnaires. Results Mean age was 51.7 years (33-75), mean follow-up 9.4 years (7.25 - 12.75). Clinically, 68/100(68%) had mixed and 32/100(32%) pure stress urinary incontinence (SUI). Short-term cure/significant improvement in SUI was seen in 98/100(98%). Grade >2 Clavien-Dindo complications occurred in 10/100(10%) within 6 months of surgery. Long-term questionnaire response rate was 76/100(76%). 62/76 (81.57%) described current urinary condition as "much" or "very much" "better." No new complications emerged at long-term follow-up. Conclusion TOTs demonstrate high success rates in treatment of SUI, with no late-onset complications identified in our study. Recent concerns surrounding use of polypropylene tapes make reporting of long-term outcomes both desirable and necessary. abstract_id: PUBMED:25125897 Long-term outcome of transobturator suburethral tape procedure for treatment of female stress urinary incontinence. Objective: The aim is to evaluate the long-term safety and efficacy of transobturator tape (TOT) procedure in the treatment of women with stress urinary incontinence (SUI) by subjective and objective measures. Patients And Methods: A total 48 women with SUI underwent the TOT procedure during the period from December 2005 to February 2008. The follow-up period ranged between 60 and 84 months (mean 71 months). Mean age was 44.21 ± 7.52 (range: 30-58). Preoperative and early postoperative data were retrieved from the patient's medical files. Follow-up evaluation was carried out every 3 months during the 1(st) year and yearly afterwards by history taking (including incontinence and quality-of-life questionnaire), clinical examination, urine analysis, abdominopelvic ultrasonography, and urodynamic studies when indicated. Results: At 12-month follow-up, the cure, improvement, and failure rates were 39 patients (81.25%), 5 (10.42%) and 4 (8.33%), respectively. The corresponding rates at the last follow-up were 38 (79.15%), 5 (10.42%) and 5 (10.42%) respectively. The postvoid residual urine and peak flow rates did not differ significantly between the preoperative values and at the last follow-up. The complication rates after a mean 71-month included one patient with voiding difficulty and two patients with de novo urgency. No case was recorded with retention of urine or erosion. Conclusions: Transobturator tape procedure appeared to be effective minimally invasive procedure for SUI with low rate of complication and good long-term outcome. abstract_id: PUBMED:26264474 Retropubic tension-free vaginal tape and inside-out transobturator tape: a long-term randomized trial. Introduction And Hypothesis: The tension-free vaginal tape (TVT) and inside-out transobturator tape (TVT-O) are first-line surgical treatments for stress urinary incontinence (SUI). However, there is a lack of information regarding the long-term comparative safety of these procedures. Methods: A total of 140 SUI patients were randomized to the TVT or TVT-O procedure and were interviewed by an independent investigator at the follow-up. The primary outcomes were the proportions of patients with long-term postoperative complications. The secondary outcomes included the cure rates, quality of life (QOL) and sexual function based on validated questionnaires. Results: One hundred and twenty (85.71%) patients completed the long-term follow-up. More TVT patients experienced perioperative complications (P < 0.05). However, in a mean follow-up of 95 months, no significant between-group difference was found in the proportions of patients with long-term complications or in the variety of reported complications. The long-term complication rates for TVT and TVT-O were 43.1% and 27.4% respectively (P = 0.07). De novo voiding (15.8%) and storage symptoms (10.8%) were the primary long-term complications. Tape exposure was possible up to 7 years after TVT-O. The objective cure rates of TVT and TVT-O procedures were 79.30 and 69.35% respectively, which were not significantly higher than the subjective rates. The Pelvic Floor Impact Questionnaire (PFIQ-7) scores remained improved (P < 0.001) after both procedures, even at the 95-month follow-up. No difference was observed in the Pelvic Organ Prolapse/Urinary Incontinence Sexual Function Questionnaire Short Form (PISQ-12) scores after either procedure. Conclusions: In the long term, the proportion of patients with complications and the cure rates of the two procedures did not differ significantly. The long-term complication rates were high, but morbidity was low, and the QOL remained improved. abstract_id: PUBMED:32744183 Vaginocutaneous fistula: A rare long-term complication of transobturator tape procedure for stress urinary incontinence. Introduction: Synthetic mid-urethral slings are currently the most common performed surgical procedure for the treatment of female stress urinary incontinence. The transobturator mid-urethral sling technique has been widely accepted owing to its high success and low complication rates. Although complications are rarely seen, it may cause significant morbidity. Case Presentation: We report a case of vaginocutaneous fistula following transobturator mid-urethral sling procedure and a successful reconstruction with transvaginal sling excision and fistula closure. Conclusion: Vaginocutaneous fistula is a known but rarely seen long-term complication of transobturator tape. With an increased use of mesh, various delayed complications can be seen in the long-term follow-up. This case showed us the need for longer and detailed studies that evaluate the effectiveness and safety of the transobturator tape procedure. abstract_id: PUBMED:34783388 Long-term outcomes of transobturator suburethral tapes for female stress urinary incontinence. Aims: The aim is to evaluate long-term effectiveness and safety of transobturator midurethral slings (TO-MUS) for treating female stress urinary incontinence (SUI). Possible risk factors for failure and complications are also evaluated. Methods: A descriptive retrospective study was performed among women with SUI treated at a tertiary urogynecology unit between January 2004 and December 2006. Women with stress-predominant mixed urinary incontinence or with associated pelvic organ prolapse were also included. Postoperative follow-up was performed at 1, 6, and 12 months and yearly thereafter. Outcomes were classified as cured, improved or failed. Results: Of 896 women operated on over the study period, 565 were suitable for the analysis. Of them, 327 women completed the 5-year, 225 the 7-year, and 172 the 10-year follow-up periods. Success rates were 77.6%, 73.8%, and 73.2% at 5, 7, and 10 years, respectively. On multivariate Cox regression models age, body mass index, and previous incontinence surgery were related to failure. Complications of any type were described in 24.9% of patients, although most of them were mild. Only concomitant prolapse surgery was related to intraoperative and peroperative complications and the inside-out route to an increased odds of groin pain (odds ratio = 4.0). Conclusions: This study showed that TO-MUS is an effective and safe procedure in the long-term. The expected outcomes and possible side effects profile should be discussed in detail during the counseling process before opting for a treatment option. abstract_id: PUBMED:24384878 Are the outcomes of transobturator tape procedure for female stress urinary incontinence durable in long-term follow-up? Purpose: To evaluate long-term cure rates and late complication rates after treatment for female stress urinary incontinence (SUI) with transobturator tape (TOT) procedure and to compare the outcomes of 1st year versus 5th year. Methods: We analyzed 138 women who underwent TOT procedure for pure SUI and mixed urinary incontinence in two institutions during the time period of June 2005-May 2008 retrospectively. We used two kinds of polypropylene monofilament tapes (Heine Medizinurethral support system, Germany and I-STOPCL Medical, France) for the standard outside-in TOT in similar numbers. All patients were evaluated with pelvic examination including cough stress test and International Consultation on Incontinence Questionnaire-Short Form at 3 and 12 months and annually. Our primary outcome measures were rates of objective cure, subjective cure, patient satisfaction and failure for long-term follow-up. Results: The objective cure, subjective cure and patient satisfaction rates of the 126 women at 1 year were 89.6, 86.5 and 92% respectively. During 5-year follow-up, objective cure rate was stable with 87.3% rate (p = 0.554), whereas subjective cure and patient satisfaction rates were decreased to 65.9 and 73%, respectively (p = 0.001). Complications are reported according to the Clavien-Dindo classification with Gr I 14.3%, Gr II 64.3%, Gr IIIa 7.1% and Gr IIIb 14.3 %. Conclusions: TOT procedure is an effective minimal invasive procedure with satisfactory results for female SUI in short term. Although recovery in SUI symptoms was stable during 5-year follow-up, subjective cure and patient satisfaction rates decreased significantly due to urge urinary incontinence symptoms. abstract_id: PUBMED:31086536 The outcomes of transobturator tape intervention in the treatment of stress urinary incontinence: Two years' follow-up. Objective: To evaluate the clinical outcomes and the effects on quality of life of transobturator tape surgery during a 2-year follow-up period in our clinics. Methods: Eighty-seven patients with stress or mixed urinary incontinence who underwent transobturator tape surgery were included in the study conducted in Istanbul. Taksim. Training. and Research Hospital Gynecology and Obstetrics Clinic, between 2011 and 2013. The patients' demographic features, incontinence questionnaires, quality of life scores [Incontinence Impact Questionnaire (IIQ-7) and urinary distress inventories (UDI-6)], examination findings, urodynamic results, stress tests, Q tip tests, number of daily pads, ultrasonography, surgery, and cystoscopy results were recorded. Patients were evaluated 23-27 months (mean: 25.40±1.31 months) after their discharge in terms of symptoms, quality of life scores, urodynamic findings, complications, and stress test. Results: Sixty-three (72.4%) patients had stress incontinence and 24 (27.6%) patients had mixed urinary incontinence. No perioperative complications were observed in our study. After a follow-up period of two years, a significant improvement was detected in the IIQ-7 and UDI-6 questionnaires when compared with the preoperative period. Moreover, the objective cure rate was found as 88.5% (n=77). De novo urge incontinence was obtained in 5.7% (n=5) of patients and was treated with anticholinergics. Perineal pain was present in 3 (3.44%) patients and was treated with analgesics and cold packs. In 2 (2.29%) patients, vaginal mesh erosion was detected and full recovery was achieved with an excision. Urinary retention and bladder perforation was not seen in any patients. Conclusion: Our study revealed a high objective cure rate, and an improvement in symptoms and quality of life with the transobturator tape operation. abstract_id: PUBMED:27032312 Is initial success after Monarc(®) suburethral sling durable at mid-term evaluation? Objectives: Suburethral sling is the gold standard treatment for stress urinary incontinence (SUI). Short-term cure rates are high, but only few studies are available for longer assessment after transobturator tape procedure. The objectives of this study were to assess mid-term functional outcome for Monarc(®) transobturator tape after initial success, and to identify risk factors for recurrence. Material And Methods: We conducted a single centre retrospective study (2004-2013) on consecutive women with SUI who underwent Monarc(®) transobturator tape procedure and were initially cured at the postoperative medical consultation. Pre- and postoperative data (age, weight, height, body mass index, hormonal status, surgical history, associated organ prolapse [Baden and Walker], associated urinary symptoms, postoperative complications [Clavien-Dindo]) were extracted from the electronic medical record. Subjective cure was defined by a score of zero from the ICIQ-SF questionnaire, no second intervention for recurrent SUI and no need for pads at latest news. Statistical analysis was performed using SAS(®) v9.3 (P<0.05). Results: One hundred and thirty-three consecutive women underwent TOT Monarc(®) procedure, and 125 women were cured in the short-term. Among these women, 103 (82%) were available for mid-term evaluation. Sixty-four women (62%) had pure stress urinary incontinence. The mean follow-up period was 51 months [2-119]. At last follow-up, cure rate was 61%. Seventy-eight percent of women with recurrent urinary incontinence had SUI. Other women had mixed urinary incontinence (3/40), or de novo urgency (6/40). In univariate analysis, we could not identify pejorative prognostic factors for mid-term failure. Conclusion: In our experience, mid-term functional outcome after Monarc(®) transobturator tape procedure seems to deteriorate. After 4 years of follow-up, 61% of the women who were initially cured were still free from any leakage. Level Of Evidence: 4. abstract_id: PUBMED:37074367 Long-term results of a prospective randomized trial comparing tension-free vaginal tape versus transobturator tape in stress urinary incontinence. Introduction And Hypothesis: This study was aimed at investigating the long-term effectiveness of minimally invasive mid-urethral sling (MUS) surgery and at comparing the outcomes between retropubic (tension-free vaginal tape, TVT) and transobturator tape (TOT) methods in the treatment of stress urinary incontinence (SUI) and mixed urinary incontinence (MUI) with a predominant stress component in a long-term follow-up of a randomized controlled trial. Methods: This work is a long-term follow-up study of a previous prospective randomized trial conducted in the Department of Obstetrics and Gynecology at Oulu University Hospital between January 2004 and November 2006. The original 100 patients were randomized into the TVT (n=50) or TOT (n=50) group. The median follow-up time was 16 years, and the subjective outcomes were evaluated using internationally standardized and validated questionnaires. Results: Long-term follow-up data were obtained from 34 TVT patients and 38 TOT patients. At 16 years after MUS surgery, the UISS significantly decreased from a preoperative score in the TVT (11.88 vs 5.00, p<0.001) and TOT (11.05 vs 4.95, p<0.001) groups, showing a good long-term success of the MUS surgery in both groups. In comparing the TVT and TOT procedures, the subjective cure rates did not differ significantly between the study groups in long-term follow-up according to validated questionnaires. Conclusion: Midurethral sling surgery had good long-term outcomes in the treatment of SUI and MUI with a predominant stress component. The subjective outcomes of the TVT and TOT procedures were similar after a 16-year follow-up. Answer: The outcomes of the transobturator tape (TOT) procedure for female stress urinary incontinence appear to be durable in long-term follow-up. Several studies have reported on the long-term effectiveness and safety of TOT, with success rates remaining relatively stable over time. A study with a mean follow-up of 9.4 years reported that 81.57% of patients described their current urinary condition as "much" or "very much" "better," indicating a high success rate in the treatment of stress urinary incontinence (SUI) with no late-onset complications identified (PUBMED:32603568). Another study with a follow-up period ranging between 60 and 84 months (mean 71 months) found that the cure and improvement rates were 79.15% and 10.42%, respectively, at the last follow-up, suggesting that TOT is an effective minimally invasive procedure with a low rate of complications and good long-term outcomes (PUBMED:25125897). A randomized trial with a mean follow-up of 95 months showed no significant difference in the proportions of patients with long-term complications or in the cure rates between retropubic tension-free vaginal tape (TVT) and inside-out transobturator tape (TVT-O), with the long-term complication rates being high but morbidity low, and quality of life remaining improved (PUBMED:26264474). Another study with a 5-year, 7-year, and 10-year follow-up reported success rates of 77.6%, 73.8%, and 73.2%, respectively, indicating that TO-MUS is an effective and safe procedure in the long-term (PUBMED:34783388). However, it is important to note that while the objective cure rate was stable over a 5-year follow-up in one study, subjective cure and patient satisfaction rates decreased significantly due to urge urinary incontinence symptoms (PUBMED:24384878). Another study with a median follow-up of 16 years found that both TVT and TOT had good long-term outcomes in the treatment of SUI and MUI with a predominant stress component, with similar subjective cure rates between the two procedures (PUBMED:37074367). In conclusion, the available evidence suggests that the outcomes of TOT for female SUI are generally durable in the long term, with a majority of patients experiencing sustained improvement in their urinary condition.
Instruction: Is delayed facilitated percutaneous coronary intervention better than immediate in reperfused myocardial infarction? Abstracts: abstract_id: PUBMED:16622610 Is delayed facilitated percutaneous coronary intervention better than immediate in reperfused myocardial infarction? Six months follow up findings. Background: There are several new strategies proposed to improve the outcome of patients with ST-elevation myocardial infarction (STEMI). One approach is the resurgent use of facilitated percutaneous coronary interventions (PCI). Until recently, deciding whether immediate PCI after combined treatment (facilitated PCI) is more appropriate than delayed PCI (short time) has not been investigated. The aim of this study, therefore, was to investigate the outcomes in patients initially successfully treated pharmacologically and immediate PCI &lt; 2 hr, and in patients initially successfully treated with pharmacological therapy and with delayed PCI (12-72 h). Methods: 451 reperfused STEMI patients, aged 18 to 75 years, class I-II Killip, with an acceptable echocardiographic window and admitted within 12 hs of the onset of symptoms were randomized into two groups. All patients had to have successful reperfusion, to receive the combination of a standard tirofiban infusion or abciximab plus half dose rtPA. Thereafter, patients were sub-grouped as follows:group 1 (immediate PCI) patients had PCI within 2 h; and group 2 (delayed PCI) patients in which PCI was performed after 12 hs and within 72 hs. Results: The 225 reperfused (immediate-PCI) and 226 reperfused (delayed-PCI) patients (time from randomization to PCI 165 +/- 37 min in immediate PCI versus 45.1 +/- 20.2 h in delayed PCI group) showed similar results in ejection fraction, CK release and patency of the IRA. In addition, the delayed PCI group showed a significant reduction in ischemic events, restenosis and bleedings (P = 0.005, 0.01, 0.01 respectively) and significant reduced angiographic evidence of thrombus formation in the infarction-related artery (IRA) (p = 0.001). Conclusion: Our data suggest the safety and possible use of delayed facilitated PCI in patients with STEMI, and that delayed PCI in patients treated with combined lytic and IIb/IIIa inhibitors appears to be as effective and possibly superior (reduced ischemic events and repeat PCI) as immediate PCI. The patients in this study were successfully reperfused, with TIMI-3 flow and our data may not apply to patients with TIMI 0-2 flow. This strategy could allow transferring the reperfused patients and performing PCI after hours &lt; 72 hours and not immediately, thereby reducing the number of urgent PCI and costs, obtaining similar results, but mostly causing less discomfort to the patient. Our results had to be interpreted with caution, because current guidelines do not recommend the combined therapy, but suggest further studies. The study was aimed to investigate the outcomes in patients initially successfully treated pharmacologically and immediate PCI &lt; 2 h, and in patients initially successfully treated with pharmacological therapy and delayed PCI (12-72 h). All patients had to have successful reperfusion, to receive the combination of a standard abciximab or tirofiban infusion plus half dose rtPA. Similar results were observed in both groups. Delayed PCI group showed a significant lower incidence in restenosis (0.01), minor bleedings (0.005), ischemic events (0.01) and a reduced angiographic evidence of thrombus formation in IRA (0.001). Our data suggest the safety and possible use of delayed facilitated PCI in patients with STEMI. Our results had to be interpreted with caution, because current guidelines do not recommend the combined therapy, but suggest further studies. abstract_id: PUBMED:25981145 Thrombolysis-facilitated primary percutaneous coronary intervention as a therapeutic approach to stent thrombosis. Background: Stent thrombosis is a clinically significant event occurring days to weeks or, infrequently, months or years after percutaneous coronary intervention (PCI). Current therapeutic approach is immediate PCI aimed to recanalize the occluded artery in order to restore flow and diminish irreversible myocardial damage. Methods: We evaluated the coronary patency, TIMI flow and TIMI myocardial perfusion grade (TMPG) in 6 patients presenting with STEMI due to stent thrombosis treated by thrombolysis followed by immediate PCI. These were compared with control patients treated conventionally by primary PCI. Results: Immediate or early coronary angiography in the treatment group showed good coronary flow in 5 of 6 implicated arteries, whereas immediate angiography in the control group demonstrated 8 completely occluded coronary arteries of 9 with stent thrombosis. The pre-intervention TIMI flow in the control study group was 0.2±0.5 (median-0), and TMPG was 0.1±0.3 (median-0) compared with 2.1±1.1 (median-2.3, p&lt;0.001) and 1.8±1.0 (median-2, p&lt;0.001) in the treatment group, respectively. This striking difference in the rate of coronary patency, pre-procedural TIMI flow and TMPG, however, did not translate into better cardiac function in the treatment group. Conclusions: These findings suggest that thrombolysis-facilitated PCI may confer benefit and need not be considered contraindicated when treating stent thrombosis. This therapeutic approach should be evaluated as a viable therapeutic approach to stent thrombosis. abstract_id: PUBMED:26777321 Immediate Versus Delayed Invasive Intervention for Non-STEMI Patients: The RIDDLE-NSTEMI Study. Objectives: This study aimed to assess the clinical impact of immediate versus delayed invasive intervention in patients with non-ST-segment myocardial infarction (NSTEMI). Background: Previous studies found conflicting results on the effects of earlier invasive intervention in a heterogeneous population of acute coronary syndromes without ST-segment elevation. Methods: We randomized 323 NSTEMI patients to an immediate-intervention group (&lt;2 h after randomization, n = 162) and a delayed-intervention group (2 to 72 h, n = 161).The primary endpoint was the occurrence of death or new myocardial infarction (MI) at 30-day follow-up. Results: Median time from randomization to angiography was 1.4 h and 61.0 h in the immediate-intervention group and the delayed-intervention group, respectively (p &lt; 0.001). At 30 days, the primary endpoint was achieved less frequently in patients undergoing immediate intervention (4.3% vs. 13%, hazard ratio: 0.32, 95% confidence interval: 0.13 to 0.74; p = 0.008). At 1 year, this difference persisted (6.8% in the immediate-intervention group vs. 18.8% in delayed-intervention group; hazard ratio: 0.34, 95% confidence interval: 0.17 to 0.67; p = 0.002). The observed results were mainly attributable to the occurrence of new MI in the pre-catheterization period (0 deaths + 0 MIs in the immediate-intervention group vs. 1 death + 10 MIs in the delayed-intervention group). The rate of deaths, new MI, or recurrent ischemia was lower in the immediate-intervention group at both 30 days (6.8% vs. 26.7%; p &lt; 0.001) and 1 year (15.4% vs. 33.1%; p &lt; 0.001). Conclusions: Immediate invasive strategy in NSTEMI patients is associated with lower rates of death or new MI compared with the delayed invasive strategy at early and midterm follow-up, mainly due to a decrease in the risk of new MI in the pre-catheterization period. (Immediate Versus Delayed Invasive Intervention for Non-STEMI Patients [RIDDLE-NSTEMI]; NCT02419833). abstract_id: PUBMED:37171538 Early versus late delayed percutaneous coronary intervention in elderly patients with ST-segment elevation myocardial infarction. Background And Objective: There are a substantial proportion of elderly patients with ST-segment elevation myocardial infarction (STEMI) miss the optimal time window (12 h from symptom onset) of primary percutaneous coronary intervention (PCI). For these patients, the ideal timing of delayed PCI remains undetermined. Therefore, this study compared the clinical outcomes of early versus late delayed PCI in elderly patients with STEMI. Methods: From January 2014 to September 2019, 512 patients aged ≥ 65 years with STEMI who underwent delayed PCI after 12 h from symptom onset were included and then categorized into the early PCI group (12-48 h, n = 111) and late PCI group (48 h-28 days, n = 401) according to the timing of delayed PCI. Propensity score matching (PSM) was conducted to adjust the confounding factors between groups. The primary endpoint was major adverse cardiac and cerebrovascular events (MACCE), a composite of all-cause death, cardiac death, recurrent myocardial infarction (MI), stroke, and ischemia-driven revascularization. Results: During a mean follow-up of 77 months, 163 (31.8%) patients developed MACCE and 93 (18.2%) died. Early or late delayed PCI did not make a significant difference in clinical outcomes of MACCE (Before PSM: HR 0.773, 95% CI 0.520-1.149, P = 0.203; After PSM: HR 0.869, 95% CI 0.498-1.517, P = 0.622), all-cause death, cardiac death, recurrent MI, stroke, and ischemia-driven revascularization in both overall patients and the PSM cohorts. Conclusion: Early delayed PCI (12-48 h from symptom onset), for elderly patients with STEMI who present &gt; 12 h after symptom onset is not associated with better long-term clinical outcomes compared with late delayed PCI (48 h-28 days). abstract_id: PUBMED:26957418 Comparison of Immediate With Delayed Stenting Using the Minimalist Immediate Mechanical Intervention Approach in Acute ST-Segment-Elevation Myocardial Infarction: The MIMI Study. Background: Delayed stent implantation after restoration of normal epicardial flow by a minimalist immediate mechanical intervention aims to decrease the rate of distal embolization and impaired myocardial reperfusion after percutaneous coronary intervention. We sought to confirm whether a delayed stenting (DS) approach (24-48 hours) improves myocardial reperfusion, versus immediate stenting, in patients with acute ST-segment-elevation myocardial infarction undergoing primary percutaneous coronary intervention. Methods And Results: In the prospective, randomized, open-label minimalist immediate mechanical intervention (MIMI) trial, patients (n=140) with ST-segment-elevation myocardial infarction ≤12 hours were randomized to immediate stenting (n=73) or DS (n=67) after Thrombolysis In Myocardial Infarction 3 flow restoration by thrombus aspiration. Patients in the DS group underwent a second coronary arteriography for stent implantation a median of 36 hours (interquartile range 29-46) after randomization. The primary end point was microvascular obstruction (% left ventricular mass) on cardiac magnetic resonance imaging performed 5 days (interquartile range 4-6) after the first procedure. There was a nonsignificant trend toward lower microvascular obstruction in the immediate stenting group compared with DS group (1.88% versus 3.96%; P=0.051), which became significant after adjustment for the area at risk (P=0.049). Median infarct weight, left ventricular ejection fraction, and infarct size did not differ between groups. No difference in 6-month outcomes was apparent for the rate of major cardiovascular and cerebral events. Conclusions: The present findings do not support a strategy of DS versus immediate stenting in patients with ST-segment-elevation infarction undergoing primary percutaneous coronary intervention and even suggested a deleterious effect of DS on microvascular obstruction size. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01360242. abstract_id: PUBMED:26851700 Intracoronary injection of tirofiban prevents microcirculation dysfunction during delayed percutaneous coronary intervention in patients with acute myocardial infarction. Objective: To determine whether tirofiban can prevent microcirculation dysfunction during delayed percutaneous coronary intervention (PCI) of spontaneously recanalized and severe narrowing coronary artery in patients with acute myocardial infarction. Methods: 62 patients who have a single angiographically confirmed narrowing culprit coronary artery for more than 75% between 7 and 14 days after the onset of acute myocardial infarction were randomly divided into the tirofiban group (32 cases) and the placebo group (30 cases). All the patients received measurement of the index of microcirculatory resistance (IMR) before tirofiban/placebo administration and PCI. After PCI, IMR value was measured again. Results: There was no significant variation between the two groups before PCI (11.67 ± 6.45 of placebo group vs. 14.65 ± 12.45 of tirofiban group, P=0.158). After PCI, the IMR value of the tirofiban group is significantly lower than that of the placebo group (23.63 ± 9.91 of placebo group vs. 16.75 ± 9.98 of tirofiban group, P=0.008). Conclusions: Intracoronary injection of tirofiban can significantly prevent the abnormal increase of IMR value during delayed PCI in patients with acute myocardial infarction. abstract_id: PUBMED:28581994 Reperfusion Options for ST Elevation Myocardial Infarction Patients with Expected Delays to Percutaneous Coronary Intervention. Primary percutaneous coronary intervention (PCI) is the preferred reperfusion strategy for ST elevation myocardial infarction (STEMI). However, only one-third of hospitals in the US have PCI availability 24/7. For non-PCI hospitals, transfer remains the optimal strategy. For expected delays of greater than 120 minutes, a pharmacoinvasive strategy is recommended. In patients with evidence of failed reperfusion or hemodynamic instability, immediate rescue PCI should be performed. All other patients should undergo routine cardiac catheterization and PCI within 24 hours after fibrinolysis. A pharmacoinvasive strategy is best implemented within an organized regional STEMI system with prospective standardized transfer protocols. abstract_id: PUBMED:11050381 Facilitated percutaneous coronary intervention for acute myocardial infarction. Facilitated percutaneous coronary intervention is a treatment strategy for acute myocardial infarction in which patients are given medications in the emergency department that open or partially open infarct-related coronary arteries. The patients are then taken to the cardiac catheterization laboratory for early angiography and angioplasty or placement of a coronary artery stent. Preliminary evidence suggests that this treatment strategy may offer outcomes similar to or better than primary angioplasty and superior to solitary fibrinolytic therapy. In addition, the treatment can be started even in hospitals that do not have primary intervention capability. Currently, large-scale clinical trials are assessing the impact of the facilitated percutaneous coronary intervention treatment strategy on the clinical outcomes of patients with acute myocardial infarction. abstract_id: PUBMED:25434515 Percutaneous coronary intervention in stable coronary artery disease Percutaneous coronary intervention is a well-established symptomatic therapy of stable coronary artery disease. Using a literature search with special emphasis on the newly-published FAME 2 trial data, the author wanted to explore why percutaneous coronary intervention fails to reduce mortality and myocardial infarction in stable coronary artery disease, as opposed to surgical revascularisation. In the FAME 2 trial, fractional flow reserve-guided percutaneous coronary intervention with second generation drug eluting stents showed a significant reduction in the primary composite endpoint of 2-year mortality, myocardial infarction and unplanned hospitalization with urgent revascularisation as compared to medical therapy alone. In addition, landmark analysis showed that after 8 days, mortality and myocardial infarction were significantly reduced. The author concludes that percutaneous coronary intervention involving fractional flow reserve guidance and modern stents offers symptomatic, as well as prognostic benefit. abstract_id: PUBMED:30832942 Percutaneous Coronary Intervention for the Treatment of Spontaneous Coronary Artery Dissection. Most patients presenting with myocardial infarction owing to spontaneous coronary artery dissection can be managed conservatively. Revascularization should be pursued in the presence of high-risk features. Percutaneous coronary intervention is preferred over coronary artery bypass grafting, except in left main dissection. Interventionists should exercise extreme caution and meticulous techniques. Using a cutting balloon to fenestrate and decompress the false lumen is appealing and may avoid the need for long stents. Other percutaneous approaches may also be feasible, and interventionists should be familiar with these various approaches when embarking on spontaneous coronary artery dissection percutaneous coronary intervention. Answer: The question of whether delayed facilitated percutaneous coronary intervention (PCI) is better than immediate PCI in reperfused myocardial infarction has been explored in several studies, with varying results. One study found that delayed PCI (12-72 hours after initial pharmacological treatment) in patients with ST-elevation myocardial infarction (STEMI) who were successfully reperfused showed similar results in ejection fraction, CK release, and patency of the infarction-related artery (IRA) compared to immediate PCI (<2 hours). Additionally, the delayed PCI group had a significant reduction in ischemic events, restenosis, and bleeding, as well as reduced angiographic evidence of thrombus formation in the IRA (PUBMED:16622610). In contrast, the RIDDLE-NSTEMI study on non-ST-segment myocardial infarction (NSTEMI) patients found that immediate invasive intervention (<2 hours) was associated with lower rates of death or new myocardial infarction compared to delayed invasive intervention (2 to 72 hours), mainly due to a decrease in the risk of new myocardial infarction in the pre-catheterization period (PUBMED:26777321). Another study, the MIMI trial, compared immediate stenting to delayed stenting (24-48 hours) in patients with acute ST-segment-elevation myocardial infarction undergoing primary PCI. The findings did not support a strategy of delayed stenting and even suggested a deleterious effect of delayed stenting on microvascular obstruction size (PUBMED:26957418). For elderly patients with STEMI who missed the optimal time window for primary PCI, a study found no significant difference in long-term clinical outcomes between early delayed PCI (12-48 hours) and late delayed PCI (48 hours-28 days) (PUBMED:37171538). In the context of stent thrombosis, a study suggested that thrombolysis-facilitated PCI may confer benefit and should be evaluated as a viable therapeutic approach (PUBMED:25981145). Overall, the evidence suggests that the benefits of delayed versus immediate PCI may vary depending on the specific circumstances of the myocardial infarction, such as the type of myocardial infarction (STEMI vs. NSTEMI), the timing of the intervention, and the patient population. Current guidelines and further studies are needed to determine the optimal strategy for individual patients (PUBMED:16622610, PUBMED:26777321, PUBMED:37171538, PUBMED:26957418, PUBMED:25981145).
Instruction: Does gender affect neonatal hyperbilirubinemia in low-birth-weight infants? Abstracts: abstract_id: PUBMED:15730604 Does gender affect neonatal hyperbilirubinemia in low-birth-weight infants? Background: Neonatal mortality and morbidity are gender-biased in low-birth-weight (LBW) infants. The male disadvantage theory has been suggested to be responsible for these maturational differences. Objective: To examine the impact of gender on neonatal hyperbilirubinemia. Design/methods: A retrospective observational study. Data on all LBW infants admitted to George Washington University neonatal intensive care unit and surviving for &gt;48 hrs from January 1992 to March 2003 were analyzed. Males and females were compared for gestational age, birth weight, race, Apgar scores at 1 and 5 mins, peak bilirubin levels, sepsis, and intraventricular hemorrhage (IVH). Significant differences were entered in a regression model to detect the influence of gender on bilirubin (Bili). Analysis was repeated after stratification of infants into: group A, &lt;1000 g; group B, 1000-1499 g; and group C, 1500-2499 g. Results: A total of 840 infants were included in this study. When comparing males (n = 407) with females (n = 433), significant differences were detected in birth weight (1,539 +/- 541 vs. 1,428 +/- 549 g; p = .003), IVH (14.2% vs. 9%; p = .025), and Bili (10.1 +/- 3.0 vs. 9.2 +/- 2.8 mg%; p &lt; .001). No differences were detected in gestational age, sepsis, or Apgar 1 and 5. Difference in Bili for the entire group remained significant in the regression model (regression coefficient [RC] = 0.79 +/- 0.22; p &lt; .001). In subgroup analyses: group A Bili (8.4 +/- 2.3 vs. 8.0 +/- 2.0; p = .14) and group B Bili (9.0 +/- 2.1 vs. 9.2 +/- 2.2; p = .51) did not differ in bivariate or multivariate analyses. In group C, Bili was (11.3 +/- 3.1 vs. 10.1 +/- 3.3; p &lt; .001) and remained the only significant difference in the regression model (RC = 1.19 +/- 0.37; p = .001). Conclusions: Bili in LBW infants is significantly higher in males when compared with females. After stratification to birth weight subgroups, significance is retained in the 1500- to 2499-g group after logistic regression analysis. Bili levels in infants &lt;1500 g are influenced more significantly by factors other than gender, such as sepsis and IVH. abstract_id: PUBMED:20874740 Early corticosteroid treatment does not affect severity of unconjugated hyperbilirubinemia in extreme low birth weight preterm infants. Aim: To determine the relationship between early postnatal dexamethasone (DXM) treatment and the severity of hyperbilirubinemia in extreme low birth weight (ELBW) preterm infants. Methods: In 54 ELBW preterm infants, total serum bilirubin concentrations (TSB) and phototherapy (PT) data during the first 10 days were evaluated retrospectively. ELBW infants had participated in a randomized controlled trial of early DXM treatment which aimed to assess effects on chronic lung disease. Infants had been treated with DXM (0.25 mg/kg twice daily at postnatal day 1 and 2) or with placebo (normal saline). Analysis was performed on an intention to treat basis. Results: Twenty-five Infants had been randomized into the DXM group; 29 into the placebo group. Mean (±SD) TSB [120 (±19) μmol/L vs. 123 (±28) μmol/L, DXM versus placebo, respectively] and maximum TSB [178 (±23) μmol/L vs. 176 (±48), DXM versus placebo, respectively] concentrations were similar. TSB concentrations peaked 30 h earlier in the DXM group (p ≤ 0.05). The need for PT as well as the duration of PT was similar in both groups. Conclusions: Early DXM treatment does not affect the severity of neonatal hyperbilirubinemia in ELBW preterm infants. Our results seem compatible with the concept that factors other than bilirubin conjugation capacity are important for the pathophysiology of neonatal jaundice in ELBW preterm infants. abstract_id: PUBMED:15724854 Morbidity and mortality characteristics of infants hospitalized in the Pediatrics Department of the largest Turkish military hospital in 2001. A descriptive study was conducted in the pediatric inpatient unit of Gulhane Military Medical Academy, to investigate the morbidity and mortality characteristics of 532 infants hospitalized between January 1 and December 31, 2001, for treatment purposes. Of the study participants, 55.8% were boys and 44.2% were girls. The most common cause of hospitalization was neonatal hyperbilirubinemia (19.7%). The most common admission month was January (12.4%). Of 532 infants, 510 (95.9%) were discharged, whereas 22 patients died in the hospital. Twenty-one patients died in the neonatal period, and respiratory distress syndrome and neonatal sepsis were identified as the most common causes of death. Our finding of associations between male gender and low birth weight and hospital death is consistent with previous knowledge. Despite the high frequencies of pneumonia and gastroenteritis as admission diagnoses, the finding of only one pneumonia-related death and no gastroenteritis-related deaths in the study population is pleasing. abstract_id: PUBMED:33213255 The accuracy of transcutaneous bilirubin to identify hyperbilirubinemia in jaundiced neonates. Objective: To compare transcutaneous bilirubin (TcB) to total serum bilirubin (TSB) in jaundiced neonates by using a JM-103 bilirubinometer. Methods: We conducted a retrospective study in jaundiced infants ≥35 weeks' gestation. Infants' and maternal information, TcB measurements, and TSB levels were collected. The correlation and consistency between TcB and TSB were analyzed by Pearson's correlation analysis and the Bland-Altman plot. Multivariate regression analysis was used to identify characteristics associated with the difference between TcB and TSB, underestimation by TcB and overestimation by TcB. Results: A total of 787 paired samples were collected. There was a strong correlation between TcB and TSB with a Pearson correlation coefficient of 0.708 (p &lt; .001). In preterm infants, outpatients and infants with high TSB levels, the TcB measurements were less precise. The TcB-TSB difference was significantly associated with gender, maternal hypertension disease, ABO hemolytic disease, and outpatient setting (p &lt; .05). Only the outpatient setting had statistical significance in the underestimation of TSB ≥2 mg/dL. Conclusions: Generally, TcB by a JM-103 bilirubinometer had a good correlation and consistency with the TSB values. However, small for gestational age, postnatal age of hour, ABO hemolytic disease, and outpatient setting may lead to inconsistencies between the TcB and TSB. abstract_id: PUBMED:33080738 Early onset neonatal bacterial meningitis in term infants: the clinical features, perinatal conditions, and in-hospital outcomes: A single center retrospective analysis. To compare clinical features and outcomes between early and late onset of neonatal bacterial meningitis (NBM).Patients were allocated in 2 groups: early onset neonatal bacterial meningitis (ENBM) and late onset neonatal bacterial meningitis (LNBM). Data analysis includes asphyxia at birth, premature rupture of membranes (PROM), amnionitis, amniotic fluid contamination, maternal age, clinical manifestations of the patients, laboratory findings, radiological results, complications related to meningitis, duration of hospitalization and therapeutic effect.There was no difference in gender, birth weight, gestational age, and incidence of asphyxia between 2 groups. The incidence of PROM, chorioamnionitis and amniotic fluid contamination, proportion of small-for-gestational-age infants, convulsions, intracranial hemorrhage, hyperbilirubinemia, and the protein level of cerebrospinal fluid in ENBM group were higher than that in LNBM group (P &lt; .05); the proportion of fever, elevated C-reaction protein and the abnormal of platelet counts in LNBM group was higher than that in ENBM group (P &lt; .05). There was no difference in the incidence of complications and hospitalization time between 2 groups. The rate of effective treatment in LNBM group was significantly higher than that in ENBM group (P &lt; .05).Patients with conditions of amniotic fluid contamination, chorioamnionitis, small-for-gestational-age and PROM might be more prone to develop ENBM and ENBM had worse outcomes than LNBM. abstract_id: PUBMED:32609896 Identification of Perinatal Risk Factors for Auditory Neuropathy Spectrum Disorder. Objectives/hypothesis: To identify medical risk factors associated with auditory neuropathy spectrum disorder (ANSD). Study Design: Retrospective case-control study. Methods: During a 2-year period (2013-2014) patients with newly diagnosed ANSD were identified at a tertiary care facility. Twenty-two patients (n = 22) were identified aged 0.5 to 8.1 years. There were 15 males and seven females. Sixteen had bilateral, four had left-sided, and two had right-sided ANSD. Two age-matched, side-matched, and gender-matched control groups were then collected. The first group was 22 normal-hearing children (n = 22). The second was 22 children with sensorineural hearing loss (SNHL) (n = 22) who did not meet the criteria for ANSD. The chart of each subject was reviewed for the following five-predictor variables: prematurity, low birth weight, jaundice, use of mechanical ventilation, and administration of ototoxic medications. Analysis of variance was performed to analyze the prevalence of perinatal risk factors among the three groups. Multivariate linear regression was then applied. Results: When comparing the ANSD group to both the normal-hearing and SNHL groups, the subjects with ANSD had statistically significant higher rates of prematurity, low birth weight, jaundice, and mechanical ventilation. Multiple regression analysis was performed to identify predictors of ANSD compared to each control group individually. Jaundice in the first month of life approached significance when comparing the ANSD group to the normal-hearing group, and was the only medical risk factor found to be statistically significant when comparing the ANSD group to the SNHL group. Conclusions: A history of neonatal hyperbilirubinemia was significantly more common in children with ANSD compared to children with severe SNHL. Level Of Evidence: 3 Laryngoscope, 131:671-674, 2021. abstract_id: PUBMED:30961475 Neonatal hyperbilirubinaemia is associated with a subsequent increased risk of childhood-onset type 1 diabetes. Background: Type 1 diabetes (T1D) is one of the most common chronic diseases of childhood. Whether neonatal hyperbilirubinaemia increases the risk of T1D remains unclear.Aim: To estimate the association between neonatal hyperbilirubinaemia and phototherapy and the risk of T1D using a large nationwide population-based cohort.Methods: This retrospective study was conducted using data from the National Health Insurance Research Database in Taiwan from 2001 until 2005. Altogether, 23,784 neonates aged &lt;30 days diagnosed with hyperbilirubinaemia and 47,568 neonates without hyperbilirubinaemia were enrolled and frequency-matched to the hyperbilirubinaemia group by gender, age, parental occupation and urbanisation. Cox regression analysis was performed to estimate hazard ratios (HRs) and 95% confidence intervals (CI).Results: Of the 71,352 neonates included, those with hyperbilirubinaemia had a higher incidence of T1D (4.76 vs 2.68 per 10,000 person-years, p &lt; 0.001) and an earlier mean age at onset of T1D [4.13 (2.80) vs 5.80 (2.67) years, p &lt; 0.001] than those without hyperbilirubinaemia. After adjusting for confounding factors in multivariable analysis, the neonates with hyperbilirubinaemia had a 66% increased risk of developing T1D (HR 1.66, 95% CI 1.26-2.18). Girls had a 1.41-fold (HR 1.41, 95% CI 1.10-1.82) greater risk of T1D than boys. Additionally, neonates with a history of perinatal complications (HR 1.66, 95% CI 0.99-2.80) and neonatal infections (HR 2.13, 95% CI 1.45-3.15) had an increased subsequent risk of T1D.Conclusions: The results suggest that neonatal hyperbilirubinaemia is associated with a subsequently increased risk of childhood-onset T1D.Abbreviations: T1D, type 1 diabetes; CI, confidence interval; NHI, national health insurance; NHIA, National Health Insurance Administration; NHIRD, National Health Insurance Research Database; ICD-9-CM, International Classification of Diseases, Ninth Revision, Clinical Modification; G6PD, glucose-6-phosphate dehydrogenase; LBW, low birthweight; HRs, hazard ratios. abstract_id: PUBMED:32269147 Blue LED phototherapy in preterm infants: effects on an oxidative marker of DNA damage. Background: Phototherapy is used on the majority of preterm infants with unconjugated hyperbilirubinaemia. The use of fluorescent tube phototherapy is known to induce oxidative DNA damage in infants and has largely been replaced by blue light-emitting diode phototherapy (BLP). To date, it is unknown whether BLP also induces oxidative DNA damage in preterm infants. Objective: To determine whether BLP in preterm infants induces oxidative DNA damage as indicated by 8-hydroxy-2'deoxyguanosine (8-OHdG). Design: Observational cohort study. Methods: Urine samples (n=481) were collected in a cohort of 40 preterm infants (24-32 weeks' gestational age) during the first week after birth. Urine was analysed for the oxidative marker of DNA damage 8-OHdG and for creatinine, and the 8-OHdG/creatinine ratio was calculated. Durations of phototherapy and levels of irradiance were monitored as well as total serum bilirubin concentrations. Results: BLP did not alter urinary 8-OHdG/creatinine ratios (B=0.2, 95% CI -6.2 to 6.6) at either low (10-30 µW/cm2/nm) or high (&gt;30 µW/cm2/nm) irradiance: (B=2.3, 95% CI -5.7 to 10.2 and B=-3.0, 95% CI -11.7 to 5.6, respectively). Also, the 8-OHdG/creatinine ratios were independent on phototherapy duration (B=-0.1, 95% CI -0.3 to 0.1). Conclusions: BLP at irradiances up to 35 µW/cm2/nm given to preterm infants ≤32 weeks' gestation does not affect 8-OHdG, an oxidative marker of DNA damage. abstract_id: PUBMED:19242487 Unbound bilirubin predicts abnormal automated auditory brainstem response in a diverse newborn population. Objective: The objective of this study was to determine if plasma unbound or 'free' bilirubin concentration (B(f)) measured during the first 30 days of life is associated with subsequent abnormal hearing screening testing by automated auditory brainstem response (AABR) in a diverse population of newborns. Study Design: An observational study of newborns receiving AABR, plasma total bilirubin concentration (TBC) and B(f) measurements and without underlying conditions known to affect hearing was conducted. Logistic regression was used to determine associations between abnormal AABR and B(f) or TBC. The impacts of a variety of clinical factors on the regression model were also assessed. Result: A total of 191 patients with birth weights and gestations ranging from 406 to 4727 g and 24 to 42 weeks, respectively, were studied. Among them, 175 (92%) had normal (bilateral PASS) AABR and 16 had abnormal AABR (6 had unilateral REFER AABR, and 10 had bilateral REFER AABR). Mean TBC was not significantly different in babies with normal or abnormal AABR, but mean B(f) was greater in the latter group (1.76 versus 0.93 microg per 100 ml, respectively, P=0.012). B(f), but not TBC, was associated with an abnormal AABR (B(f) adjusted odds ratio 3.3, 95% CI 1.8 to 6.1). Comparing receiver-operating characteristics curves, the B(f)/TBC ratio was a better predictor of an abnormal AABR than B(f) alone. Intraventricular hemorrhage was the only confounding clinical variable. Conclusion: An abnormal AABR is associated with an elevated B(f) or B(f)/TBC ratio, but not the TBC alone. The prevalence of bilirubin neurotoxicity as a cause of audiological dysfunction may be underestimated if the TBC alone is used to assess the severity of newborn jaundice. abstract_id: PUBMED:35832130 Perinatal Outcomes and Related Risk Factors of Single vs Twin Pregnancy Complicated by Gestational Diabetes Mellitus: Meta-Analysis. Objective: Perinatal outcomes and related risk factors of single vs twin pregnancy complicated with gestational diabetes mellitus (GDM) were clarified, providing evidence for developing preventive measures. Methods: The Chinese National Knowledge Infrastructure (CNKI), China Biology Medicine (CBM), CQVIP, Wanfang, and PubMed databases were searched for published research on the perinatal outcomes and risk factors of single and twin pregnancy complicated by GDM from 2000 to 2021. The quality of the included literature was evaluated according to the Newcastle-Ottawa Scale (NOS). Meta-analysis of the included literature was conducted using RevMan5.3 software. Results: Relative to a single pregnancy group, infertility, gestational weight gain, and family history of diabetes presented statistical significance in the twin pregnancy group (P &lt; 0.05); gestational age at delivery, cesarean section, preterm birth &lt; 37 weeks, and preeclampsia presented statistical significance in the twin pregnancy group (P &lt; 0.05); and neonatal birth weight, small for gestational age (SGA), neonatal asphyxia, neonatal hypoglycemia, neonatal respiratory distress syndrome (NRDS), neonatal hyperbilirubinemia, and neonatal death presented statistical significance in the twin pregnancy group (P &lt; 0.05). Conclusion: Infertility, prenatal weight gain, and diabetes in the family are all risk factors for postpartum impaired glucose metabolism in pregnant women with GDM who are carrying twins. The gestational age at delivery, cesarean section, preterm birth &lt; 37 weeks, and preeclampsia of twin pregnant women with diabetes will affect the perinatal status of twin pregnant women. Neonatal birth weight, SGA, neonatal asphyxia, neonatal hypoglycemia, NRDS, neonatal hyperbilirubinemia, neonatal death, etc. should be paid special attention in the perinatal process. Answer: Yes, gender does affect neonatal hyperbilirubinemia in low-birth-weight (LBW) infants. A study analyzing data from LBW infants admitted to George Washington University neonatal intensive care unit found that bilirubin levels were significantly higher in males when compared with females. This difference remained significant after logistic regression analysis, particularly in the subgroup of infants weighing between 1500 to 2499 grams. However, for infants weighing less than 1500 grams, bilirubin levels were influenced more by factors other than gender, such as sepsis and intraventricular hemorrhage (IVH) (PUBMED:15730604).
Instruction: Lower urinary tract reconstruction for duplicated renal units with ureterocele. Is excision of the ureterocele with reconstruction of the bladder base necessary? Abstracts: abstract_id: PUBMED:14501647 Lower urinary tract reconstruction for duplicated renal units with ureterocele. Is excision of the ureterocele with reconstruction of the bladder base necessary? Purpose: Current practice in reconstruction of the lower urinary tract for duplicated renal systems with an associated ureterocele is excision of the ureterocele with reconstruction of the bladder and a common sheath ureteroneocystostomy. For a nonfunctioning upper pole treatment is partial nephroureterectomy. We postulate that lower urinary tract reconstruction can be performed successfully through an extravesical approach without excision of the ureterocele or reconstruction of the bladder base. We present our experience with that approach. Materials And Methods: Between 1996 and 2001, 60 patients presented with the diagnosis of ureterocele and obstruction of the upper pole ureter. Partial nephrectomy was performed in 12 cases of which 4 had reflux to the lower pole moiety. Upper pole only dismembered ureteroneocystostomy was performed in 7 of 15 cases reconstructed using the extravesical approach. Results: Average postoperative stay was 3.7 days. The Foley catheter was removed within 24 to 48 hours. Postoperative ultrasound showed decompression of the obstructed system and the ureterocele. Reflux was corrected in all patients. Flow rate with measurement of post-void residual 6 weeks postoperatively in toilet trained children showed complete bladder emptying. Conclusions: Lower urinary tract reconstruction for duplicated renal systems with obstruction of the upper pole can be accomplished safely with decreased morbidity through the extravesical approach without excision of the ureterocele or reconstruction of the bladder base. Moreover, in instances when there is no reflux to the lower pole moiety, upper pole only extravesical ureteroneocystostomy can be performed. abstract_id: PUBMED:36241446 Lower Urinary Tract Reconstruction for Ectopic Ureterocele: What Happens in the Long-term Follow-up? Background: We aimed to analyze the long-term clinical and lower urinary tract function outcomes in children with duplex system ectopic ureterocele who underwent ureteroneocystostomy and ureterocelectomy. Methods: Fifty-one patients (28 females, 23 males) who underwent a series of surgical interventions including lower urinary tract reconstruction in childhood for duplex system ectopic ureterocele in our center between 1998 and 2019, were retrospectively reviewed. The demographic and clinical data, surgical history, and the indication for ureterocelectomy were noted. Lower urinary tract dysfunction (LUTD) status was assessed through dysfunctional voiding symptom scores (DVSS) and uroflowmetry in all patients at the last follow-up. The clinical outcomes, and LUTD were evaluated. Results: At the last visit at a mean follow-up of 117.18 ± 57.87 months after ureterocelectomy, ipsilateral persistent lower pole VUR was detected in 5.6% (3/54 renal units, 2 females and 1 male) of the cases, who were treated using the subureteric injection. Abnormal DVSS (median 11, range 9-15) was detected in 27.4% (14/51 pts) of the patients. Out of these, 57.1% (8/14 pts) had storage symptoms, 35.7% (5/14 pts) had voiding symptoms, and 7.1% (1/14 pts) had both storage and voiding symptoms while 71.4%(10/14 pts) had abnormal uroflowmetry findings (plateau shaped flow curve in 2, staccato shaped curve with sustained EMG activity in 3, tower shaped curve in 2, interrupted shaped curve in 3 patients). Five patients had elevated residual volume. Anticholinergics were administered to six patients who had overactive bladder symptoms. In addition, two girls required open bladder neck reconstruction due to stress incontinence caused by bladder neck insufficiency. Conclusions: Our findings showed that clinical success was achieved using the lower urinary tract reconstruction with no need for re-operation in 90.2% of patients with duplex system ectopic ureterocele. However, LUTD was present in 27.4% of our patients in the long-term follow-up. Therefore, LUTD should be carefully assessed in the long-term follow-up of these patients. abstract_id: PUBMED:18036040 Surgical treatment for ureterocele with special reference to lower urinary tract reconstruction. Objectives: We reviewed the results of surgical treatment for children with ureterocele, especially addressing the importance of the lower urinary tract reconstruction. Methods: We present medical records of 91 children with ureterocele (31 with the intravesical type, and 60 with the ectopic type) treated during the last 14 years. As the initial treatment, we carried out transurethral incision of the ureterocele (TUI). In patients with persistent reflux, breakthrough urinary tract infection (UTI), or signs of bladder outlet obstruction due to a collapsed cele wall, we recommended that patients undergo lower urinary tract reconstruction irrespective of the renal function involved in ureterocele. The average follow-up period was 5 years (ranging 1 year and 6 months to 14 years). Results: Transurethral incision of the ureterocele was carried out as the initial treatment on 68 patients (75%). A total of 34 patients (21 intravesical [68%] and 13 ectopic [22%]) were followed medically after TUI alone. Reconstruction of the lower urinary tract was carried out in 59 patients (65%). Nephroureterectomy combined with bladder level reconstruction was carried out in four children with single system and non-functioning kidney. Follow-up voiding cystourethrography showed that only one girl had reflux, which disappeared after the first follow-up examination. Postoperative UTI occurred in 12 patients (20%) with ectopic ureterocele. Voiding dysfunction was suspected in eight patients. Conclusion: We believe that the primary objective for patients with ureterocele, especially of the ectopic type, is to reconstruct the original pathology of the lower urinary tract that may give rise to reflux, obstruction, or abnormalities of urination. Although surgery at the bladder level can be challenging, the lower urinary tract reconstruction successfully corrects the vesicoureteral reflux and bladder outlet pathology under a cosmetically acceptable incision. abstract_id: PUBMED:15592074 Primary lower urinary tract reconstruction for nonfunctioning renal moieties associated with obstructing ureteroceles. Purpose: Upper pole heminephrectomy is the conventional treatment for severely compromised nonfunctioning renal units associated with ureteroceles due to the potential morbidity of leaving a nonfunctioning renal moiety in place. This approach often fails to address the pathological anatomical defect present at the bladder level, and during long-term followup the majority of patients require subsequent lower tract surgery due to persistent ureterocele, new or persistent vesicoureteral reflux, or recurrent infections. We determined the success of primary lower urinary tract reconstruction for nonfunctioning renal moieties and the morbidity associated with leaving nonfunctioning renal units in situ. Materials And Methods: We present the collective experience of 2 institutions using definitive lower urinary tract reconstruction without upper tract ablative surgery in 16 patients with duplex collecting systems and an associated nonfunctioning renal moiety due to obstructing ureteroceles. Results: At a mean followup of 62 months upper tract dilatation was decreased or completely resolved in all patients, no loss of renal function was present and 15 of 16 patients (94%) had no evidence of persistent reflux. Postoperative complications in the form of a urinary tract infection occurred in 2 of 16 patients (13%). No patient was febrile. No patient had development of hypertension, proteinuria or tumor during followup. Conclusions: We believe that primary, single stage, lower urinary tract reconstruction in children with severely compromised or nonfunctioning renal moieties damaged by ureteroceles is superior to upper urinary tract ablative surgery, successfully correcting the obstructive or refluxing pathology with minimal morbidity and risk during long-term followup. abstract_id: PUBMED:15756063 Lower urinary tract reconstruction in ectopic ureteroceles. Objectives: The management of ectopic ureteroceles is a challenging entity in pediatric urology. In our study, we aimed to determine the outcome after lower urinary tract reconstruction in ectopic ureteroceles. Materials And Methods: A total of 18 (12 girls, 6 boys) children with ectopic ureteroceles, treated between 1993 and 2003 by complete reconstruction, were enrolled in the study and their records were retrospectively reviewed. Results: Ureterocele was found to be unilateral in 16 and bilateral in 2 children. Four patients had been previously managed by endoscopic interventions and 1 patient underwent partial nephrectomy. Ureterocelectomy was performed on 20 renal units, and of these 20 renal units, 9 underwent heminephroureterectomy, 1 ureterectomy, and 3 (with single system ureteroceles) nephroureterectomy. Seven (35%) renal units with adequate function underwent ureteral reimplantations. After a mean follow-up of 5.4 years (range 4 months to 9.5 years), no patient required a second procedure. Contralateral reflux that developed in 2 children after surgical treatment resolved spontaneously after 1 year of follow-up. Conclusion: In the management of ectopic ureterocele, lower urinary tract reconstruction is an effective treatment alternative. According to the functional status of the renal parenchyma involved, the surgical procedure was limited to the lower urinary tract in 35% of the cases. abstract_id: PUBMED:10037414 Renal transplantation in children with severe lower urinary tract dysfunction. Purpose: Renal transplantation in children with end stage renal disease due to congenital urological malformations has traditionally been associated with a poor outcome compared to transplantation in those with a normal urinary tract. In addition, the optimal urological treatment for such children remains unclear. To address these issues, we retrospectively reviewed our experience with renal transplantation in this population. Materials And Methods: Between 1986 and 1998, 12 boys and 6 girls a mean age of 8.4 years with a severe dysfunctional lower urinary tract underwent a total of 15 living related and 6 cadaveric renal transplantations. Urological anomalies included posterior urethral valves in 8 cases, urogenital sinus anomalies in 4, the prune-belly syndrome in 2, and complete bladder duplication, ureterocele, lipomeningocele and the VATER syndrome in 1 each. In 11 children (61%) bladder augmentation or continent urinary diversion was performed, 2 (11%) have an intestinal conduit and 5 (28%) have a transplant into the native bladder. Results: In this group patient and overall allograft survival was 100 and 81%, respectively. These values were the same in all children who underwent renal transplantation at our center during this era. In the 17 children with a functioning transplant mean serum creatinine was 1.4 mg./dl. Technical complications occurred in 4 patients (22%), including transplant ureteral obstruction in 2 as well as intestinal conduit stomal stenosis and Mitrofanoff stomal incontinence. Conclusions: Renal transplantation may be successfully performed in children with end stage renal disease due to severe lower urinary tract dysfunction. Bladder reconstruction, which may be required in the majority of these cases, appears to be safe when performed before or after the transplant. A multidisciplinary team approach to surgery is advantageous. abstract_id: PUBMED:9334644 Results of urinary tract reconstruction in boys with end stage bladders resulting from obstructive uropathy. Purpose: The records of 17 boys who underwent reconstruction of the lower urinary tract because of end stage bladders resulting from obstructive uropathy were reviewed to evaluate the degree to which they void and factors that favorably impacted outcome. Materials And Methods: A total of 17 boys with end stage bladders resulting from obstructive uropathy, including posterior urethral valves in 15 and obstructing ureteroceles in 2, underwent reconstruction of the urinary tract. Procedures consisted of augmentation (autoaugmentation in 3, ileocystoplasty in 2, ileocecal cystoplasty in 1 and colocystoplasty in 10), an ileal ureter in 2, bladder neck revision in 15 and appendicovesicostomy in 8. Results: All patients achieved a low pressure reservoir of adequate volume with stable or improved urinary tracts. A total of 13 patients voided sufficiently well to maintain a favorable life-style, including 5 who were completely catheter-free. With respect to the variables involved in reconstruction, bladder neck revision seemed to correlate best with a good outcome. Conclusions: In most cases end stage bladders in boys with obstructive uropathy can be reconstructed not only to protect the urinary system, but to preserve some degree of voiding potential as well. Bladder neck revision appears to be particularly helpful in achieving the latter goal. When voiding is inadequate, appendicovesicostomy provides easy access to the bladder for intermittent catheterization. abstract_id: PUBMED:18721985 Complete excision or marsupialization of ureteroceles: does choice of surgical approach affect outcome? Purpose: Two accepted open surgical techniques exist for lower urinary tract reconstruction for ureteroceles, that is complete excision/enucleation and marsupialization/partial excision. To our knowledge it is currently unknown whether 1 method offers better clinical outcomes. We sought to answer this question. Materials And Methods: We retrospectively reviewed the records of patients who underwent open surgical repair for ureterocele at 3 academic institutions. The 2 groups (complete excision vs marsupialization) were compared for each clinical outcome, including ongoing vesicoureteral reflux, new bladder diverticulum, hydronephrosis, continence, urinary tract infection and voiding dysfunction. Each clinical outcome was analyzed for independence from the surgical method. Results: A total of 33 cases of complete excision and 24 of marsupialization were collected. The excision and marsupialization groups were similar with respect to patient age at surgery, gender and average followup (55 and 38 months, respectively). Most ureteroceles were unilateral and part of a duplicated system. Approximately half were intravesical and vesicoureteral reflux was the primary indication for surgery. In the excision group 82% of ureteroceles were previously punctured but only 50% were punctured in the marsupialization group (p = 0.09). Postoperatively the groups did not differ significantly in terms of ongoing vesicoureteral reflux (13% and 14%), bladder diverticulum (3% and 9%), continence (100% and 95%), urinary tract infection (43% and 46%) or voiding dysfunction (24% and 25%, respectively). They varied significantly in terms of improved or stable hydronephrosis (70% vs 91%, p = 0.05). Conclusions: When performing lower urinary tract reconstruction for ureterocele, this study demonstrates that the choice of complete excision vs marsupialization/partial excision does not appear to differentially affect clinical outcomes. abstract_id: PUBMED:29980359 Long-term renal outcome in infants with congenital lower urinary tract obstruction. Introduction: Congenital Lower Urinary Tract Obstructions (LUTO) is a heterogeneous group of diseases in which urine elimination is obstructed at the level of bladder neck or urethra. The aim of the study is to evaluate the long-term renal outcome of patients suffering of congenital LUTO. Patients And Method: We retrospectively reviewed patients with congenital LUTO. All patients had at least 1 year follow-up. Data on surgery, renal imaging and Schwartz estimate creatinine clearance were collected. Incidence of Chronic Renal Disease (CRD) is presented with Kaplan-Meier method. Results: 40 patients were included, 23 patients with Posterior Urethral Valve (PUVs) and 17 patients with other aetiologies: anterior urethral valve (2), urethral atresia (2), urethral stenosis (2), cloacal malformations (2), obstructive ureterocele (1), bladder trigone malformation (1) and neonatal bladder-sphincter dysfunction without neurological abnormalities (7). Incidence of CRD at age 10 years was 37% in congenital LUTO, 42% in PUVs and 30% in other aetiologies, and was significantly higher in PUVs (P=0.032). Renal prognosis was significantly worsened by discover of retentional bladder wall changes in initial cystoscopy, and by loss of parenchymal differentiation or cortical microcysts in first ultrasonography. The use of urinary diversion was significantly higher in LUTO of other aetiologies. Conclusion: A high incidence of CRD is observed in patients with congenital LUTO, significantly higher in patients with PUV. LUTO of other aetiologies require step by step surgical management and higher use of urinary diversion. Precise initial evaluation in cystoscopy and ultrasonography is required and participate to evaluate future renal outcome. Level Of Incidence: 4. abstract_id: PUBMED:1625138 Surgical management of duplex ureteroceles. We reviewed the cases of 95 children with duplex ureteroceles treated in this department over an 18-year period. There were 101 ureteroceles (6 bilateral). Diagnosis and treatment were analyzed. Special attention was paid to newborns screened in utero. We always strove to preserve functional renal tissue whenever possible. In keeping with this goal, three surgical techniques were used: (1) upper pole heminephrectomy; (2) ureterocele excision, bladder neck reconstruction, and ureter reimplantation with or without cutaneous ureterostomy of the upper pole ureter; and (3) endoscopic ureterocele incision. Follow-up studies using x-ray and radionuclide imaging demonstrated satisfactory renal function in 86.6% of patients. These findings support a conservative approach to ureteroceles using endoscopic ureterocele incision as the primary treatment. Lower urinary tract reconstruction may be associated in cases involving urinary tract infection, obstruction or incontinence. Upper pole heminephrectomy should be performed only after functional evaluation following ureterocele incision or cutaneous ureterostomy. Answer: The necessity of excising the ureterocele with reconstruction of the bladder base during lower urinary tract reconstruction for duplicated renal units with ureterocele is debated. Some studies suggest that excision of the ureterocele and reconstruction of the bladder base are not always necessary. For example, one study postulates that lower urinary tract reconstruction can be performed successfully through an extravesical approach without the need for excision of the ureterocele or reconstruction of the bladder base, presenting evidence that this approach can be accomplished safely with decreased morbidity (PUBMED:14501647). However, other studies have employed different surgical techniques, such as complete excision or marsupialization of ureteroceles, and have not found significant differences in clinical outcomes between the two methods, suggesting that the choice of surgical approach may not differentially affect outcomes (PUBMED:18721985). Additionally, some studies have shown that primary lower urinary tract reconstruction without upper tract ablative surgery can be successful in children with severely compromised or nonfunctioning renal moieties damaged by ureteroceles, correcting the obstructive or refluxing pathology with minimal morbidity and risk during long-term followup (PUBMED:15592074). In contrast, other research has indicated that lower urinary tract reconstruction is an effective treatment for ectopic ureterocele, and depending on the functional status of the renal parenchyma involved, the surgical procedure can be limited to the lower urinary tract in a significant percentage of cases (PUBMED:15756063). Overall, the decision to excise the ureterocele and reconstruct the bladder base should be individualized based on the patient's specific condition, the presence of associated complications such as vesicoureteral reflux or obstruction, and the surgeon's experience and preference. The literature suggests that while some patients may benefit from a more conservative approach that avoids excision of the ureterocele, others may require a more extensive reconstruction to achieve optimal outcomes.
Instruction: Sudden reversible vitritis after keratoprosthesis: an immune phenomenon? Abstracts: abstract_id: PUBMED:16227832 Sudden reversible vitritis after keratoprosthesis: an immune phenomenon? Purpose: To report our experience with late vitritis associated with keratoprosthesis (KPro). Methods: Between 1990 and 2003, 218 patients underwent an all-polymethylmethacrylate, collar button-shaped KPro surgery. Eight patients developed a total of 12 episodes of sudden, massive vitritis. Five of these patients had an Ahmed shunt implant, 3 had anterior vitrectomy during surgery, and 4 had a soft contact lens in place. Preoperative diagnoses were multiple graft failures, chemical burn, Stevens-Johnson syndrome, or ocular cicatricial pemphigoid. All patients were maintained on prophylactic topical ofloxacin 0.3% or polymyxin-B/trimethoprim, as well as prednisolone acetate 1% (in 2 cases, medroxyprogesterone 1%), at least twice daily. Vancomycin (14 mg/mL) was also given twice daily in 2 patients. Results: Vitritis occurred in 8 patients (12 episodes), 2 to 23 months postoperatively. All patients presented with sudden, very marked decrease in vision, with little or no pain, tenderness, conjunctival redness, or discharge. Eight of the 12 events were subjected to vitreous tap and injection of antibiotics and steroids on the day of presentation. Cultures grew Staphylococcus epidermidis, only in liquid (broth) media, in 3 cases; the other 5 showed no growth. The vitritis episodes resolved after 2 to 9 weeks. Full recovery to pre-episode status of a quiet eye with clear vitreous was seen in all patients. Visual acuity recovered almost completely or completely (mental debilitation in one patient made accurate assessment uncertain). Conclusions: This phenomenon of sudden vitritis after KPro, with few other symptoms and with complete recovery, would be uncharacteristic of a bacterial endophthalmitis. It may represent a uveitic immune phenomenon. abstract_id: PUBMED:21398411 Analysis of significant factors influencing visual acuity in ocular syphilis. Background: The aim of this study is to determine whether statistical associations can be demonstrated in ocular syphilis between baseline clinical and laboratory parameters with visual acuity at presentation and with any change in visual acuity after treatment. Methods: Charts of 26 patients (42 eyes) with ocular syphilis presenting to the Uveitis clinic of the Jules-Gonin Eye Hospital were reviewed. A baseline cross-sectional analysis was performed in order to identify any association between visual acuity at presentation and demographic, clinical or laboratory parameters. After treatment, any analogy between these parameters and a change in visual acuity was subsequently assessed in a series of univariate comparisons. Results: The following factors were associated with worse initial visual acuity: severity of visual field impairment at presentation (p=0.012), macular oedema (p=0.004) and optic neuropathy (p=0.031). There was a borderline association with the presence of vasculitis on fluroangiography (p=0.072). Improvement in best corrected visual acuity after treatment was significantly associated with the presence of vasculitis on fluroangiography (p=0.005), neurosyphilis, according to lumbar puncture findings (p=0.037) and marginally with anterior uveitis (p=0.070). Inflammation relapse was associated with the coexistence of pain as presenting sign (p&lt;0.001) and with a longer duration of symptoms prior to the initial visit (p=0.023). Conclusions: Severe ocular inflammation associated with vasculitis, vitritis or anterior uveitis in ocular syphilis would appear to be a reversible phenomenon that responds well to appropriate antibiotic treatment, resulting in improvement in visual acuity. Prompt treatment enables a good visual prognosis, while any delay in therapy increases the risk of subsequent relapse. abstract_id: PUBMED:2664327 Sense and nonsense of examinations in hematogenous and exogenous intra-ocular infections Diagnosis of intraocular inflammations can be difficult, particularly if the patient is suffering from immunodeficiency. Among the endogenous inflammations this applies mainly to cases of hyaloretinitis caused by Candida or other fungi, often presenting as an acute hyalitis with sudden onset; the various ocular complications of AIDS, especially those due to toxoplasmosis (rare) and cytomegalovirus, and lyme disease, which can occasionally cause chronic panuveitis. Among the exogenous cases it applies to postoperative and posttraumatic endophthalmitis. In all these cases the diagnosis has to be established clinically if possible, since, as a consequence of the immunodeficiency, serology is hardly helpful, and only biopsy (of the vitreous) may provide additional information. Rapid diagnosis is very important because of the urgent need for therapy. abstract_id: PUBMED:25526075 Idiopathic vitritis in the setting of Boston keratoprosthesis. Purpose: The aim of this study was to revisit the clinical paradigm attributed to Boston keratoprosthesis recipients presenting with idiopathic vitreous inflammation. Methods: A retrospective chart review was performed of keratoprosthesis recipients at Massachusetts Eye and Ear Infirmary, from January 2000 to August 2013, for demographic data, indication(s) for surgery, timing and presentation of vitreous inflammation, and best-corrected visual acuity at baseline, on presentation, and after resolution of vitritis. Results: Twenty-three (23 eyes) of 346 patients developed idiopathic vitreous inflammation after keratoprosthesis implantation. Six of 23 patients presented with signs and symptoms similar to infectious endophthalmitis but were culture negative. The proportion of patients who fit the previous paradigm of sudden painless loss of vision without external signs of infection ("sterile vitritis") at their first presentation with vitritis was only 4 of 23. Vision decline was variable (median, 9 lines on Snellen chart; range, 0-24), as was time to recovery of best vision (median, 8.9 weeks; range, 0.9-36.7). Nine eyes had repeat bouts (43 episodes in 23 patients). Ten of 43 episodes did not recover to baseline vision. Seventeen of 23 eyes with idiopathic vitritis after keratoprosthesis later developed other complications. Conclusions: The current paradigm for idiopathic vitritis after keratoprosthesis implantation includes sudden painless loss of vision with full recovery of vision on treatment with periocular corticosteroids. However, idiopathic vitritis after keratoprosthesis can also mimic infectious endophthalmitis with pain and external signs of inflammation. Visual loss can be gradual. Vision may not recover to baseline despite treatment. Vitritis may be a part of a common pathway of chronic inflammation after keratoprosthesis. abstract_id: PUBMED:32823455 Triple trouble: A case of retinochoroiditis in a patient with syphilis, tuberculosis, and human immunodeficiency virus infection. A 31-year-old male patient presented with sudden onset loss of vision in the left eye. Ocular examination revealed significant vitritis with chorioretinitis lesion in the posterior pole. Subsequent investigations revealed positive human immunodeficiency virus (HIV) and syphilis serology; chest imaging revealed active pulmonary tuberculosis. Polymerase chain reaction from aqueous aspirate was positive for Mycobacterium tuberculosis. There was complete resolution of the lesions following antisyphilitic medications, antitubercular therapy along with highly active antiretroviral therapy. Syphilis and tuberculosis coinfection in a previously unknown HIV patient is rare but can occur. It is worthwhile to look for multiple coinfections in HIV patients. abstract_id: PUBMED:14767654 Retinal vasculitis due to secondary syphilis. Background: Retinal vasculitis is one of the manifestations of ocular syphilis. Case: A 29-year-old man was referred to our hospital with the complaint of sudden visual loss in the left eye lasting more than three weeks. Observations: Ophthalmoscopic examination showed retinal hemorrhages, edema, and sheathing of large retinal arteries and veins. Fluorescein angiography revealed extensive occlusion of the affected retinal arteries, veins, and capillaries. Little evidence of uveitis or vitritis was observed. The fluorescent treponemal antibody-absorption test was positive, and the Treponema pallidum hemagglutination titer was 1:10,240. The treatment with penicillin was effective, leading to resolution of the retinal hemorrhages and edema, although occlusion of the retinal vessels persisted. Conclusions: Vascular occlusion occurred simultaneously in large retinal arteries, arterioles, and capillaries as well as in segments of retinal veins, resulting in irreversible changes in the vascular walls. abstract_id: PUBMED:8705749 HLA-B27--associated uveitis presenting with diffuse vitritis. A 26-year-old man complained of a sudden decrease in vision in the left eye. He had a past medical history of pauciarticular juvenile rheumatoid arthritis with a positive HLA-B27 haplotype and negative antinuclear antibodies and later development of ankylosing spondylitis. His ocular history was unremarkable. An ophthalmic examination revealed a predominant diffuse vitritis (3+) in the left eye. The uveitis improved after treatment with topical and systemic steroids. Following an extensive clinical and laboratory evaluation, a final diagnosis of HLA-B27-associated uveitis was made. Although uncommon, diffuse vitritis can occur in the setting of this uveitis and may be a predominant symptom. abstract_id: PUBMED:2039034 Acute multifocal inner retinitis. Two patients developed acute changes in vision two to four weeks after a febrile illness. On ophthalmic examination, each patient had bilateral vitreitis without anterior segment inflammation and multiple, bilateral, round, yellow-white inner retinal lesions that were located in the posterior pole and midperiphery. Laboratory tests did not contribute to a diagnosis. Symptomatic visual loss was caused by neuroretinitis and serous retinal detachment in one patient and by an occluded branch retinal artery in the other. The multifocal retinal lesions resolved gradually without treatment over several months with minimal or no residual retinal changes. Acute multifocal inner retinal lesions may be associated with a preceding nonspecific viral illness and may cause a sudden change in vision if associated with neuroretinitis, serous retinal detachment, or retinal vessel occlusion. We have termed this constellation of ophthalmic findings acute multifocal inner retinitis. abstract_id: PUBMED:33070639 Visual loss after long-haul flight. Background: Deep cerebral venous sinus thrombosis is a reversible yet potentially serious thromboembolic event. A number of reports suggest a relationship between long-haul flights and thromboembolic events, mainly deep venous thrombosis (DVT) and pulmonary embolism (PE). It is rarely reported to cause deep cerebral venous sinus thrombosis. We report a case of a bilateral papilledema after long-haul flight secondary to deep cerebral venous sinus thrombosis with subsequent complete recovery post corticosteroid and anticoagulant therapy. Case: A case of a 21-year-old woman with no known medical illness who presented with gradual painless bilateral visual loss is described. She had a history of travelling on a long-haul flight 3 weeks prior to presentation. Examination showed presence of bilateral papilloedema, no vitritis, choroiditis and retinitis. Blood investigations showed raised international normalised ratio (INR). Otherwise, workup for infectious causes of optic disc swelling, connective tissue disease screening were normal. Magnetic resonance imaging (MRI) and magnetic resonance venography (MRV) of the brain showed loss of flow signal in the right transverse sinus and the left sigmoid sinus. Blood workup for preexisting hypercoagulable state was normal. She was diagnosed with deep cerebral venous sinus thrombosis and showed complete recovery with oral corticosteroid and anticoagulant therapy. Conclusion: Deep cerebral venous sinus thrombosis is a potentially serious consequence of long-haul flights. A high index of suspicion along with radiological techniques is needed for early detection and initiation of anticoagulation for this reversible condition. abstract_id: PUBMED:29746787 Diagnostic Challenge with Nocardia Subretinal Abscess: A Case Report from Tuberculosis-Endemic Region. Purpose: To describe a case of Nocardia subretinal abscess, which posed a diagnostic challenge due to the presence of mycobacterial genome in sample obtained from fine needle aspiration biopsy (FNAB). Methods: A retrospective chart review. Results: A 25-year-old male presented with sudden, painless onset diminution of vision of left eye and found to have placoid patch of choroiditis just temporal to the macula in right eye and showed vitritis, subretinal abscess with exudative retinal detachment and vitreous hemorrhage in left eye. Both aqueous and vitreous samples were negative microbiologically and polymerase chain reaction for various genomes. Sample obtained from FNAB was positive for mycobacterial genome and yielded Nocardia arthritidis on culture. She was treated with antitubercular therapy and intravenous and intravitreal antimicrobials. Conclusions: Nocardia subretinal abscess can be a diagnostic challenge in tuberculosis-endemic region, especially in conditions when there is co-infection with Mycobacterium tuberculosis. Answer: The phenomenon of sudden vitritis after keratoprosthesis (KPro) surgery, characterized by a marked decrease in vision with little or no pain, tenderness, conjunctival redness, or discharge, has been reported in patients who underwent KPro surgery (PUBMED:16227832). This condition occurred in 8 patients out of 218, with a total of 12 episodes of sudden, massive vitritis, which resolved after 2 to 9 weeks, and patients recovered to pre-episode status with a quiet eye and clear vitreous (PUBMED:16227832). The vitritis episodes were uncharacteristic of bacterial endophthalmitis due to the lack of typical symptoms and complete recovery, suggesting that it may represent a uveitic immune phenomenon (PUBMED:16227832). The study on idiopathic vitritis in the setting of Boston keratoprosthesis also supports the idea that vitritis after KPro can mimic infectious endophthalmitis with pain and external signs of inflammation, but may also present as a "sterile vitritis" with sudden painless loss of vision (PUBMED:25526075). The visual loss can be variable, and vision may not recover to baseline despite treatment, indicating that vitritis may be part of a common pathway of chronic inflammation after keratoprosthesis (PUBMED:25526075). These findings suggest that sudden reversible vitritis after keratoprosthesis could indeed be an immune phenomenon, potentially related to a chronic inflammatory response rather than an infectious process. However, further research would be necessary to fully understand the underlying mechanisms and to differentiate it from other causes of vitritis, such as infectious endophthalmitis or other inflammatory conditions.
Instruction: Is mammography adequate for screening women with inherited BRCA mutations and low breast density? Abstracts: abstract_id: PUBMED:18349291 Is mammography adequate for screening women with inherited BRCA mutations and low breast density? Background: Several observational studies have shown that magnetic resonance imaging (MRI) is significantly more sensitive than mammography for screening women over age 25 at high risk for hereditary breast cancer; however, MRI is more costly and less specific than mammography. We sought to determine the extent to which the low sensitivity of mammography is due to greater breast density. Methods: Breast density was evaluated for all patients on a high-risk screening study who were diagnosed with breast cancer between November 1997 and July 2006. Density was measured in two ways: qualitatively using the four categories characterized by the Breast Imaging Reporting and Data System and quantitatively using a computer-aided technique and classified as (a) &lt;or=10%, (b) 11% to 25%, (c) 26% to 50%, and (d) &gt;50% density. Comparison of sensitivity of mammography (and MRI) for each individual density category and after combining the highest two and lowest two density categories was done using Fisher's exact test. Results: A total of 46 breast cancers [15 ductal carcinoma in situ (DCIS) and 31 invasive] were diagnosed in 45 women (42 with BRCA mutations). Mean age was 48.3 (range, 32-68) years. Overall, sensitivity of mammography versus MRI was 20% versus 87% for DCIS and 26% versus 90% for invasive cancer. There was a trend towards greater mammographic sensitivity for invasive cancer in women with fattier breasts compared with those with greater breast density (37-43% versus 8-12%; P = 0.1), but this trend was not seen for DCIS. Conclusion: It is necessary to add MRI to mammography for screening women with BRCA mutations even if their breast density is low. abstract_id: PUBMED:28138892 Breast cancer screening of pregnant and breastfeeding women with BRCA mutations. Screening recommendations for women with BRCA mutations include annual breast MRI starting at age 25, with annual mammogram added at age 30. The median age of childbearing in the US is age 28, therefore many BRCA mutation carriers will be pregnant or breastfeeding during the time when intensive screening is most important to manage their increased breast cancer risk. Despite this critical overlap, there is little evidence to guide clinicians on the appropriate screening for women with BRCA mutations during pregnancy or breastfeeding. Hormonal shifts that occur during pregnancy, the postpartum period, and breastfeeding result in changes to the breasts that may further complicate the sensitivity and specificity of screening modalities. We explore the safety and efficacy of available breast cancer screening modalities, including clinical breast exam, mammogram, breast MRI, and ultrasound among women with BRCA mutations who are pregnant or breastfeeding, providing recommendations from the most current published literature and expert opinion. abstract_id: PUBMED:20625126 Is mammographic breast density a breast cancer risk factor in women with BRCA mutations? Purpose: Increased mammographic breast density is well recognized as a breast cancer risk factor in the general population. However, it is unclear whether it is a risk factor in women with BRCA mutations. We present the results of a nested case-control screening study investigating the relationship between breast density and breast cancer incidence in this population. Patients And Methods: Women ages 25 to 65 years with known BRCA mutations were enrolled onto a single-center, high-risk breast cancer screening program. Using a computer-aided technique (Cumulus), quantitative percentage density (PD) was measured for each participant on her baseline mammogram by a single, blinded observer. Results: Between November 1997 and March 2008, 462 women (mean age, 44 years; 245 BRCA1 and 217 BRCA2) were screened and 50 breast cancers were diagnosed (38 invasive, 12 ductal carcinoma in situ [DCIS]). Density was not measured in 40 women of whom four developed cancer (three invasive, one DCIS). Mean PD (+/- standard deviation [SD]) for 376 women who did not develop breast cancer was 34% (23) compared with 31% (21) for 46 women with cancer (P = .51). Logistic regression model of breast cancer incidence and PD revealed an odds ratio of 0.99 (+/- 0.01 SD) for a one-unit increase in PD (P = .44). Results remained nonsignificant in multivariate analysis, as well as when women with pure DCIS were excluded. Conclusion: Increased mammographic breast density is not associated with higher breast cancer incidence in women with BRCA mutations. On the basis of these findings, density should not be considered a factor for these women in decision making regarding prophylactic surgery or chemoprevention. abstract_id: PUBMED:29139590 Breast density in screening mammography in Indian population - Is it different from western population? Mammography is the only method presently considered appropriate for mass screening of breast cancer. However, higher breast density was strongly associated with lower mammographic sensitivity. Breast density is also identified as independent and strongest risk factors for breast cancer. Studies have shown women with high breast density have four to six times increased risk of breast cancer as compare to women with fatty breast. It varies between different age group it generally decreases with increasing age in postmenopausal women and it can be different in different ethnic groups and people from different geographical areas. This study evaluates the breast density in Indian population and its relationship with the age. We reviewed of all screening mammography examinations performed from May 2012 to January 2015 at our institute PGIMER, Chandigarh, INDIA. Descriptive analyses were used to examine the association between age and breast density. A total of 6132 screening mammograms were performed. Each subgroup categorized by decade of age. There was a significant inverse relationship between age and breast density (P &lt; .001). Twenty-two percent of patients between 40 and 49 years old had dense breasts. This percentage decreased to 9% of women in their 50s. Only 7% of women in their 60s and 8% of women in their 70s had dense breasts. This data has been compared with the Western study done in New York University (NYU) shows there is significant difference (P value &lt;.05) in the breast density in Indian and Western population with more Indians having ACR Grade 1 and 2 and Western population having 2 and 3. We found an inverse relationship between patient age and mammographic breast density. However, there were a large proportion of young women who had lower grades of mammographic density which could potentially benefit from the use of routine screening mammography in this subgroup of patients. Moreover, the breast density of Indian population is less when compared to the Western population. This might suggest that mammography is a good modality of choice for screening Indian population. abstract_id: PUBMED:25186116 Cost-effectiveness of annual versus biennial screening mammography for women with high mammographic breast density. Objectives: The sensitivity of screening mammography is much lower among women who have dense breast tissue, compared with women who have largely fatty breasts, and they are also at much higher risk of developing the disease. Increasing mammography screening frequency from biennially to annually has been suggested as a policy option to address the elevated risk in this population. The purpose of this study was to assess the cost-effectiveness of annual versus biennial screening mammography among women aged 50-79 with dense breast tissue. Methods: A Markov model was constructed based on screening, diagnostic, and treatment pathways for the population-based screening and cancer care programme in British Columbia, Canada. Model probabilities and screening costs were calculated from screening programme data. Costs for breast cancer treatment were calculated from treatment data, and utility values were obtained from the literature. Incremental cost-effectiveness was expressed as cost per quality adjusted life year (QALY), and probabilistic sensitivity analysis was conducted. Results: Compared with biennial screening, annual screening generated an additional 0.0014 QALYs (95% CI: -0.0480-0.0359) at a cost of $819 ($ = Canadian dollars) per patient (95% CI: 506-1185), resulting in an incremental cost effectiveness ratio of $565,912/QALY. Annual screening had a 37.5% probability of being cost-effective at a willingness-to-pay threshold of $100,000/QALY. Conclusion: There is considerable uncertainty about the incremental cost-effectiveness of annual mammography. Further research on the comparative effectiveness of screening strategies for women with high mammographic breast density is warranted, particularly as digital mammography and density measurement become more widespread, before cost-effectiveness can be reevaluated. abstract_id: PUBMED:23535547 Mammography screening Breast cancer screening using mammography is one of the few screening interventions that have been assessed in several large randomized controlled studies (RCTs). Results from eight RCTs are available and have been summarized in several systematic reviews and meta-analyses, most of which conclude that mammography screening achieve a relative reduction in breast cancer mortality of 20 %. In the Swiss context this translates to one prevented breast cancer death over 10 years among 1000 women who start biannual mammography screening at age 50. However, mammography screening leads to the detection of cancers that would never have been found were it not for the screening (overdiagnosis). Screening thus increases the incidence of breast cancer, by about 20 %, which in turn leads to more therapies (surgery, radiation and chemotherapy). Over ten years after starting mammography at age 50, four additional women out of 1000 will be diagnosed with breast cancer. Without screening, these women would not have had a diagnosis. Over the same ten years (five screening rounds), about one in five women will have at least one mammography result which requires further diagnostic workup. Furthermore, even when mammography screening is implemented, 20 to 30 % breast cancer patients will not be diagnosed by the screening but between screening rounds. The information and knowledge of the population about benefits and harms of mammography screening is inadequate in that the benefits are overestimated and the harms underestimated. The female population needs to be better informed on both these aspects of mammography screening. Women have a right to receive adequate evidence-based information in order to make an informed decision whether to attend mammography screening. abstract_id: PUBMED:27444927 Comparative effectiveness of breast MRI and mammography in screening young women with elevated risk of developing breast cancer: a retrospective cohort study. Screening guidelines recommend that women with 20 % or greater lifetime risk of breast cancer undergo annual breast MRI screening to supplement mammography, irrespective of age. In patients less than 40 years, mammography is often avoided due to concerns about radiation and decreased performance. However, prior studies have been limited by large percentages of women above 40 with decreased breast density. Our purpose was to test whether adding mammography to breast MRI screening compared to breast MRI screening alone in women below 40 increases cancer detection rates. After obtaining IRB approval, chart review identified patients aged 25-40 years undergoing breast MR screening (2005-2014). Demographics, risk factors, BI-RADS assessments, background parenchymal enhancement, and mammographic breast tissue density were recorded. Cancer detection rates, short-term follow-up (BIRADS 3), image-guided biopsy (BIRADS 4,5), and PPV1-3 were calculated. 342 breast MRI exams were identified (average age was 33, 37 % were nulliparous, and 64 % had prior benign biopsy), 226 (66 %) of which underwent concurrent mammography. Risk factors included 64 % with breast cancer in first-degree relative(s), 90 % had heterogeneous or extremely dense breast tissue on mammography, and 16 % were BRCA carriers. Four invasive cancers were detected by MRI (11.7 cancers/1000 examinations, 95 % CI 8.3, 15.1). None of these was detected by mammography, and no cancers were independently identified by mammography. Breast MRI screening in high-risk women under 40 yielded elevated cancer detection rates (11.7/1000). The cancer detection rate for mammography was 0 %, suggesting that MRI alone may be useful in screening high-risk women under 40. abstract_id: PUBMED:29974357 Screening mammography: benefit of double reading by breast density. Purpose: The currently recommended double reading of all screening mammography examinations is an economic burden for screening programs. The sensitivity of screening is higher for women with low breast density than for women with high density. One may therefore ask whether single reading could replace double reading at least for women with low density. We addressed this question using data from a screening program where the radiologists coded their readings independently. Methods: Data include all screening mammography examinations in the Capital Region of Denmark from 1 November 2012 to 31 December 2013. Outcome of screening was assessed by linkage to the Danish Pathology Register. We calculated sensitivity, specificity, number of interval cancers, and false positive-tests per 1000 screened women by both single reader and consensus BI-RADS density code. Results: In total 54,808 women were included. The overall sensitivity of double reading was 72%, specificity was 97.6%, 3 women per 1000 screened experienced an interval cancer, and 24 a false-positive test. Across all BI-RADS density codes, single reading consistently decreased sensitivity as compared with consensus reading. The same was true for specificity, apart from results across BI-RADS density codes set by reader 2. Conclusions: Single reading decreased sensitivity as compared with double reading across all BI-RADS density codes. This included results based on consensus BI-RADS density codes. This means that replacement of double with single reading would have negative consequences for the screened women, even if density could be assessed automatically calibrated to the usual consensus level. abstract_id: PUBMED:35059587 Genetic Counseling, Screening and Risk-Reducing Surgery in Patients with Primary Breast Cancer and Germline BRCA Mutations: Unmet Needs in Low- and Middle-Income Countries. Objective: Worldwide genetic counseling practices are variable and often not reported in low- and middle-income countries (LMICs). We present the follow-up genetic counseling, breast screening, risk-reducing salpingo-oophorectomy (RRSO) and contralateral prophylactic mastectomy (CPM) in a cohort of study patients with either BRCA pathogenic mutations or BRCA variant of unknown significance (VUS). Materials And Methods: Chart review and phone calls for the collection of information. Out of a cohort of 250 patients, 14 had deleterious mutations and 31 had a VUS, of whom 19 had primary early breast cancer. We collected information about genetic counseling, screening, CPM and RRSO. Results: Fourteen patients with deleterious mutations (7 BRCA1 and 7 BRCA2) and 19 patients with VUS mutations (20 VUS, 4 BRCA1, 16 BRCA2; 1 patient had both) were surveyed. Of 14 patients with deleterious BRCA mutations, 57.14% (8/14 patients) received genetic counseling from their oncologist. Subsequently 85.71% (12/14) are undergoing mammography screening and 35.71% (5/14) breast screening magnetic resonance imaging (MRI). Furthermore, 50% of them underwent CPM and 57.14% underwent RRSO. Of 19 patients with VUS mutations, 10.5% received genetic counseling from their oncologist; 78.9% were undergoing regular screening mammogram and 31.5% were undergoing breast MRI; one patient underwent CPM and two patients RRSO. Conclusion: Within three years from knowing they have a mutation, 50% of patients with germline BRCA mutations had undergone CPM and 60% RRSO, the majority of them had screening mammography surveillance but only 50% had screening MRI. Follow-up of patients with VUS with mammography was 78% but MRI was only 31%. Lack of MRI surveillance reflects both limited resources and insufficient counseling. Genetic counseling was done by medical oncologists, which reflects a trend in LMIC. Our Data shows the importance of the need for professional genetic counselors and optimal surveillance in Lebanon and other LMICs. abstract_id: PUBMED:21365619 Effectiveness of alternating mammography and magnetic resonance imaging for screening women with deleterious BRCA mutations at high risk of breast cancer. Background: Magnetic resonance imaging (MRI) has been used to supplement screening mammography and clinical breast examination (CBE) in women who are at high risk of developing breast cancer. In this study, the authors investigated the efficacy of alternating screening mammography and breast MRI every 6 months in women who had a genetically high risk of developing breast cancer. Methods: A retrospective chart review was performed on all women who were seen in a high-risk breast cancer clinic from 1997 to 2009. Patients with breast cancer gene (BRCA) mutations who underwent alternating screening mammography and breast MRI every 6 months were included in the study. Mammography, ultrasonography, MRI, and biopsy results were reviewed. Results: Of 73 patients who met the study criteria, 37 had BRCA1 mutations, and 36 had BRCA2 mutations. Twenty-one patients (29%) completed 1 cycle of mammography and MRI surveillance, 23 patients (31%) completed 2 cycles, 18 patients (25%) completed 3 cycles, and patients 11 (15%) completed ≥ 4 cycles. The median follow-up was 2 years (range, 1-6 years). Thirteen cancers were detected among 11 women (15%). The mean tumor size was 14 mm (range, 1-30 mm), and 2 patients had bilateral cancers. Twelve of 13 cancers were detected on an MRI but not on the screening mammography study that was obtained 6 months earlier. One cancer (a 1-mm focus of ductal carcinoma in situ) was an incidental finding in a prophylactic mastectomy specimen. One patient had ipsilateral axillary lymphadenopathy identified on ultrasonography but had no evidence of lymph node involvement after neoadjuvant chemotherapy and surgery. Conclusions: In women who were at genetically high risk of developing breast cancer, MRI detected cancers that were not identified on mammography 6 months earlier. Future prospective studies are needed to evaluate the benefits of this screening regimen. Answer: Mammography is not considered adequate for screening women with inherited BRCA mutations, even if they have low breast density. A study found that the overall sensitivity of mammography versus MRI was significantly lower for both ductal carcinoma in situ (DCIS) and invasive cancer, with mammography detecting only 20% of DCIS and 26% of invasive cancers compared to MRI's 87% and 90%, respectively. This trend was observed despite breast density, indicating that MRI should be added to mammography for screening women with BRCA mutations (PUBMED:18349291). Additionally, mammographic breast density is not associated with higher breast cancer incidence in women with BRCA mutations, suggesting that density should not be a factor in decision-making regarding prophylactic surgery or chemoprevention for these women (PUBMED:20625126). Therefore, for women with BRCA mutations, MRI is a necessary supplement to mammography for effective screening, irrespective of breast density.
Instruction: Should an Inoue balloon larger than suggested by guidelines be used for percutaneous balloon mitral valvuloplasty? Abstracts: abstract_id: PUBMED:24383377 Should an Inoue balloon larger than suggested by guidelines be used for percutaneous balloon mitral valvuloplasty? Background And Aim Of The Study: The study aim was to evaluate the immediate and long-term results of single inflation using a larger balloon size in patients with symptomatic rheumatic mitral stenosis (MS) METHODS: Among a study population of 231 patients with MS, percutaneous balloon mitral valvuloplasty (PBMV) was performed using the Inoue balloon technique. The ideal balloon size was measured according to the patients' height. Patients were allocated at random to two groups: a stepwise method was used in 115 patients (group 1), and a single-inflation method (with +2 mm larger balloon size) in 116 patients (group 2). Follow up was scheduled at six-month intervals for the first year, and annually thereafter. Patients were followed up until 2009. Results: The mitral valve area (MVA) was increased from pre-PBMV values of 1.2 +/- 0.3 cm2 and 1.1 +/- 0.21 cm2 in groups 1 and 2, respectively, to post-PBMV values of 1.9 +/- 0.34 cm2 and 2.0 +/- 0.28 cm2 in groups 1 and 2, respectively; the increase in MVA for each group was statistically significant (p &lt; 0.01) The mean transmitral pressure gradient (MMG) was decreased from pre-PBMV values of 14.1 +/- 5.5 mmHg and 13.2 +/- 5.9 mmHg in groups 1 and 2, respectively, to post-PBMV values of 5.9 +/- 2.3 mmHg and 5 +/- 2.5 mmHg in groups 1 and 2, respectively. One patient in group 2 underwent surgery due to severe mitral regurgitation. The procedure time was significantly shorter in group 2 (32 +/- 11.5 min versus 25 +/- 11 min; p &lt; 0.001). The mean follow up duration was 49.5 +/- 19.2 months (range: 24-84 months). At the last follow up examination, the MVA was shown to be significantly larger in group 2 than in group 1 (1.65 +/- 0.3 versus 1.42 +/- 0.34 cm2; p = 0.02), while the MMG was lower (9 +/- 3.6 versus 6.7 +/- 3 mmHg; p = 0.017). Conclusion: The study results suggested that using a +2 mm larger balloon size with single overinflation may represent an alternative and effective therapy, with a shorter procedure time. abstract_id: PUBMED:31659518 Additive value of 3D-echo in prediction of immediate outcome after percutaneous balloon mitral valvuloplasty. Background: Results of percutaneous balloon mitral valvuloplasty (BMV) are basically dependent on suitable patient selection. Currently used two-dimensional (2D) echocardiography (2DE) scores have many limitations. Three-dimensional (3D) echocardiography (3DE)-based scores were developed for better patient selection and outcome prediction. We aimed to compare between 3D-Anwar and 2D-Wilkins scores in mitral assessment for BMV, and investigate the additive value of 3DE in prediction of immediate post-procedural outcome. Fifty patients with rheumatic mitral stenosis and candidates for BMV were included. Patients were subjected to 2D- and real-time 3D-transthoracic echocardiography (TTE) before and immediately after BMV for assessing MV area (MVA), 2D-Wilkins and 3D-Anwar score, commissural splitting, and mitral regurgitation (MR). Transesophageal echocardiography (TEE) was also undertaken immediately before and intra-procedural. Percutaneous BMV was performed by either multi-track or Inoue balloon technique. Results: The 2DE underestimated post-procedural MVA than 3DE (p = 0.008). Patients with post-procedural suboptimal MVA or significant MR had higher 3D-Anwar score compared to 2D-Wilkins score (p = 0.008 and p = 0.03 respectively). The 3D-Anwar score showed a negative correlation with post-procedural MVA (r = - 0.48, p = 0.001). Receiver operating characteristic (ROC) curve analysis for both scores revealed superior prediction of suboptimal results by 3D-Anwar score (p &lt; 0.0001). The 3DE showed better post-procedural posterior-commissural splitting than 2DE (p = 0.004). Results of both multi-track and Inoue balloon were comparable except for favorable posterior-commissural splitting by multi-track balloon (p = 0.04). Conclusion: The 3DE gave valuable additive data before BMV that may predict immediate post-procedural outcome and suboptimal results. abstract_id: PUBMED:7502011 Mitral valvuloplasty using the Inoue balloon Percutaneous mitral balloon valvuloplasty (PMBV) is an accepted alternative treatment to open and closed mitral commissurotomy or mitral valvular replacement. The Inoue technique has become standard in most centers. In our first 24 percutaneous balloon mitral valvuloplasties by the Inoue technique, 23 procedures were technically successful. The mean age of the patients was 53 (24-75) years. There were 22 women. Four patients had a history of closed mitral commissurotomy, one of previous mitral balloon valvuloplasty, and one of aortic metallic valve replacement. The mean echocardiographic mitral Wilkins score was 7.3 (range 4-13). PMBV resulted in significant improvement of hemodynamic values. The mean mitral pressure gradient fell from 12 +/- 5 to 5 +/- 3 mm Hg (p = 0.0001) and the cardiac index increased from 2.7 +/- 0.7 to 3.0 +/- 0.8 l/min/m2. The valve area by the Gorlin formula increased from 1.2 +/- 0.3 to 2.1 +/- 0.6 cm2 (p = 0.0001). Doppler and planimetric echocardiography data were in keeping with hemodynamic data. Mitral valve regurgitation increased by more than 1 grade in 3 patients, 2 of whom subsequently underwent valve replacement. No tamponade occurred with the Inoue technique. There was 1 fatal outcome following tamponade and emergency heart surgery after mitral valvuloplasty with a Trefoil balloon employed in a subsequent intervention due to impossibility of placing the Inoue balloon. Left-to-right shunting at the atrial level after the intervention was not significant in any patient. 21 patients (88%) had improvement in their functional class. One of the patients with unchanged functional class had late onset of severe mitral regurgitation, another had a technical failure with the Inoue technique, and in 1 patient with calcified valve leaflets significant mitral stenosis persisted. At 3 to 15 months follow-up echocardiography was performed in 19 patients: mitral valve areas had not changed significantly compared to post-interventional values. One patient had a new mitral regurgitation compared with the situation immediately after PMBV. Mitral balloon valvuloplasty by the Inoue technique is an effective treatment with low risk in patients with symptomatic mitral stenosis. abstract_id: PUBMED:23197310 Concurrent antegrade transseptal Inoue-balloon mitral and aortic valvuloplasty. Background: The Inoue balloon has been in use for many years for mitral valvuloplasty. Aortic valvuloplasty using the Inoue balloon via transseptal approach was developed in the hope of providing better results with less potential vascular access complications. Methods And Results: In this study, we present our experience in percutaneous valvuloplasty using the Inoue balloon in 14 patients with combined rheumatic mitral and aortic stenosis (AS) in a single stage procedure via antegrade transseptal approach. The study group was characterized by relatively young age (mean 37.5 ± 9.6 years). Aortic followed by mitral valvuloplasty via antegrade approach resulted in a fall of transaortic peak pressure gradient (PG) from 59.1 ± 11.2 mm Hg to 25.3 ± 12.5 mm Hg (P = 0.012) and mean from 49.0 ± 10.9 mm Hg to 16.6 ± 9.8 mm Hg (P = 0.043). Aortic valve areas increased significantly from 0.70 ± 0.25 cm(2) to 1.41 ± 0.48 cm(2) (P = 0.042). Mean transmitral PG decreased from 14.9 ± 2.1 mm Hg to 5.3 ± 1.5 with increase of mitral valve areas from 1.08 ± 0.45 to 1.92 ± 0.51 cm(2). The procedures were well tolerated without development of significant valvular regurgitation or thromboembolism. During follow-up, 2 patients died due to lung cancer and sudden death at months 48 and 100. Five patients received delayed surgery after mean duration of 73.4 ± 39.7 months. Conclusions: Concurrent antegrade, transseptal Inoue-balloon aortic and mitral valvuloplasty, is feasible and safe, and provides excellent immediate results as one-stage procedure. The study results also suggest that balloon aortic valvuloplasty can be more durable in younger patients with rheumatic AS than in elderly patients with degenerative, AS. However, the modified aortic valvuloplasty technique can be utilized only as bridging procedure to aortic valve replacement or recently developed transcatheter aortic-valve implantation in unstable hemodynamic status, and as a palliative procedure before noncardiac surgery. abstract_id: PUBMED:35353943 A novel "proximal first" Inoue balloon catheter for retrograde aortic valvuloplasty: Initial case report. The Inoue balloon, invented for percutaneous transseptal mitral commissurotomy for mitral stenosis, is initially dilated in the distal portion and then the proximal portion, forming an hourglass shape that stabilizes the balloon at the mitral valve orifice with a pulling action. The device has been successfully applied to antegrade aortic valvuloplasty; the hourglass shape stabilizes the balloon across the aortic valve without rapid ventricular pacing. Subsequently, an Inoue balloon was developed for retrograde aortic valvuloplasty using the same design as the antegrade balloon. The hourglass-shaped balloon, however, has difficulty maintaining stability across the aortic valve because the distal portion of the balloon, which inflates first, directly receives systolic pressure from the left ventricle. A novel Inoue balloon was invented to overcome this issue by altering the manner of inflation: the proximal portion inflates first, followed by the distal portion. Theoretically, the proximal portion, which initially inflates just above the aortic valve, remains secure across the valve by applying a pressing force that counteracts the systolic forward pressure. Furthermore, a radiopaque marker is placed at the center of the balloon to enable rapid and precise balloon adjustment. We present a case involving a 65-year-old female hemodialysis patient with severe symptomatic aortic stenosis who we successfully treated with balloon aortic valvuloplasty using the novel Inoue balloon followed by transcatheter aortic valve implantation. The present case demonstrates the feasibility and effectiveness of the novel "proximal first" Inoue balloon for retrograde balloon aortic valvuloplasty for severe aortic stenosis. abstract_id: PUBMED:28511424 Mechanism and Immediate Outcome of Significant Mitral Regurgitation Following Balloon Mitral Valvuloplasty with JOMIVA Balloon. Introduction: Balloon Mitral Valvuloplasty (BMV) with inoue balloon is the most common technique being followed worldwide. Over the wire BMV is a modified technique with Joseph Mitral Valvuloplasty (JOMIVA) balloon and is being followed in certain centres. We hypothesized that, the incidence and mechanism of Mitral Regurgitation (MR) is likely to be different from that of inoue balloon. Aim: To assess the mechanism and immediate clinical outcome of significant MR following BMV with JOMIVA balloon retrospectively. Materials And Methods: We retrospectively analyzed the outcome of 48 patients who developed moderate to severe MR out of 249 patients who underwent BMV in our institute. We analyzed the echocardiographic and clinical parameters of these patients. Results: Nineteen (7.6%) patients developed severe MR and 29 (11.2%) patients developed moderate MR. Commisural separation resulting in MR was the most common cause which was contributing to 73.6 % and 85.7% of patients with moderate and severe MR respectively. Leaflet tear was the second most common cause which contributed to 15.7% and 14.2% of patients with severe and moderate MR respectively. Chordal rupture contributed to 10.5% of patients with severe MR. Six (31.6%) patients with severe MR developed worsening breathlessness among them one had to be referred for mitral valve replacement during index hospitalization and the rest could be managed medically. Patients with moderate MR remained asymptomatic and stable. Conclusion: Severe MR following JOMIVA BMV results most commonly due to wide separation of commisures. JOMIVA balloon is less likely to cause damage to subvalvular structures than inoue balloon. Most patients who develop severe MR will not require emergency mitral valve replacement. Moderate MR is well tolerated clinically. abstract_id: PUBMED:8154999 Mitral valvuloplasty with Inoue balloon catheter. Analysis of 112 cases Purpose: To present immediate results of percutaneous mitral valvuloplasty (PVM) with Inoue balloon for the treatment of mitral stenosis. Methods: One hundred and twelve consecutive patients mean age 31.26 +/- 9.5 years, 82% females, the majority in functional class III/IV (69%) underwent percutaneous mitral valvuloplasty (PVM) with Inoue balloon. The procedure was undertaken by the transseptal technique. Results: PMV achieved significant reduction in mean left atrial pressure pre 24.57 +/- 5.37 post 12.36 +/- 4.82, mitral gradient pre 15.31 +/- 4.28 post 3.48 +/- 2.09 and increased mitral valve area pre 0.88 +/- 0.12 post 2.0 +/- 0.28 (P &lt; 0.001). An increase of at least 25% of the mitral valve area with a final result greater than 1.5 cm2 was obtained in 96% of the cases. Cardiac tamponade occurred in five cases (4.4%). In 4 a successful treatment was accomplished in the catheterization laboratory. Significant mitral regurgitation was seen in one case (0.9%) and there was one death (0.9%). Conclusion: The results of this series attest to the effectiveness of the PMV with Inoue balloon in selected cases of mitral stenosis. abstract_id: PUBMED:22109970 Mitral valvuloplasty with the Inoue balloon tracked over an arteriovenous wire. Mitral stenosis is associated with significant morbidity and mortality. Percutaneous balloon mitral valvuloplasty is a standard treatment option for severe mitral stenosis, and in some patients may be the only feasible option. In patients with suitable valve anatomy and morphology, and with requisite physician expertise the procedure has a high success rate and good long-term outcomes. Nonetheless, even in appropriately selected patients the procedure can pose significant technical challenges. The transeptal antegrade approach using the Inoue Balloon system may be associated with difficulty and rarely with failure to cross the stenotic mitral valve. We report a case of initial Inoue balloon failure to cross followed by successful Inoue mitral valvuloplasty tracked over an arteriovenous wire loop. abstract_id: PUBMED:8853139 Inoue balloon deformity and rupture during percutaneous balloon valvuloplasty. In the management of mitral stenosis, similar long-term results can be obtained by using either an Inoue balloon catheter or a double-balloon technique for percutaneous balloon valvuloplasty. There have been few reports concerning any deformity of an Inoue balloon. From January 1988 to June 1995, 263 procedures of either mitral or tricuspid valvuloplasty have been performed in this center. The Inoue balloon catheter technique was used for 245 procedures. A deformity of the Inoue balloon catheter was noted in 4 (1.6%) and actual rupture of deformed balloon occurred in one (0.4%). All deformities were found at the distal portion of the Inoue balloon. Valvular insufficiency became more severe after valvuloplasty in two cases. Following rupture of the balloon, neither arterial embolization nor perforation of the cardiac chambers developed. In conclusion, a deformity of the Inoue balloon, although rare, can develop during percutaneous balloon valvuloplasty. The deformity may portend balloon rupture if additional maximal dilatations are undertaken. abstract_id: PUBMED:28582067 Percutaneous Mitral Balloon Valvuloplasty for Patients with Rheumatic Mitral Stenosis. Percutaneous balloon dilatation of stenotic cardiac valves is used for the treatment of pulmonic, mitral, aortic, and tricuspid stenosis. Percutaneous mitral balloon valvuloplasty (PMV) has been used successfully as an alternative to open or closed surgical mitral commissurotomy in the treatment of symptomatic rheumatic mitral stenosis. PMV produces good immediate hemodynamic outcome, low complication rates, and clinical improvement in the majority of patients. PMV is safe and effective and provides clinical and hemodynamic improvement in rheumatic mitral stenosis. PMV is the preferred form of therapy for relief of mitral stenosis for a selected group of patients with symptomatic mitral stenosis. Answer: The use of an Inoue balloon larger than suggested by guidelines for percutaneous balloon mitral valvuloplasty (PBMV) has been evaluated in a study where patients with symptomatic rheumatic mitral stenosis were randomly allocated to two groups: one using a stepwise method with the recommended balloon size and another using a single-inflation method with a balloon size that was +2 mm larger. The results suggested that using a larger balloon size with single overinflation may represent an alternative and effective therapy, with a shorter procedure time. The mitral valve area (MVA) was significantly larger and the mean transmitral pressure gradient (MMG) was lower in the group with the larger balloon at the last follow-up examination. Only one patient in the group with the larger balloon underwent surgery due to severe mitral regurgitation, indicating that the larger balloon size did not significantly increase the risk of this complication (PUBMED:24383377). Therefore, based on the study's results, using an Inoue balloon larger than suggested by guidelines could be considered as an alternative approach for PBMV, especially since it was associated with a shorter procedure time and better long-term MVA outcomes. However, it is important to note that this is based on a single study, and further research may be needed to confirm these findings and to evaluate the safety and efficacy of this approach in a larger patient population. Additionally, individual patient anatomy and the experience of the operating physician should be taken into account when considering the use of a larger balloon size.
Instruction: Appendicitis in northern aboriginal children: does delay in definitive treatment affect outcome? Abstracts: abstract_id: PUBMED:20438920 Appendicitis in northern aboriginal children: does delay in definitive treatment affect outcome? Background: The treatment of northern aboriginal children (NAC) is often complicated by distance from a treating facility. We sought to compare outcomes of NAC requiring transfer with appendicitis to those who presented locally. We hypothesized that NAC with appendicitis experienced higher rates of perforation and increased length of stay (LOS). Methods: A retrospective chart review of 210 appendectomies was performed. Charts were reviewed for age, sex, weight, days of symptoms before presentation, time of transfer, leukocyte count (white blood cell count), usage of antibiotics prior to transfer, time to operation, type of procedure and findings, pathology, postoperative outcomes, and LOS. Results: Sixty-eight children were NAC, whereas 142 were local. The average transfer times for NAC was 10 hours (range, 4-20 hours). The two groups had similar ages (11.1 vs 10.7 years), time to presentation (1.64 vs 1.85 days), and LOS (2.91 vs 2.90 days). Significantly higher perforation rates (44 vs 28%; P = .02), higher white blood cell count (17.9 vs 16.0; P = .02), and longer times to operation after arrival (10.3 vs 7.0 hours; P = .0002) were noted in NAC. Postoperative complications were similar between groups. Forty-seven (69%) NAC received antibiotics prior to transfer, which did not affect rate of rupture. Conclusion: NAC with appendicitis experience longer transfer times and higher perforation rates than local children without a difference in length of stay or complications. Pretransfer antibiotics do not reduce perforation rates but may impact complications. We endorse their use if a delay in transfer is anticipated. abstract_id: PUBMED:18367138 Timing of intervention does not affect outcome in acute appendicitis in a large community practice. Background: Surgical management of acute appendicitis remains one of the most frequent problems faced by gastrointestinal surgeons. Traditional management has emphasized urgent surgical care. Recent literature suggests delayed surgery for acute appendicitis does not affect outcome. The outcomes of patients undergoing urgent and delayed appendectomy in a large community surgical practice are compared. Methods: All patients undergoing appendectomy between August 2002 and May 2007 were reviewed retrospectively. The data were gathered from a large community surgical practice. Patient demographics, treatment times, and surgical, pathologic, and postsurgical outcomes were documented. Results: A total of 1,198 patients underwent appendectomy (575 female/623 male). The mean time to surgical intervention was 7.1 hours (range, 1-24 h). The percentage of patients undergoing laparoscopy versus open versus surgical conversion was 63%, 33%, and 4%, respectively. The percentage of patients with acute appendicitis versus perforated acute appendicitis versus negative exploration was 77%, 14%, and 8.5%, respectively. Postoperative wound or intra-abdominal septic complications were observed in 5.3% and 2.6% of patients, respectively. There was no correlation between the duration of symptoms or time to surgical intervention and surgical approach, pathologic outcome, length of stay, or postoperative septic complications. Conclusions: Outcome variables documented in this study were independent of duration of symptoms or time to surgical intervention. This would suggest that short delays in surgical intervention for acute appendicitis are well tolerated. Outcome is related more clearly to the severity of the acute appendicitis at presentation. abstract_id: PUBMED:29395151 Outcome of initially nonoperative treatment for acute simple appendicitis in children. Purpose: To compare the outcome of initially nonoperative treatment with immediate appendectomy for simple appendicitis in children. Methods: Between September 2012 and June 2014 children aged 7-17 years with a radiologically confirmed simple appendicitis were invited to participate in a multicentre prospective cohort study in which they were treated with an initially nonoperative treatment strategy; nonparticipants underwent immediate appendectomy. In October 2015, their rates of complications and subsequent appendectomies, and health-related quality of life (HRQOL) were assessed. Results: In this period, 25 children were treated with an initially nonoperative treatment strategy and 19 with immediate appendectomy; median (range) follow-up was 25 (16-36) and 26 (17-34) months, respectively. The percentage [95%CI] of patients experiencing complications in the initially nonoperative group and the immediate appendectomy group was 12 [4-30]% and 11 [3-31]%, respectively. In total 6/25 children (24%) underwent an appendectomy; none of the 6 patients operated subsequently experienced any postappendectomy complications. Overall, HRQOL in the nonoperative treatment group was similar to that of healthy peers. Conclusions: Outcome of initially nonoperative treatment for acute simple appendicitis in children is similar to the outcome in those who undergo immediate appendectomy. Initially nonoperative management seems to be able to avoid appendectomy in 3 out of 4 children. Level Of Evidence: 2 (prospective comparative study). This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors. abstract_id: PUBMED:10359176 The impact of intraoperative culture on treatment and outcome in children with perforated appendicitis. Background: Most protocols for the operative treatment of perforated appendicitis use a routine culture. Although isolated studies suggest that routine culture may not be necessary, these recommendations generally are not based on objective outcome data. Methods: The authors reviewed the records of 308 children who underwent operative treatment for perforated appendicitis between 1988 and 1998 to determine if information gained from routine culture changes the management or improves outcome. Inclusion criteria included either gross or microscopic evidence of appendiceal perforation. Results: Mean patient age was 7.5 years, 51% were boys, and there was no mortality. The majority of children (96%) underwent culture that was positive for either aerobes (21%), anaerobes (19%), or both (57%). Antibiotics were changed in only 16% of the patients in response to culture results. The use of empiric antibiotics, as compared with modified antibiotics, was associated with a lower incidence of infectious complication, shorter fever duration, and decreased length of hospitalization. We also investigated the relationship between culture isolates and antibiotic regimens with regard to outcome. The utilization of antibiotics suitable for the respective culture isolate or organism sensitivity was associated with an increased incidence of infectious complication and longer duration of both fever and length of hospitalization. Finally, the initial culture correlated poorly with subsequent intraabdominal culture (positive predictive value, 11%). Conclusion: These outcome data strongly suggest that the practice of obtaining routine cultures can be abandoned, and empiric broad spectrum antibiotic coverage directed at likely organisms is completely adequate for treatment of perforated appendicitis in children. abstract_id: PUBMED:17701439 Computed tomography in the diagnosis of acute appendicitis: definitive or detrimental? Objectives: Utilization of computed tomography (CT) scans in patients with presumed appendicitis was evaluated at a single institution to determine the sensitivity of this diagnostic test and its effect on clinical outcome. Methods: Adult patients (age &gt; 17 years) with appendicitis were identified from hospital records. Findings at surgery, including the incidence of perforation, were correlated with imaging results. Results: During a 3-year period, 411 patients underwent appendectomy for presumed acute appendicitis at our institution. Of these patients, 256 (62%) underwent preoperative CT, and the remaining 155 (38%) patients did not have imaging before the surgery. The time interval between arrival in the emergency room to time in the operating room was longer for patients who had preoperative imaging (8.2 +/- 0.3 h) compared to those who did not (5.1 +/- 0.2 h, p &lt; 0.001). Moreover, this possible delay in intervention was associated with a higher rate of appendiceal perforation in the CT group (17 versus 8%, p = 0.017). Conclusions: Preoperative CT scanning in patients with presumed appendicitis should be used selectively as widespread utilization may adversely affect outcomes. The potential negative impact of CT imaging includes a delay in operative intervention and a potentially higher perforation rate. abstract_id: PUBMED:24004383 Pediatric surgeon vs general surgeon: does subspecialty training affect the outcome of appendicitis? Background: The absence of pediatric surgeons in many centers results in restriction of patient access to pediatric subspecialty care. The aim of this study was to compare the outcomes of children treated for appendicitis by pediatric surgeons (PS) and by general surgeons (GS). Methods: This was a retrospective review of the charts of all consecutive patients &lt;16 years old who underwent appendectomy during 2 years The primary outcome measure was the overall rate of complications. Secondary outcome measures included length of hospital stay (LOS), symptom duration, time from emergency department diagnosis to surgery, and readmission rate within 30 days. Results: A total of 94 patients (PS group, n = 66; GS group, n = 28) were included. PS patients were younger. For patients with complicated appendicitis, complications were significantly more prevalent in the GS group (57% vs 15%; P = 0.0001). Median LOS was not significantly different between the two groups for complicated appendicitis, but patients with non-complicated appendicitis had a significant longer LOS when treated by PS (3.74 ± 1.5 vs 2.57 ± 1.21 days; P = 0.0041). Patients in the PS group had a prolonged use of antibiotics (2 vs 4 days; P = 0.001), and longer LOS (3 vs 4 days; P = 0.0018). Conclusions: Overall complication rates were similar between PS and GS. Complications were significantly more prevalent in patients with complicated appendicitis who were treated by GS. abstract_id: PUBMED:23070046 Non-operative treatment of right-sided colonic diverticulitis has good long-term outcome: a review of 226 patients. Introduction: Data highlighting the long-term outcome following an initial episode of right-sided colonic diverticulitis is lacking. This study aims to evaluate and follow up on all patients with right-sided colonic diverticulitis. Methods: A retrospective review of all patients who were discharged with a diagnosis of right-sided colonic diverticulitis from January 2003 to April 2008 was performed. Results: A total of 226 patients, with a median age of 49 (range, 16-93) years, were admitted for acute right-sided colonic diverticulitis. The majority of the patients (n = 198, 87.6 %) had mild diverticulitis (Hinchey Ia and Ib). Seventy-three (32.3 %) patients underwent emergency surgery. The indications of surgery were predominantly suspected appendicitis (n = 50, 22.1 %) and perforated diverticulitis (n = 16, 7.1 %). Right hemicolectomy was performed in 32 (43.8 %) patients, while appendectomy, with or without diverticulectomy, was performed in the rest (n = 41, 56.2 %). There were seven patients who underwent elective right hemicolectomy after their acute admissions.Over a median duration of 64 (12-95) months, there were only nine patients who were readmitted 12 times for recurrent diverticulitis at a median duration of 17 (1-48) months from the index admission. The freedom from failure (recurrent attacks or definitive surgery (right hemicolectomy)) at 60 months was 92.0 % (95 % Confidence interval 86.1 %-97.9 %). Conclusion: Right-sided diverticulitis is commonly encountered in the Asian population and often gets misdiagnosed as acute appendicitis. If successfully managed conservatively, the long-term outcome is excellent. abstract_id: PUBMED:14570372 Appendicitis, 2002: relationship between payors and outcome. As the status of health-care insurance changes in the United States, studies have indicated that uninsured patients are less likely to receive timely and quality health care. Previous studies of appendicitis have shown that insurance status may effect the stage of presentation and outcome. However, these studies were based on databases lacking information regarding stage of presentation, timeliness of diagnosis and treatment, and character of hospitalization (length of stay, duration of antibiotic therapy, hospital costs). We accomplished a case control study, retrospective analysis of 975 patients treated for acute appendicitis between January 1996 and December 1999. Times to operation, number of preoperative outpatient visits, number of studies, severity of presentation, length of antibiotics and hospital stay, and hospital costs were analyzed [analysis of variance (ANOVA) techniques, P &lt; 0.05 significant]. We sought answers to the following: (1) Did insurance status affect the timeliness of diagnosis and treatment? (2) Did insurance status affect the stage of presentation? (3) Did insurance status affect hospitalization, as measured by length of stay, duration of antibiotic therapy, and hospital costs? (4) Did age affect outcome independent of insurance status? There were no correlations between insurance status and timeliness of diagnosis or severity of presentation. Length of stay and hospital costs were also not different between insurance categories. Pediatric patients (&lt; 12 years old) and the elderly (&gt; 65 years old) presented with more advanced appendicitis, independent of insurance category. In contrast to previously published data, the treatment of acute appendicitis is not affected by insurance coverage in the sample community. Age and timeliness of presentation were the only factors correlating to outcomes. abstract_id: PUBMED:26448784 Etiology, treatment outcome and prognostic factors among patients with secondary peritonitis at Bugando Medical Centre, Mwanza, Tanzania. Introduction: Secondary peritonitis due to perforation of the gastrointestinal tract is one of the most common surgical emergencies all over the world and is associated with significantly morbidity and mortality. Previous studies conducted at Bugando Medical Centre (BMC) were retrospective and each was focused on single etiology; therefore there was an obvious need to evaluate the etiologies, treatment outcome and their prognostic factors altogether. Methods: This was a descriptive cross-sectional study involving patients with secondary peritonitis admitted at BMC from May 2014 to April 2015. Sociodemographic and clinical characteristics among consented patients were collected using questionnaires. Peritoneal aspirate, biopsy and blood were collected perioperatively and processed using standard operating procedures. Analysis was done using STATA version 11 software. Results: The study enrolled 97 patients with the female to male ratio of 1:1.8 and approximately 41.2 % (40/97) were in their third and fourth decades of life. Only 3 (3.09 %) patients arrived to the hospital within 24 hours of onset of illness, 26 (26.80 %) patients presented with shock and HIV seropositivity among all patients was 13.40 % (13/97). The common etiologies of secondary peritonitis were perforated appendicitis 23 (23.71 %), peptic ulcer disease 18 (18.56 %), ischemia 18 (18.56 %) and typhoidal perforation 15 (15.46 %). Of the 97 patients, 35 (36.08 %) had complications and 15 (15.46 %) died. Presence of premorbid illness and post-operative complication were found to be associated with death (p values = 0.004 and &lt;0.001 respectively). Conclusions: The most common etiologies of secondary peritonitis at BMC are perforated appendicitis, peptic ulcer disease, ischemia and typhoidal perforation. Premorbid illness and postoperative complications in this setting are associated with death and as the matter of fact proper screening on admission should be done to identify patients with premorbid illness and confer prompt management. abstract_id: PUBMED:23905565 The influence of age, duration of symptoms and duration of operation on outcome after appendicitis in children. Introduction: The aim of the study was to evaluate the impact of any perioperative parameters on the outcome of treatment for appendicitis. Material And Methods: The study included 108 consecutive children with appendicitis. Data were retrieved from files using the codes for appendectomy of the Nordic Classification of Surgical Procedures and the diagnosis codes for appendicitis from the International Classification of Dis-eases (ICD) 10. A non-satisfactory outcome was defined as a post-operative length of stay in hospital ≥ 5 days and/or readmission due to complications. Results: Significantly more patients with a non-satisfactory outcome had complicated appendicitis (73%) compared with those with a satisfactory outcome (25%). A total of 78% of children &lt; 6 years and 44% of children &gt; 10 years had a non-satisfactory outcome. The duration of symptoms before operation was mean 2.8 days for children with a non-satisfactory outcome and 2.7 days for those with complicated appendicitis compared with 1.5 days for children with a satisfactory outcome and 1.6 days for those with simple appendicitis. The median difference was two days in the younger patients. Surgical time was significantly shorter in the group of patients with a satisfactory outcome and in those with simple appendicitis than in the other groups. Conclusion: Complicated appendicitis and a non-satisfactory outcome in children after operation are associated with a long preoperative duration of symptoms, young age and long surgical time. A cut-off age has not been established, but young children might benefit from direct referral and access to hospitalization in a regional or tertiary paediatric surgical centre. Funding: not relevant. Trial Registration: not relevant. Answer: The study on appendicitis in northern aboriginal children (NAC) sought to compare outcomes between NAC requiring transfer for appendicitis treatment and those who presented locally. The hypothesis was that NAC with appendicitis experienced higher rates of perforation and increased length of stay (LOS) due to delays in definitive treatment. The retrospective chart review included 210 appendectomies, with 68 being NAC and 142 local children. The findings showed that NAC had significantly higher perforation rates (44% vs 28%; P = .02), higher white blood cell count (17.9 vs 16.0; P = .02), and longer times to operation after arrival (10.3 vs 7.0 hours; P = .0002). Despite these differences, the LOS was similar between the two groups, and postoperative complications were also similar. Notably, pretransfer antibiotics did not affect the rate of rupture but may impact complications. The study concluded that NAC with appendicitis experience longer transfer times and higher perforation rates than local children, without a difference in LOS or complications. The use of pretransfer antibiotics is endorsed if a delay in transfer is anticipated, although they do not reduce perforation rates (PUBMED:20438920). This finding is consistent with the broader literature on appendicitis treatment, which suggests that short delays in surgical intervention for acute appendicitis are generally well tolerated and that outcome variables are independent of the duration of symptoms or time to surgical intervention (PUBMED:18367138). Additionally, nonoperative treatment for acute simple appendicitis in children has been found to have similar outcomes to immediate appendectomy, with the potential to avoid appendectomy in three out of four children (PUBMED:29395151). However, it is important to note that the specific context of NAC and the associated transfer times may present unique challenges not reflected in the general population.
Instruction: Is early correction of congenital ventricular septal defect a better option in a developing country? Abstracts: abstract_id: PUBMED:20419985 Is early correction of congenital ventricular septal defect a better option in a developing country? Objective: To assess and evaluate the effectiveness of early ventricular septal defect (VSD) repair in a developing country by comparing the outcomes in two age groups. Methods: A total of 39 patients underwent VSD repair at a tertiary care hospital in Karachi over a period of 2.5 years. Patients were divided into Group 1 (&lt;2 years) and Group 2 (2-18 yrs). Perioperative and postoperative variables were reviewed retrospectively for each patient and compared with the development of complications and the effect on postoperative recovery times. The data was analyzed using Mann Whitney test for continuous variables and Fischer's exact test for categorical variables. Results: Mean age at surgery was 10.1 +/- 7.0 months and 108.5 +/- 54.0 months for both groups, respectively. Two patients died. Procedure time was not affected by decreasing age (p = 0.622) but Cardio pulmonary Bypass duration (p = 0.040) and Aortic Cross Clamp time (p = 0.063) were significantly affected. No associated significance was obtained when hospital stay (p = 0.085) was observed. Increased duration of ventilation (p = 0.000) and length of ICU stay (p = 0.003) were highly significant for the younger age group. Presence of residual VSD (p = 0.025) was also significantly affected by lower age. There was no significant difference in the number of patients with complications (p = 1.000) among the 2 groups. Conclusion: Age does not increase morbidity rates after VSD repair although postoperative recovery times are increased with decreasing age. Hence, there is no justification in delaying VSD repair. abstract_id: PUBMED:12522788 Parental consanguinity and congenital heart malformations in a developing country. The association between isolated congenital heart defects and consanguinity was examined in 759 Lebanese patients with different types of congenital heart malformations. The subjects were patients of the Children's Cardiac Registry Center (CCRC) at the American University of Beirut Medical Center. The proportion of first-cousin marriages among cardiac subjects was compared to that of the National Collaborative Perinatal Neonatal Network (NCPNN), after adjusting for the subjects' geographic distribution, and to the highest proportion reported by NCPNN. In general, the proportion of overall parental consanguinity and first-cousin matings among CCRC subjects (34.7 and 20.2%, respectively) were significantly higher than the highest proportion of first-cousin marriages reported by NCPNN (Bekaa subjects; 13.2%) (P &lt; 0.0001). Comparison with the NCPNN-adjusted first-cousin mating proportion revealed a significantly increased rate of consanguinity in all categories of cardiac malformations except great vessel and coronary artery lesions (P &lt; 0.05). The following lesions were significantly associated with increased parental consanguinity: aortic anomalies (aortic insufficiency, aortic stenosis, bicuspid aortic valve), atrial septal defect, double-outlet right ventricle, pulmonary atresia, patent ductus arteriosus, pulmonic stenosis, tetralogy of Fallot, and ventricular septal defect (P &lt; 0.05). Higher maternal education was the only variable that was negatively correlated with parental consanguinity (P = 0.037). Our study emphasizes the role of homozygous recessive genes in the causation of different types of isolated congenital heart malformations, known to follow a multifactorial pattern of inheritance. There is an urgent need for educating the public on the deleterious effects of inbreeding, especially in developing countries with high overall consanguinity rates and limited financial resources. abstract_id: PUBMED:31403007 Clinical Screening for Congenital Heart Disease in Newborns at a Tertiary Care Hospital of a Developing Country. Objective: To screen all newborns admitted to a tertiary care hospital to rule out congenital heart disease before discharge and to find out the utility of pulse oximetry to detect congenital heart disease. Methodology: This prospective study was done at Aga Khan University Hospital from January 2014 to December 2014 in 1,650 newborns over a period of 12 months. Pulse oximetry and clinical examination were done. Persistent oxygen saturation less than 95% was considered as positive pulse oximetry. Newborns who had positive pulse oximetry or abnormal clinical examinations findings were subjected to echocardiography. Results: Pulse oximetry was performed on 1,650 newborns, out of which 25 (1.5%) had congenital heart disease. Positive pulse oximetry cases were 16 (0.97%), out of which 10 had only positive pulse oximetry (negative clinical examination). Positive clinical examination cases were 45 (2.7%), out of which 39 cases had only positive clinical examinations (negative pulse oximetry). Six newborns had both positive pulse oximetry and positive clinical examination. Out of the 25 diagnosed cases of congenital heart disease, ventricular septal defect (VSD) was the most common congenital heart disease, followed by patent ductus arteriosus (PDA). The sensitivity, specificity, positive predictive value, and negative predictive value of pulse oximetry were 32%, 99.5%, 50%, and 98.9% respectively. Conclusion: In the community setting of a developing country, a combination of pulse oximetry screening and clinical examination are better at detecting congenital heart defects than either test alone. abstract_id: PUBMED:27589674 Congenital heart disease at Laos Children's Hospital: Two year experience. Background: Management of congenital heart disease (CHD) in developing countries is challenging because of limited access to health-care facilities and socioeconomic limitations. The aim of this study was to describe the recent experience with CHD at Children's Hospital, Vientiane, Laos, the only pediatric referral hospital in the country. Methods: From July 2013 to November 2015, 1009 echocardiograms were carried out in 797 individuals who visited Children's Hospital, in whom CHD was identified in 213. Demographic characteristics, echocardiography and age at first CHD diagnosis on echocardiogram were compared by residential area. Results: Among the 213 patients, the most frequent anomalies, in descending order, were ventricular septal defect, atrial septal defect, patent ductus arteriosus, and tetralogy of Fallot. Moderate or severe CHD requiring surgery or intervention was detected in 137 patients; median age at initial diagnosis was 6.0 months (IQR, 1.5-29.8). Among those with moderate or severe CHD, 89 patients were from rural areas and 48 from the capital area. The proportion of patients from rural areas older than 1 and 3 years at diagnosis was 46.1% and 32.6%, respectively. In contrast, patients from the capital area were diagnosed at a significantly earlier age: 16.7% at age &gt;1 year and 6.2% at age &gt;3 years (both P &lt; 0.001 vs rural areas). Conclusions: Diagnosis of CHD requiring surgery is delayed in Laos, especially in rural areas. Better education and training of local medical staff, and greater access to medical advice are required to improve CHD care in Laos. abstract_id: PUBMED:34053740 Surgery for severe congenital heart diseases in children from developing nations. Background: Children with severe congenital heart disease (CHD) are rarely treated in developing countries and have very little to no chance to survive in their local environment. Mécénat Chirurgie Cardiaque (MCC) flies to France children with CHD from developing countries. This report focuses on the early, mid, and late outcomes of 531 children with severe CHD sent to MCC for surgery from 1996 to 2019. Methods: The inclusion criteria were based on diagnosis and not on procedure. MCC is present in 66 countries and has developed a robust staff, including 12 permanent employees and 700 volunteers, with 350 host families based in France, 120 local correspondents, and 100 local physicians. Since 1996, MCC has organized a basic training of local pediatric cardiologists yearly, offering a free 1-month training course. Over time, MCC could count on a pool of doctors trained in basic pediatric cardiology. Flights were secured by the Aviation Sans Frontieres Foundation. Nine French centers performed the surgeries. A robust follow-up was conducted in all the nations where MCC operates. Results: The most frequent pathologies were single ventricle (n = 126), double-outlet right ventricle (n = 116), pulmonary atresia with ventricular septal defect (n = 68), transposition of the great arteries with ventricular septal defect and transposition of the great arteries with intact ventricular septum (n = 61), arterial trunk (n = 39), transposition of the great arteries with ventricular septal defect and left ventricle outflow tract obstruction (n = 35), complete atrioventricular septal defect (n = 18), congenitally corrected transposition of the great arteries (n = 16), and so on. The median age was 5.4 years (range, 1 month-26 years). The mean perioperative mortality was 5.5% (29 out of 531) (95% confidence limit, 3.5%-7.4%). The follow-up was 91.3%, with a mean follow-up of 5.1 years. The global actuarial survival at 5, 10, and 15 years was, respectively, 85%, 83%, and 74%. There was a significant higher late mortality for patients surviving only with a Blalock-Taussig shunt (P = .001). Conclusions: Operating on 531 children with severe CHD from developing nations was achieved with satisfactory early and long-term results. Children with severe CHD are rarely operated on in developing nations. Programs like MCC's offer a viable option to save these children born with severe CHD. abstract_id: PUBMED:11552275 Early and late results of total correction of congenital cardiac anomalies in infancy. Objective: We evaluated long-term results of surgical correction of congenital cardiac anomalies in infancy. Methods: We reviewed cases of 856 patients who underwent complete correction of major cardiac anomalies in the first year of life during last 24 years at our institution, and analyzed results. Surgery involved ventricular septal defect (n = 453), tetralogy of Fallot (n = 92), atrioventricular canal defect (n = 93), and complete transposition of the great arteries (n = 218). Results: Operative mortality was 2.2% in ventricular septal defect, 0% in tetralogy of Fallot, 8.6% in atrioventricular canal defect, and 4.1% in complete transposition of the great arteries. Freedom from reoperation at 20 years was 96.5 +/- 2.0% and actuarial survival was 94.2 +/- 1.3% in ventricular septal defect. Freedom from reoperation at 15 years was 90.5 +/- 3.9% in tetralogy of Fallot and 86.6 +/- 4.4% in atrioventricular canal defect. Actuarial survival at 15 years was 97.8 +/- 1.6% in tetralogy of Fallot, 85.7 +/- 3.7% in atrioventricular canal defect, and 89.9 +/- 2.2% in complete transposition of the great arteries. Actuarial survival in complete transposition of the great arteries was significantly better in arterial than in atrial switch operations. Conclusions: Total correction of ventricular septal defect, tetralogy of Fallot, atrioventricular canal defect, and complete transposition of the great arteries in infancy was conducted safely, but the incidence of reoperation in late follow-up must be reduced. abstract_id: PUBMED:31222391 Ten-year trend in prevalence and outcome of Down syndrome with congenital heart disease in a middle-income country. Limited data are available on the survival of patients with Down syndrome and congenital heart disease (CHD) from middle-income countries. This retrospective cohort study was performed to determine the trends in the prevalence and survival of such patients born from January 2006 to December 2015 in Malaysia. Among 754 patients with Down syndrome, 414 (55%) had CHD, and no significant trend was observed during the 10 years. Of these 414 patients, 30% had lesions that closed spontaneously, 35% underwent surgery/intervention, 9% died before surgery/intervention, and 10% were treated with comfort care. The overall mortality rate was 23%, the median age at death was 7.6 months, and no significant changes occurred over time. The early and late post-surgery/intervention mortality rates were 0.7% and 9.0%, respectively. Most deaths were of non-cardiac causes. The overall 1-, 5-, and 10-year survival rates were 85.5%, 74.6%, and 72.9%, respectively. Patients with severe lesions, persistent pulmonary hypertension of the newborn, atrioventricular septal defect, and pulmonary hypertension had low survival at 1 year of age.Conclusion: The prevalence of CHD in patients with Down syndrome is similar between Malaysia and high-income countries. The lower survival rate is attributed to limited expertise and resources which limit timely surgery. What is Known: • The survival of patients with Down syndrome with congenital heart disease (CHD) has improved in high-income countries. However, little is known about the survival of patients with Down syndrome with CHD from middle-income countries. • In the Caucasian population, atrioventricular septal defect is the most common type of CHD associated with Down syndrome. What is New: • In middle-income countries, the prevalence of CHD is the same as in high-income countries, but with a lower survival rate. • In the Asian population, ventricular septal defect is the most common type of CHD in patients with Down syndrome. abstract_id: PUBMED:50046 The total correction of congenital heart disease in infants. If radical correction of congenital heart disease can be performed with a low risk in the first year of life the advantages are obvious. Total correction avoids the latent risks of the underlying lesion and the risks of palliative surgery, and relieves the parents and family of the psychological pressure of a major illness. Palliative surgery, although offering survival, may include the long term problems of the palliative operation itself, and the possibility that cases may be lost to further treatment or follow-up examination after a successful palliative operation. For these reasons, to be desirable, a palliative operation should offer a survival chance that is at least 10% better than the corrective procedure in that patient at that time. That is, any corrective procedure with a mortality rate less than 10% is to be perferred at any age to a palliative operation. An accurate assessment of risk demands a complete diagnostic study, and a knowledge of the natural history of that form of congenital heart disease. The problems of infants with congenital heart disease are not primarily caused by age of size. The main problem is that of natural selection. abstract_id: PUBMED:15119701 Patterns of congenital heart disease in unoperated adults: a 20-year experience in a developing country. Background: The number of patients with congenital cardiac disease reaching adulthood is increasing steadily. Many adults with such disease face both medical and surgical difficulties. Hypothesis: This retrospective study was undertaken to assess the frequency and outcome of congenital heart disease (CHD) in unoperated adults. Methods: The charts of all patients with unoperated CHD, who were admitted to a tertiary care center in Beirut, Lebanon, between 1980 and 2000 were reviewed. Of these, 206 patients (52% men, age at admittance 18-71 years [32.8 +/- 13.3 years]) with a diagnosis of CHD were evaluated. Atrial septal defect (ASD) was the most common cardiac malformation with a relative frequency of 53%, followed by ventricular septal defects (11%), tetralogy of Fallot (11%), aortic anomalies (7%), pulmonary stenosis (6%), and Ebstein anomaly (4%). Most patients were symptomatic upon presentation, with dyspnea on exertion being the most common presenting symptom. Twenty-seven patients (13%) had cyanotic CHD. Of 179 acyanotic patients, 113 (63%), and 17 of 27 cyanotic patients (63%) underwent surgical intervention. In-hospital surgical complications for the acyanotic group included cerebrovascular accident (2%) and heart block (1%). Total surgical mortality was 4 of 130 (3%). One patient with tetralogy of Fallot presented with endocarditis and died. Conclusion: Atrial septal defect is the most common defect reported in our experience; however, it occurs more frequently than that reported in the literature. Although most patients were symptomatic on presentation, their functional status was stable. Accordingly, their hospital course, whether managed medically or surgically, held a relatively low complication rate. This could be attributed to the uncomplicated nature of pathologies in our series. The surgical mortality and in-hospital complications were slighter higher than those reported for similar lesions if repaired during childhood. This study reflects the relative frequency of various cardiac malformations in selected patients with "grown up" congenital heart disease (GUCH) and their natural survival pattern. abstract_id: PUBMED:14696898 Bidirectional cavopulmonary shunt for cyanotic heart disease: surgical experience from a developing country. Objective: The importance of bidirectional cavopulmonary anastomosis for palliation of complex cyanotic congenital heart disease is widely recognized. This study was conducted to highlight our surgical experience with this procedure in a developing country. Methods: A retrospective study was conducted using medical records at the Aga Khan University Hospital, Karachi, Pakistan. Clinical findings at presentation, anatomical defects seen on transthoracic echocardiography, pre-operative McGoon index, cardiopulmonary bypass time, use of cardioplegia, post-procedure oxygen saturations and complications were evaluated. Results: A total of 8 patients underwent bidirectional cavopulmonary anastomosis. There were 6 males (75%) and 2 females (25%). Ages and weights at operation averaged 5.7 +/- 3.7 years (range 2-14 years) and 18.6 +/- 10.4 kg (range 8.5-35.5 kg) respectively. The most common symptoms were the presence of cyanosis in all (100%) patients followed by recurrent respiratory tract infections in 3 (37.5%) patients. Transthoracic echocardiography revealed 6 (75%) patients with atrial septal defects, 5 (62.5%) with tricuspid atresia, 3 (37.5%) with ventricular septal defects, 3 (37.5%) with malposition of great vessels, 2 (25%) with pulmonary stenosis and 2 (25%) with double inlet left ventricles. The mean pre-procedure McGoon index was 2.1 +/- 0.5 (range 1.37-2.80). All patients received cardioplegia. Cardiopulmonary bypass was used in all patients for a mean time of 154.1 +/- 83.6 minutes (range 60-298 minutes). All patients were ventilated for a mean period of 1.5 +/- 0.7 days (range 1-3 days). The ICU stay was 3.0 +/- 0.6 days (range 2-5 days) with a total hospital stay of 9.8 +/- 3.8 days (range 7-18 days). The mean post-procedure oxygen saturation was 82.6 +/- 3.5% (range 76-86%). The most common post-operative complication was supraventricular arrhythmia in 2 (25.0%) patients. There were no intra-operative or early (within 7 days of procedure) deaths. One patient developed pulmonary artery hypertension and died 23 months later due to cardiac arrest. Conclusion: Patients tolerated the procedure well. After a mean follow-up of 10 months, 6 patients were assessed to be in New York Heart Association (NYHA) functional class I and one patient in NYHA class II. Clinical and post-procedural data gathered from our experience confirms the safety of bidirectional cavopulmonary anastomosis. Answer: Based on the information provided in the abstracts, early correction of congenital ventricular septal defect (VSD) appears to be a viable option in a developing country. A study conducted at a tertiary care hospital in Karachi compared the outcomes of VSD repair in two age groups: Group 1 (<2 years) and Group 2 (2-18 years). The results indicated that while age does not increase morbidity rates after VSD repair, postoperative recovery times are increased with decreasing age. However, there was no justification found in delaying VSD repair, suggesting that early correction is a reasonable approach (PUBMED:20419985). Additionally, the prevalence of congenital heart disease (CHD), including VSD, is significant in developing countries, and delayed diagnosis and treatment can be common, especially in rural areas (PUBMED:27589674). Early detection and intervention are crucial, as indicated by a study that found a combination of pulse oximetry screening and clinical examination to be effective in detecting CHD before discharge from a tertiary care hospital (PUBMED:31403007). Moreover, a report on children with severe CHD from developing nations treated by Mécénat Chirurgie Cardiaque (MCC) in France showed satisfactory early and long-term results, emphasizing the importance of timely surgical intervention for severe CHD, including VSD (PUBMED:34053740). In summary, while there are challenges associated with managing CHD in developing countries, such as limited access to healthcare facilities and socioeconomic limitations, the evidence suggests that early correction of VSD is beneficial and does not increase morbidity rates. Delaying repair may lead to increased postoperative recovery times, and early intervention can potentially improve outcomes for patients with VSD in developing countries.
Instruction: Impact of coronary heart disease on health functioning in an aging population: are there differences according to socioeconomic position? Abstracts: abstract_id: PUBMED:18256343 Impact of coronary heart disease on health functioning in an aging population: are there differences according to socioeconomic position? Objective: To examine differences in the effect of coronary heart disease (CHD) on health functioning according to socioeconomic position. Research on social inequality in health has tended to concentrate on differences in disease prevalence and mortality rather than on the impact of disease on functioning. As outcomes improve for those with CHD, it is important to know how their health is affected in the long term. Methods: The analysis uses questionnaire data from phase 3 (1991-1993) to phase 7 (2003-2005) of the Whitehall II Study of civil servants (n = 8292). Differences between those in higher and lower employment grades in the relationship between CHD and physical and mental health functioning were measured according to the Short Form 36 General Health Survey (SF-36). A growth curve model of change in SF-36 physical and mental health from five repeated-measures over the 12-year period was then estimated. Results: The differences in SF-36 health between those with and without preexisting CHD depended on employment grade. For those with CHD, physical health was initially poorer in lower grades than in higher ones; this difference persisted throughout. The mental health of respondents with CHD in the lowest grades deteriorated over time whereas for members of the higher grades, the prevailing trend was for improving mental health. Conclusions: CHD has a more detrimental effect on physical and mental health functioning among those in more disadvantaged socioeconomic positions. abstract_id: PUBMED:15840615 The role of cognitive ability (intelligence) in explaining the association between socioeconomic position and health: evidence from the Whitehall II prospective cohort study. Associations among cognitive ability, socioeconomic position, and health have been interpreted to imply that cognitive ability could explain social inequalities in health. The authors test this hypothesis by examining three questions: Is cognitive ability related to health? To what extent does it explain social inequalities in health? Do measures of socioeconomic position and cognitive ability have independent associations with health? Relative indices of inequality were used to estimate associations, using data from the Whitehall II study (baseline, 1985-1988), a British prospective cohort study (4,158 men and 1,680 women). Cognitive ability was significantly related to coronary heart disease, physical functioning, and self-rated health in both sexes and additionally to mental functioning in men. It explained some of the relation between socioeconomic position and health: 17% for coronary heart disease, 33% for physical functioning, 12% for mental functioning, and 39% for self-rated health. In analysis simultaneously adjusted for all measures of socioeconomic position, cognitive ability retained an independent association only with physical functioning in women. These results suggest that, although cognitive ability is related to health, it does not explain social inequalities in health. abstract_id: PUBMED:25935617 Family history of premature myocardial infarction, life course socioeconomic position and coronary heart disease mortality--A Cohort of Norway (CONOR) study. Background/objectives: To investigate self-reported family history (FH) of premature myocardial infarction (MI) in first-degree relatives as a risk factor for coronary heart disease (CHD) mortality, and assess whether any observed effect could be explained by current or life course socioeconomic position. Methods: 130,066 participants from Cohort of Norway were examined during 1994-2003. A subgroup (n=84,631) had additional life course socioeconomic data. Using Cox proportional hazard analyses, we calculated hazard ratios (HR) for CHD mortality, assessed by linkages to the Norwegian Cause of Death Registry through 2009. For subgroup analyses, we created an index of life course socioeconomic position, and assessed its role as a potential confounder in the association of FH with CHD. Results: For men, MI in parents and siblings were both a significant risk factor for CHD mortality after adjusting for established risk factors and current socioeconomic conditions; the highest risk was with MI in siblings (HR: 1.44 [1.19-1.75]). For women, FH constituted significant risk after similar adjustment only for those with MI in parents plus siblings (HR: 1.78 [1.16-2.73]). Adjusting for current and life course socioeconomic conditions only marginally lowered the estimates, and those with FH did not have worse life course socioeconomic position than those without. Conclusions: FH of premature MI is an independent risk factor for CHD mortality that differs in magnitude of effect by the sex of the index person and type of familial relationship. Life course socioeconomic position has little impact on the association between FH and CHD, suggesting the effect is not confounded by this. abstract_id: PUBMED:15256527 Socioeconomic trajectories across the life course and health outcomes in midlife: evidence for the accumulation hypothesis? Background: Recent research in social epidemiology has established the importance of considering the accumulation of advantage and disadvantage across the life course when examining adult health outcomes. This paper examines (1) accumulation across trichotomous categories of socioeconomic position (SEP), and (2) accumulation in analysis stratified by adult SEP. Methods: Data are from the Whitehall II study. Each participant was categorized as having high (0), intermediate (1), or low (2) SEP at three time points in the life course, leading to 27 socioeconomic trajectories. These trajectories were summarized to yield a scale ranging from 0 (high SEP at all three time points) to 6 (low SEP at all three time points). Logistic regression was used to examine odds of incident coronary heart disease (CHD), poor mental and physical functioning, and minor psychiatric disorder. Results: There was a graded linear relationship between accumulation of socioeconomic exposure and health. Men with a score of 6 had increased odds of CHD (2.53, 95% CI: 1.3, 5.1), poor physical functioning (2.19, 95% CI: 1.4, 4.1), and poor mental functioning (2.60, 95% CI: 1.4, 4.9) compared with men with a score of 0. In women there was an accumulation effect for CHD and physical functioning. No cumulative effect of SEP on minor psychiatric disorder was observed. The effects of accumulation were weaker in analyses stratified by adult SEP, with early deprivation followed by high adult SEP particularly detrimental for CHD. Conclusions: The health effects of socioeconomic disadvantage accumulate over the life course. In addition to accumulation effects, analysis stratified by adult SEP also provided support for the critical period and the pathway model. abstract_id: PUBMED:25437893 Novel coronary heart disease risk factors at 60-64 years and life course socioeconomic position: the 1946 British birth cohort. Social disadvantage across the life course is associated with a greater risk of coronary heart disease (CHD) and with established CHD risk factors, but less is known about whether novel CHD risk factors show the same patterns. The Medical Research Council National Survey of Health and Development was used to investigate associations between occupational socioeconomic position during childhood, early adulthood and middle age and markers of inflammation (C-reactive protein, interleukin-6), endothelial function (E-selectin, tissue-plasminogen activator), adipocyte function (leptin, adiponectin) and pancreatic beta cell function (proinsulin) measured at 60-64 years. Life course models representing sensitive periods, accumulation of risk and social mobility were compared with a saturated model to ascertain the nature of the relationship between social class across the life course and each of these novel CHD risk factors. For interleukin-6 and leptin, low childhood socioeconomic position alone was associated with high risk factor levels at 60-64 years, while for C-reactive protein and proinsulin, cumulative effects of low socioeconomic position in both childhood and early adulthood were associated with higher (adverse) risk factor levels at 60-64 years. No associations were observed between socioeconomic position at any life period with either endothelial marker or adiponectin. Associations for C-reactive protein, interleukin-6, leptin and proinsulin were reduced considerably by adjustment for body mass index and, to a lesser extent, cigarette smoking. In conclusion, socioeconomic position in early life is an important determinant of several novel CHD risk factors. Body mass index may be an important mediator of these relationships. abstract_id: PUBMED:36068504 Family functioning and health-related quality of life of inpatients with coronary heart disease: a cross-sectional study in Lanzhou city, China. Background: A key outcome in coronary heart disease (CHD) is Health Related Quality of Life (HRQoL), and family functioning is important in the management of CHD. But few studies have examined both together, and little is known about them among inpatients with CHD in less developed areas of China. Therefore, this study aimed to assess the HRQoL and family functioning status of inpatients with CHD in Lanzhou from Northwest China, and identify the factors that affect their HRQoL. Methods: A cross‑sectional study was conducted in 224 CHD inpatients at one major hospital. Sociodemographic data and disease information of CHD inpatients were collected by face-to-face using a structured questionnaire and data were also obtained from patient medical records. HRQoL was measured using the Sickness Impact Profile (SIP). Family functioning was measured using the family APGAR index. Multiple binary logistic regression analysis (MBLRA) was used to explore potential risk factors associated with HRQoL, and Pearson's correlations were used to assess the relationship between family functioning and HRQoL. Results: The overall, physical and psychosocial SIP scores were 25.03 ± 8.52, 18.61 ± 9.90 and 28.08 ± 9.64, respectively. The total family APGAR score was 6.11 ± 2.45. MBLRA found older age, poorer cardiac function and more severe disease were associated with poorer HRQoL, while better family functioning, higher monthly income, and urban living were associated with better HRQoL. Family functioning was weakly to moderately correlated with total and psychosocial HRQoL. Conclusions: Older and less affluent inpatients with lower educational level, less family support and more severe CHD have poorest quality of life, and health care providers should consider interventions to support them. abstract_id: PUBMED:16452104 Does IQ explain socioeconomic inequalities in health? Evidence from a population based cohort study in the west of Scotland. Objective: To test the hypothesis that IQ is a fundamental cause of socioeconomic inequalities in health. Design: Cross sectional and prospective cohort study, in which indicators of IQ were assessed by written test and socioeconomic position by self report. Setting: West of Scotland. Participants: 1347 people (739 women) aged 56 in 1987. Main Outcome Measures: Total mortality and coronary heart disease mortality (ascertained between 1987 and 2004); respiratory function, self reported minor psychiatric morbidity, long term illness, and self perceived health (all assessed in 1988). Results: In sex adjusted analyses, indices of socioeconomic position (childhood and current social class, education, income, and area deprivation) were significantly associated with each health outcome. Thus the greatest risk of ill health and mortality was evident in the most socioeconomically disadvantaged groups, as expected. After adjustment for IQ, a marked attenuation in risk occurred for poor mental health (range of attenuation in risk ratio across the five socioeconomic indicators: 15-58%), long term illness (25-53%), poor self perceived health (41-56%), respiratory function (44-66%), coronary heart disease mortality (31-111%), and total mortality (45-131%). Despite the clear reduction in the magnitude of these effects after controlling for IQ, in half of the associations examined the risk of ill health in socioeconomically disadvantaged people was still at least twice that of advantaged people. Statistical significance was lost for only 5/25 separate socioeconomic health gradients that showed significant relations in sex adjusted analyses. Conclusions: Scores from the IQ test used here did not completely explain the socioeconomic gradients in health. However, controlling for IQ did lead to a marked reduction in the magnitude of these gradients. Further exploration of the currently scant information about IQ, socioeconomic position, and health is needed. abstract_id: PUBMED:15985645 Socioeconomic position in childhood and early adult life and risk of mortality: a prospective study of the mothers of the 1958 British birth cohort. Objectives: We sought to establish whether women's childhood socioeconomic position influenced their risk of mortality separately from the effects of adult socioeconomic position. Methods: We examined 11855 British women aged 14 to 49 years, with mortality follow-up over a 45-year period. Results: Trends according to childhood social class were observed for all-cause mortality, circulatory disease, coronary heart disease, respiratory disease, chronic obstructive pulmonary disease, stroke, lung cancer, and stomach cancer, with higher death rates among members of unskilled manual groups. Associations attenuated after adjustment for adult social class, smoking, and body mass index. No trend was seen for breast cancer or accidents and violence. Adverse social conditions in both childhood and adulthood were associated with higher death rates from coronary heart disease and respiratory disease. Stomach cancer was influenced primarily by childhood conditions and lung cancer by factors in adult life. Conclusions: Socioeconomic position in childhood was associated with adult mortality in a large sample of British women. abstract_id: PUBMED:20008161 Investigating the spatial variability in incidence of coronary heart disease in the Gazel cohort: the impact of area socioeconomic position and mediating role of risk factors. Study Objective: The aim of the study was to improve understanding of the relationships between contextual socioeconomic characteristics and coronary heart disease (CHD) incidence in France. Several authors have suggested that CHD risk factors (diabetes, hypertension, cholesterol, overweight, tobacco consumption) may partly mediate associations between socioeconomic environmental variables and CHD. Studies have assessed the overall mediating role of CHD risk factors, but have never investigated the specific mediating role of each risk factor, not allowing their specific contribution to the area socioeconomic position-CHD association to be disentangled. Design: After assessing geographical variations in CHD incidence and socioeconomic environmental effects on CHD using a multilevel Cox model, the extent to which this contextual effect was mediated by each of the CHD risk factors was assessed. Participants: Data of the French GAZEL cohort (n=19,808) were used. Main Results: After adjustment for several individual socioeconomic indicators, it was found, in men from highly urbanised environments, that CHD incidence increased with decreasing socioeconomic position of the residential environment. After individual-level adjustment, a higher risk of obesity, smoking and cholesterol was observed in the most deprived residential environments. When risk factors were introduced into the model, a modest decrease was observed in the magnitude of the association between the socioeconomic contextual variable and CHD. Risk factors that contributed most to the decrease of the association were smoking and cholesterol. Conclusions: Classic risk factors, although some of them more than others, mediated a modest part of the association between area socioeconomic position and CHD. abstract_id: PUBMED:16100318 Adverse socioeconomic position across the lifecourse increases coronary heart disease risk cumulatively: findings from the British women's heart and health study. Objective: To examine the associations of childhood and adult measurements of socioeconomic position with coronary heart disease (CHD) risk. Methods: Cross sectional and prospective analysis of a cohort of 4286 British women who were aged 60-79 years at baseline. Among these women there were 694 prevalent cases of CHD and 182 new incident cases among 13 217 person years of follow up of women who were free of CHD at baseline. Results: All measurements of socioeconomic position were associated with increased prevalent and incident CHD in simple age adjusted models. There was a cumulative effect, on prevalent and incident CHD, of socioeconomic position across the lifecourse. This effect was not fully explained by adult CHD risk factors. The adjusted odds ratio of prevalent CHD for each additional adverse (out of 10) lifecourse socioeconomic indicator was 1.11 (95% confidence interval: 1.06, 1.16). The magnitude of the effect of lifecourse socioeconomic position was the same in women who were lifelong non-smokers as in those who had been or were smokers. Conclusion: Adverse socioeconomic position across the lifecourse increases CHD risk cumulatively and this effect is not fully explained by adult risk factors. Specifically in this cohort of women cigarette smoking does not seem to explain the association between adverse lifecourse socioeconomic position and CHD risk. Answer: The impact of coronary heart disease (CHD) on health functioning in an aging population does indeed differ according to socioeconomic position. Research has shown that CHD has a more detrimental effect on both physical and mental health functioning among those in more disadvantaged socioeconomic positions (PUBMED:18256343). For individuals with CHD, physical health was initially poorer in lower employment grades compared to higher ones, and this difference persisted over time. Additionally, the mental health of respondents with CHD in the lowest grades deteriorated over time, whereas for those in higher grades, mental health tended to improve (PUBMED:18256343). Moreover, cognitive ability, which is related to health, does not fully explain social inequalities in health. While cognitive ability is significantly related to CHD and physical functioning, and it explains some of the relationship between socioeconomic position and health, it does not completely account for the disparities observed (PUBMED:15840615). This suggests that other factors associated with socioeconomic position also play a role in the impact of CHD on health functioning. Furthermore, family history of premature myocardial infarction (MI) is an independent risk factor for CHD mortality, and life course socioeconomic position has little impact on this association, indicating that the effect is not confounded by socioeconomic factors (PUBMED:25935617). This suggests that the influence of socioeconomic position on CHD outcomes is complex and may involve multiple pathways. The accumulation of socioeconomic disadvantage over the life course also has a significant impact on health outcomes, including CHD. Individuals with a history of low socioeconomic position at various life stages have increased odds of CHD and poorer physical and mental functioning (PUBMED:15256527). Additionally, early life socioeconomic position is an important determinant of several novel CHD risk factors, and body mass index may be an important mediator of these relationships (PUBMED:25437893). In summary, the impact of CHD on health functioning in an aging population is influenced by socioeconomic position, with those in lower socioeconomic positions experiencing more adverse effects on their physical and mental health. This relationship is not fully explained by cognitive ability or other individual risk factors, suggesting that broader social determinants of health play a significant role.
Instruction: Is adrenal tumor size related to evolution time or does it represent a biological difference? Abstracts: abstract_id: PUBMED:18357352 Is adrenal tumor size related to evolution time or does it represent a biological difference? Background: Adrenal tumor (AT) malignancy has been related to tumor size. Since laparoscopic surgery is being used, smaller adrenal tumors are being excised. Aim: To evaluate eventual clinical and histological differences between adrenal tumors smaller than 4 cm. and those larger than 6 cm. Patients And Methods: Retrospective review of pathological reports and clinical records of patients operated for adrenal tumors, dividing them in two groups. Group 1 had 29 patients aged 52 +/- 13 years with AT &lt; 4 cm operated during the period 2000-2005, and Group 2 was formed by 52 patients aged 46 +/-18 years with AT &gt;6 cm operated between 1984-2005- Tumors between 4 and 6 cm were not included in the study to establish clear cut differences between groups. Results: Tumors were functional in 40 and 41% of cases in groups 1 and 2 respectively. Fifty percent of functional tumors of group 1 were pheochromocytomas and the rest secreted aldosterone. In group 2, 66% of tumors were pheochromocytomas and no aldosterone secreting tumors were found. Fifty two and eight percent of tumors in Groups 1 and 2 were adenomas, respectively (p &lt;0.001). Nineteen tumors of group 2 were malignant, compared with one of group 1 (p &lt;0.001). Conclusions: The tumor size of adrenal cortical tumors may represent biological differences, suggesting two different tumor populations. At time of diagnosis adrenal carcinomas are almost always larger than 6 cm. abstract_id: PUBMED:32968553 Robot Assisted Laparoscopic Adrenalectomy: Does Size Matter? Objective: Open adrenalectomy (OA) is considered to be the standard care for large adrenal tumors. Minimally invasive surgery (MIS) using laparoscopic technique is considered for many patients in the modern era. Robot assisted laparoscopic adrenalectomy (RALA) can be an extremely useful tool which will negate the disadvantage of laparoscopic method. The aim of the present study is to determine whether adrenal tumor size and laterality have an impact on patients undergoing RALA with respect to perioperative and postoperative outcomes. Methods: During the study period, 38 patients who underwent RALA in a tertiary care center were considered for retrospectively analysis. The study populations were subdivided into distinctive groups based on the tumor size (&lt;5 cm and ≥5 cm, &lt;8 cm and ≥8 cm), and side (right and left side). For all the subgroups, perioperative and postoperative outcomes were analyzed. Perioperative and postoperative outcomes were assessed between patient groups, group a) &lt;5 cm and ≥5 cm tumor, group b) &lt;8 cm and ≥8 cm, and group c) laterality (right vs left). Results: None of the patients showed any differences. In the current study, the conversion rate, readmission, and mortality were not observed. No major complications were noted. Conclusion: RALA appears to be an extremely viable alternative to MIS using laparoscopic technique. The operative time, console time, blood loss, complication rates, and stay were extremely minimal irrespective of the size or laterality of the adrenal tumor. abstract_id: PUBMED:25048685 The size of the primary tumor and age at initial diagnosis are independent predictors of the metastatic behavior and survival of patients with SDHB-related pheochromocytoma and paraganglioma: a retrospective cohort study. Background: Succinate dehydrogenase subunit B (SDHB) mutations are associated with aggressive pheochromocytoma (PHEO)/paraganglioma (PGL) behavior, often resulting in metastatic disease and fatal outcomes. These tumors are often larger, extra-adrenal, and contain lower catecholamine concentrations than other hereditary PHEOs/PGLs. This study evaluated the size and age at diagnosis of primary SDHB-related PHEOs/PGLs as independent predictors of their metastatic behavior and outcome (survival). Methods: One hundred six patients with SDHB mutation-related PHEO/PGL were included in this retrospective study. The recorded largest diameters, locations, and patient ages at initial diagnosis of SDHB-related primary tumors were analyzed in the context of time to metastasis and patient survival. Results: First, the development of metastatic disease in patients with primary tumors ≥4.5 cm was significantly earlier than in patients with smaller tumors (P = 0.003). Second, patients with primary tumors larger than 5.5 cm also had worse overall survival than patients with smaller tumors (P = 0.008). Third, age at initial diagnosis was found to be an independent predictor of patient survival (PHEOs: P = 0.041; PGLs: P &lt; 0.001). Fourth, we did not observe a significant difference in survival based on the specific SDHB mutations or patient sex. Conclusion: Receiver operating characteristic curves established 4.5 cm as the best value to dichotomize the primary SDHB-related PHEO/PGL in order to evaluate the development of metastatic disease and 5.5 cm as the best value for survival prediction. Subsequently, the size of the primary tumor was found as an age-independent predictor of patient survival and metastases development in PGL. In both PHEO and PGL, age at diagnosis was found to be a size-independent predictor of patient survival. No significant difference was found in metastases development or patient survival between males and females or among specific SDHB mutations. This data further extends and supports previous recommendations that carriers with SDHB mutations must undergo early and regular evaluations to detect PHEO/PGL in order to achieve the best clinical outcome. abstract_id: PUBMED:37715982 Correlation between the size of pheochromocytoma and the level of metanephrines. Objective. Pheochromocytomas (PHEO) and paraganglioma (PGLs) are rare neuroendocrine catecholamine-producing tumors that arise from the chromaffin cells of either the adrenal medulla or extra-adrenal paraganglionic tissues. Despite the recent advances in imaging technologies, biochemical evidence of excessive catecholamine production by the tumor is considered the most important test for the diagnosis of these tumors. The aim of the present study is to investigate the role of the catecholamine metabolites (normetanephrine and metanephrine) levels in the diagnosis of PHEO/PGLs and to evaluate if their levels correlate with the size of these tumors. Patients and Methods. Twenty-five patients were included in the study during the time period of 10 years. Their data were compared with another set of 25 patients to obtain the sensitivity and specificity of metanephrine and normetanephrine in the diagnosis of PHEO/PGLs. The tumor size was reviewed in every patient to obtain the correlation coefficient between the tumor sizes and the plasma/24-hour urinary metanephrine levels. Results. The sensitivity and specificity rates for plasma metanephrine were 80-92% and 92-96%, respectively; while for 24-hour urinary metanephrine were 80-90% and 95-100%, respectively. We found a strong positive relationship between the tumor size and the plasma levels of normetanephrine (r=0.518, p&lt;0.01), and metanephrine (r=0.577, p&lt;0.01). While the relation with the 24-hour urinary concentrations of normetanephrine (r=0.384, p=0.01) and 24-h urinary meta-nephrine (r=0.138, p&lt;0.01) was low. Conclusion. The determination of plasma and 24-hour urinary levels of metanephrines is a reliable test for the diagnosis of PHEO, as they are continuously produced by the tumor cells in contrast to catecholamines. abstract_id: PUBMED:20150023 Change in adrenal mass size as a predictor of a malignant tumor. Objective: To assess the value of adrenal mass absolute growth, growth rate, and percentage growth rate on serial imaging for distinguishing benign from malignant adrenal masses. Methods: We retrospectively reviewed the Cleveland Clinic medical record data on 136 adrenalectomies or biopsies in 132 patients with 2 imaging studies performed more than 2 months apart (during 1997 to 2008). Results: There were 111 benign (81.6%) and 25 malignant (18.4%) adrenal masses. With use of receiver operating characteristic curve analysis, all 3 aforementioned growth measures showed similar levels of discrimination for the entire study group as well as for the subgroups with 3 to 12 months of follow-up (n = 75 masses) and noncontrast computed tomography Hounsfield units &gt;10 or not reported (n = 111 masses). After adjustment for other factors, the 3 growth measures remained statistically significant predictors of a malignant tumor. The absolute growth cutoff value of 0.8 cm had the highest sum of sensitivity and specificity of 72% and 81.1%, respectively. We could not identify an adrenal mass growth cutoff value to provide 100% sensitivity or specificity to confirm or exclude the presence of a malignant lesion. In 3 patients with metastatic lesions, no growth or a decrease in mass size during a period of 4 to 36 months was observed. Conclusion: In this study, the largest with surgical histopathology findings as the "gold standard" for diagnosis, change in adrenal mass size was a significant predictor of a malignant tumor. Nevertheless, we could not identify an adrenal mass growth cutoff value for reliable confirmation or exclusion of a malignant lesion. Change in adrenal mass size should be used in conjunction with other imaging and clinical characteristics when surgical resection is being considered. abstract_id: PUBMED:18336879 Laparoscopic adrenalectomy for adrenal masses: does size matter? Objectives: To examine the impact of adrenal tumor size on perioperative morbidity and postoperative outcomes in patients undergoing laparoscopic adrenalectomy. Methods: A total of 227 laparoscopic adrenalectomies were divided in three groups according to size as estimated by pathologic specimen maximum diameter: less than 6 cm (group 1, n = 140), between 6 and 7.9 cm (group 2, n = 47), and equal to or larger than 8 cm (group 3, n = 40). We prospectively recorded and analyzed clinical and pathologic data. Results: Average operative time was 60 minutes (range, 50 to 90 minutes) for group 1, 75 minutes (range, 65 to 105 minutes) for group 2, and 80 minutes (range, 65 to 120 minutes) for group 3. Estimated blood loss, median (interquartile range) was 50 mL (range, 20 to 100 mL), 100 mL (range, 48 to 225 mL), and 100 mL (range, 50 to 475 mL) for groups 1, 2, and 3, respectively. We observed a total of 10, 4, and 4 complications in groups 1, 2, and 3, respectively. Average hospital stay was 2 days (range, 2 to 3 days), 2 days (range, 2 to 3 days), and 3 days (range, 2 to 4 days), respectively, for groups 1, 2, and 3. Operative time, average blood loss, and mean hospital stay were significantly higher (P &lt;or=0.05) for group 3 compared with group 1. Conclusions: Laparoscopic adrenalectomy in large adrenal masses (8 cm or greater) is associated with significantly longer operative time, increased blood loss, and longer hospital stay, without affecting perioperative morbidity. abstract_id: PUBMED:12429302 Does adrenal mass size really affect safety and effectiveness of laparoscopic adrenalectomy? Objectives: To evaluate the effectiveness and safety of laparoscopic adrenalectomy with regard to adrenal mass size, as well as to consider its clinical and pathologic patterns. Laparoscopy is today considered the first-choice treatment of many adrenal diseases, although its use is still controversial for large adrenal masses and incidentally found adrenal cortical carcinoma. Methods: A total of 125 patients underwent lateral transperitoneal laparoscopic adrenalectomy. The indications were either functioning or nonfunctioning adrenal masses, without any radiologic evidence of involvement of the surrounding structures. The correlation between the size and the operative times, estimated blood loss, incidence of intraoperative and postoperative complications, and length of hospital stay were studied with Pearson's correlation coefficient, Fisher's exact test, and the chi-square test. The analysis of variance test was used to evaluate any possible correlation between the size and clinicopathologic features and the results. Results: A slight correlation was observed between the size and operative time (P = 0.004), but no correlation was observed between the size and the other parameters. Statistical analysis showed a significant correlation between the clinicopathologic patterns (nonfunctioning benign adrenal masses, Conn's adenoma, Cushing's adenoma, pheochromocytoma, adrenal cortical cancer, and other tumor metastasis) and the operative time (P = 0.011), but not with the other parameters. Conclusions: Laparoscopic adrenalectomy is also effective and safe for large lesions. The results of our series confirms that the risk of encountering an incidental adrenal cortical cancer is significantly increased for large lesions, and therefore, in these cases, additional attention is required to observe oncologic surgical principles. abstract_id: PUBMED:29478297 Laparoscopic adrenalectomy using the lateral retroperitoneal approach: Is it a safe and feasible treatment option for pheochromocytomas larger than 6 cm? Objectives: To evaluate the surgical feasibility of laparoscopic adrenalectomy using the lateral retroperitoneal approach for the treatment of large pheochromocytomas, and to identify the preoperative risk factors for intraoperative hypertension. Methods: We retrospectively reviewed 51 patients who underwent laparoscopic adrenalectomy using the lateral retroperitoneal approach for the treatment of pheochromocytomas. Patient characteristics and perioperative outcomes were analyzed and compared between the two study groups based on tumor size: group A (n = 27, ≤6 cm) and group B (n = 24, ˃6 cm). Results: There was no significant difference in preoperative characteristics between the two groups except for tumor size (P = 0.001) and urinary metanephrine (P = 0.011). Group B patients required longer operating time (P = 0.008), had a greater estimated blood loss (P = 0.001) and hemoglobin change (P = 0.002). However, no significant differences were observed in perioperative complications and mortality. Multivariate analysis showed that symptomatic pheochromocytomas (P = 0.004) and tumor size (P = 0.007) were significant risk factors for intraoperative hypertension. Conclusions: Laparoscopic adrenalectomy using the lateral retroperitoneal approach for pheochromocytomas can be regarded as a treatment option, even for tumors measuring &gt;6 cm. Symptomatic pheochromocytomas and large tumor size seem to represent risk factors for intraoperative hypertension. abstract_id: PUBMED:22674108 Management of adrenal incidentaloma: size still matters. A 56-year-old man was found to have an adrenal incidentaloma on a CT scan of the abdomen. Clinically and biochemically, the mass was not functional. MRI scan revealed a heterogeneously enhancing, T2-hyperintense, right-sided adrenal mass (4.5×6.5 cm). Meta-iodo-benzylguanidine scan was normal, making a diagnosis of pheochromocytoma unlikely. As the mass was larger that 4 cm, it was excised and histopathological examination revealed a rare, composite tumour: benign adrenal adenoma with haemangiomatous and myelolipomatous components. This case highlights the difficulties encountered by a clinician faced with investigating a potentially malignant adrenal mass (based on size) and correlates radiological findings with a rare histopathological specimen. abstract_id: PUBMED:2194510 Time-related changes in plasma adrenal steroids during treatment with spironolactone in primary aldosteronism. Time-related changes in plasma levels of aldosterone, deoxycorticosterone, corticosterone, and cortisol were studied during treatment with spironolactone in 8 patients with primary aldosteronism due to adenomas. Plasma renin activity (PRA), serum sodium and serum potassium were also measured. The patients were treated with spironolactone, 75 to 225 mg daily, and blood samples were withdrawn on days 7, 14, and 21 to 28 of drug administration. Plasma aldosterone concentrations (PAC) were not altered by spironolactone; however, significant increases were observed in plasma deoxycorticosterone on days 21 to 28 and in plasma corticosterone and cortisol on days 14 and 21 to 28. The suppressed PRA values were markedly increased on days 21 to 28 and low levels of serum potassium returned to the normal range on day 7. The lack of increase in PAC, despite a remarkable rise in PRA and serum potassium, suggests biosynthetic inhibition of spironolactone at the sites of 18-hydroxylation and/or 18-oxidation, because of the elevation of deoxycorticosterone and corticosterone. Answer: Adrenal tumor size may represent a biological difference rather than being solely related to evolution time. A study comparing clinical and histological differences between adrenal tumors smaller than 4 cm and those larger than 6 cm found that at the time of diagnosis, adrenal carcinomas are almost always larger than 6 cm, suggesting two different tumor populations (PUBMED:18357352). This indicates that larger tumors may inherently have different biological characteristics compared to smaller ones. Additionally, the size of the primary tumor and age at initial diagnosis are independent predictors of metastatic behavior and survival in patients with SDHB-related pheochromocytoma and paraganglioma, with larger tumors being associated with earlier development of metastatic disease and worse overall survival (PUBMED:25048685). This further supports the notion that tumor size is indicative of biological differences that can impact clinical outcomes. Moreover, the correlation between the size of pheochromocytoma and the level of metanephrines suggests that larger tumors may produce higher levels of catecholamine metabolites, which could reflect a difference in tumor activity or biology (PUBMED:37715982). While the size of an adrenal mass is a significant predictor of a malignant tumor, no specific growth cutoff value has been identified that can reliably confirm or exclude the presence of a malignant lesion, indicating that size alone is not the sole determinant of malignancy (PUBMED:20150023). However, the fact that larger adrenal masses are associated with increased operative time, blood loss, and hospital stay during laparoscopic adrenalectomy (PUBMED:18336879) suggests that size may be related to the complexity of the tumor and its management. In summary, adrenal tumor size is likely to represent a biological difference, as it is associated with different clinical behaviors, biochemical profiles, and surgical outcomes. However, it is important to consider tumor size in conjunction with other clinical and imaging characteristics when evaluating adrenal tumors and deciding on management strategies.
Instruction: Is Intraoperative Parathyroid Hormone Monitoring Warranted in Cases of 4D-CT/Ultrasound Localized Single Adenomas? Abstracts: abstract_id: PUBMED:26124265 Is Intraoperative Parathyroid Hormone Monitoring Warranted in Cases of 4D-CT/Ultrasound Localized Single Adenomas? Objective: To analyze the utility of intraoperative parathyroid hormone (IOPTH) monitoring for patients with primary hyperparathyroidism who had evidence of single-gland disease on preoperative imaging with modified 4-dimensional computed tomography that was done in conjunction with ultrasonography (Mod 4D-CT/US). Study Design: Case series with chart review. Setting: Tertiary care university medical center. Subjects And Methods: Patients were drawn from consecutive directed parathyroidectomies performed between December 2001 and June 2013 by the senior authors. All patients had primary hyperparathyroidism and underwent a Mod 4D-CT/US study that showed findings on both studies that were consistent with a single adenoma. The modified Miami criteria were used for IOPTH monitoring (parathyroid hormone decrease by &gt;50% and into the normal range). Results: Of 356 patients who underwent parathyroid surgery, 206 had a single gland localized on the Mod 4D-CT and the US studies. IOPTH monitoring was used in 172 cases, of which 169 had adequate clinical follow-up to assess the surgical outcome. Twenty-one patients (12.4%) had IOPTH values that did not meet modified Miami criteria after removal of one gland, of which 7 were found to have multigland disease (4.1%). Three patients (1.8%) had persistent primary hyperparathyroidism despite an IOPTH that met modified Miami criteria. Conclusions: Although IOPTH monitoring correctly identifies a small percentage of patients with multigland disease, some patients will be subjected to unnecessary neck explorations that can result in difficult intraoperative decisions, such as whether to remove normal or equivocal-sized glands when they are encountered. abstract_id: PUBMED:34400112 Preoperative matching studies in the diagnosis of parathyroid adenoma for primary hyperparathyroidism: Can we avoid intraoperative PTH monitoring? Introduction: We aim to determine the utility of intraoperative parathyroid hormone (IOPTH) monitoring in patients with matching preoperative ultrasound and mibi SPECT for primary hyperparathyroidism for a single adenoma. Methods: All patients who underwent minimally invasive parathyroidectomy (MIP) for pseudohypoparathyroidism (PHP) for a single parathyroid adenoma, were included. An Ultrasound and mibi SPECT were performed in all patients. We defined matching studies when both coincided in the localization of the adenoma. IOPTH was performed in all patients and analyzed in three occasions: a baseline measurement at the anesthetic induction, immediately before, and 15 min after gland excision. Success was defined during the third measurement as a drop of IOPTH of at least 50% compared to the previous maximum value after gland excision. Demographics, intraoperative, postoperative variables and the utility of IOPTH monitoring were analyzed. Results: A total of 218 MIP were performed. The average age was 60.1 years and 85% were female. Preoperative ultrasound and mibi SPECT coincided 100%. When the adenoma was localized, 15 min after its excision, IOPTH did not decrease in 9 patients (4.2%. OR 1.9%-7.69%); all of them underwent a bilateral neck exploration. The added-value of IOPTH accuracy for disease cure was 3.6%. There was a 99% of cure rate. The mean surgical time was 66.4 min and the waiting time for the third IOPTH result was 31 min. Performing IOPTH monitoring made the surgery about twice more expensive. Conclusions: Preoperative matching ultrasound and mibi SPECT for parathyroid adenoma localization in PHP, could avoid IOPTH monitoring in minimally invasive parathyroidectomies. abstract_id: PUBMED:34146444 Utility of intraoperative digital scintigraphy in radioguided parathyroidectomy. Background: Intraoperative scintigraphy (IoS) has been proposed as a tool for real-time intraoperative decision-making regarding parathyroid adenoma localization and confirmation of excision. Methods: Retrospective review of patients who underwent minimally invasive parathyroidectomies with scintigraphy performed intraoperatively. Preoperative neck ultrasound, 4D computed tomography, as well as intraoperative parathyroid hormone (IOPTH) and gamma probe measurements were conducted per standard practice. IoS images were obtained prior to and following parathyroid excision. Cases were reviewed to determine accuracy of IoS for localizing parathyroid pathology and confirming successful excision. Results: Fifty-six cases met the inclusion criteria. Twenty-nine patients (51.8%) showed confirmation of excision of an abnormal gland on post-excision IoS. There were no significant differences in IOPTH reduction and postoperative laboratory values between patients with IoS-identified resolution and those without IoS-identified resolution. Conclusions: With low accuracy in correctly localizing abnormal glands and confirming their excision, there is no appreciable benefit of IoS at this time. abstract_id: PUBMED:26248963 Preoperative 4D CT Localization of Nonlocalizing Parathyroid Adenomas by Ultrasound and SPECT-CT. Objective: To evaluate 4-dimensional (4D) computed tomography (CT) for the localization of parathyroid adenomas previously considered nonlocalizing on ultrasound and single-photon emission CT with CT scanning (SPECT-CT). To measure radiation exposure associated with 4D-CT and compared it with SPECT-CT. Study Design: Case series with chart review. Setting: University tertiary hospital. Subjects And Methods: Nineteen adults with primary hyperparathyroidism who underwent preoperative 4D CT from November 2013 through July 2014 after nonlocalizing preoperative ultrasound and technetium-99m SPECT-CT scans. Sensitivity, specificity, predictive values, and accuracy of 4D CT were evaluated. Results: Nineteen patients (16 women and 3 men) were included with a mean age of 66 years (range, 39-80 years). Mean preoperative parathyroid hormone level was 108.5 pg/mL (range, 59.3-220.9 pg/mL), and mean weight of the excised gland was 350 mg (range, 83-797 mg). 4D CT sensitivity and specificity for localization to the patient's correct side of the neck were 84.2% and 81.8%, respectively; accuracy was 82.9%. The sensitivity for localizing adenomas to the correct quadrant was 76.5% and 91.5%, respectively; accuracy was 88.2%. 4D CT radiation exposure was significantly less than the radiation associated with SPECT-CT (13.8 vs 18.4 mSv, P = 0.04). Conclusion: 4D CT localizes parathyroid adenomas with relatively high sensitivity and specificity and allows for the localization of some adenomas not observed on other sestamibi-based scans. 4D CT was also associated with less radiation exposure when compared with SPECT-CT based on our study protocol. 4D CT may be considered as first- or second-line imaging for localizing parathyroid adenomas in the setting of primary hyperparathyroidism. abstract_id: PUBMED:19885701 The value of intraoperative parathyroid hormone monitoring in localized primary hyperparathyroidism: a cost analysis. Background: Minimally invasive parathyroidectomy (MIP) is the preferred approach to primary hyperparathyroidism (PHPT) when a single adenoma can be localized preoperatively. The added value of intraoperative parathyroid hormone (IOPTH) monitoring remains debated because its ability to prevent failed parathyroidectomy due to unrecognized multiple gland disease (MGD) must be balanced against assay-related costs. We used a decision tree and cost analysis model to examine IOPTH monitoring in localized PHPT. Methods: Literature review identified 17 studies involving 4,280 unique patients, permitting estimation of base case costs and probabilities. Sensitivity analyses were performed to evaluate the uncertainty of the assumptions associated with IOPTH monitoring and surgical outcomes. IOPTH cost, MGD rate, and reoperation cost were varied to evaluate potential cost savings from IOPTH. Results: The base case assumption was that in well-localized PHPT, IOPTH monitoring would increase the success rate of MIP from 96.3 to 98.8%. The cost of IOPTH varied with operating room time used. IOPTH reduced overall treatment costs only when total assay-related costs fell below $110 per case. Inaccurate localization and high reoperation cost both independently increased the value of IOPTH monitoring. The IOPTH strategy was cost saving when the rate of unrecognized MGD exceeded 6% or if the cost of reoperation exceeded $12,000 (compared with initial MIP cost of $3733). Setting the positive predictive value of IOPTH at 100% and reducing the false-negative rate to 0% did not substantially alter these findings. Conclusions: Institution-specific factors influence the value of IOPTH. In this model, IOPTH increased the cure rate marginally while incurring approximately 4% additional cost. abstract_id: PUBMED:12140628 Minimally invasive video-assisted parathyroidectomy and intraoperative parathyroid hormone monitoring. The first 36 cases and some pitfalls. Background: The success of parathyroid surgery depends on the identification and removal of all hyperactive parathyroid tissue. At this writing, bilateral cervical exploration and identification of all parathyroid glands represent the operative standard for primary hyperparathyroidism (pHPT). However, improved preoperative localization techniques and the availability of intraoperative parathyroid hormone monitoring prepare the way for minimally invasive procedures. Methods: Patients with pHPT and one unequivocally enlarged parathyroid gland on preoperative ultrasound and 99mTc-SestaMIBI scintigraphy underwent minimally invasive video-assisted parathyroidectomy by an anterior approach. Intraoperatively, a rapid chemiluminescense immunoassay was used to measure intact parathyroid hormone (iPTH) levels shortly before and then 5, 10, and 15 min after excision of the adenoma. The operation was considered successful when more than a 50% decrease in preexcision iPTH levels was observed after 5 min. Results: Between October 1999 and November 2001, 36 of 82 patients with pHPT were eligible for a minimally invasive approach. A conversion to open surgery became necessary in five patients because of technical problems. In three cases, intraoperative iPTH monitoring showed no sufficient decrease in iPTH values. In these cases, subsequent cervical exploration showed one double adenoma and two hyperplasias, respectively. In two patients we had difficulty interpreting intraoperative iPTH values, resulting in persistent pHPT. Conclusions: Despite the use of high-resolution ultrasound and 99mTc-SestaMIBI scintigraphy, the presence of multiple glandular disease cannot be ruled out completely. Intraoperative iPTH monitoring to ensure operative success is indispensible for a minimally invasive approach. Despite our problems with iPTH monitoring in two patients, we believe that in selected cases, minimally invasive parathyroidectomy represents an attractive alternative to conventional surgery. abstract_id: PUBMED:28017299 Is intraoperative parathyroid hormone monitoring necessary for primary hyperparathyroidism with concordant preoperative imaging? Background: The purpose of this study was to determine if intraoperative parathyroid hormone (PTH) monitoring is necessary in patients with concordant ultrasound and sestamibi imaging. Methods: Clinical factors, intraoperative findings and outcome were correlated with imaging results in patients with primary hyperparathyroidism who underwent parathyroidectomy with concordant ultrasound and sestamibi imaging. Results: There were 127 patients with primary hyperparathyroidism and concordant imaging who underwent parathyroidectomy. Seven patients (5.5%) had intraoperative findings that were discordant with imaging: 2 (1.6%) had an adenoma at a different location, 2 (1.6%) had double adenomas and 3 (2.4%) had asymmetric hyperplasia. Gland weight and preoperative PTH levels were greater for patients with concordant operative and imaging findings (p &lt; 0.05). Conclusion: Six percent of patients with concordant ultrasound and sestamibi imaging had unexpected intraoperative findings. Intraoperative PTH monitoring remains a necessary adjunct even with concordant imaging to ensure identification of abnormal parathyroid glands and cure of hyperparathyroidism. abstract_id: PUBMED:11383071 Intraoperative monitoring of intact parathyroid hormone (iPTH) in surgery of primary hyperparathyroidism with a new rapid test Objectives: Intraoperative differentiation between adenoma and hyperplasia during surgery for primary hyperparathyroidism (pHPT) is sometimes difficult, but essential for good results. The aim of our study was to evaluate a new highly sensitive electrochemiluminescence immunoassay (ECLIA) for intraoperative monitoring of intact parathyroid hormone (iPTH) following parathyroidectomy as an adjunct for identification of solitary adenoma in patients with pHPT. Methods: Thirty consecutive patients with pHPT (2 with recurrent pHPT) were examined following a standardized protocol: Immediately before and 5, 10 and 15 min following parathyroidectomy of the enlarged gland, iPTH was measured with a new ECLIA (Roche-Diagnostics, Mannheim, Germany). The results were available within 15-20 min. Besides 20 conventional bilateral neck explorations, parathyroidectomy was carried out in a minimally invasive video-assisted technique (MI-VAP) in 10 patients. Results: Among the 30 patients we found 24 with solitary adenoma (80%), 5 with hyperplasia (17%) and one with a double adenoma (3%). Five minutes after removal of a solitary adenoma the level of iPTH had decreased by 65 (12)% [mean (+/- SD)], after 10 min by 76 (8)% and after 15 min by 81 (8)%. All patients with multiple gland disease could be clearly identified, as iPTH after 15 min did not fall below 50% of basal value. Only after removal of all hyperplastic glands did iPTH decrease to the normal range. Sensitivity and specificity for prediction of a solitary adenoma were 92% and 100% (decline of iPTH more than 50% from baseline value 5 min after parathyroidectomy). In one patient with recurrent pHPT intraoperative sampling from different sites in both internal jugular veins could predict the quadrant of the enlarged gland. Correlation (r) between the results of the quick and the conventional assay, which requires 24 h of incubation, was 0.955. All patients had normal or low calcium levels postoperatively. Conclusions: (1) Intraoperative monitoring of iPTH with this new quick assay allows safe identification of patients with solitary adenoma during surgery for pHPT. (2) It represents a valuable adjunct to surgical skill not only in primary operations for pHPT but especially in cases of recurrent surgery for pHPT. (3) With this test available minimally invasive techniques for parathyroidectomy may be employed in cases of preoperatively localized adenoma (ultrasound, sesta-mibi scan), avoiding bilateral neck exploration with its higher potential for complications. abstract_id: PUBMED:31332766 Surgical treatment in patients with single and multiple gland primary hyperparathyroidism with the use of intraoperative parathyroid hormone monitoring: extensive single-center experience. Purpose: To evaluate a single-center extensive experience and effectiveness in surgical treatment of primary hyperparathyroidism (pHPT) with the use of rapid intraoperative parathyroid hormone (ioPTH) monitoring in patients with single gland (SGpH) or multiple gland (MGpH) disease. Methods: This retrospective, single-center cohort study included 214 patients with pHPT treated from January 2010 to June 2017. In total, 172 patients fulfilled the inclusion criteria having at least one preoperative localization image study and measurement of ioPTH. Statistical analysis was made by the chi-square test and Student's t tests. Results: Of the 172 patients, 146 were women (85%) and 26 men (15%), with a mean age of 56.9 years; 153 (89%) had SGpH and 19 (11%) MGpH. The mean follow-up was 41.8 months. A total of 153 surgical procedures were performed as minimal invasive parathyroidectomy (MIP) based on a SGpH diagnosis; operative success was achieved in 150 cases (98%), according to ioPTH concentrations. The remainder (19 procedures) were performed as bilateral neck exploration (BNE) based on a MGpH diagnosis; operative success was achieved in 16 cases (84%). ioPTH correctly modified the initially planned operation in 26.3% of patients with MGpH. Conclusions: ioPTH enables the surgical treatment of patients with pHPT with focused approaches and excellent results, as it helps the surgeon to identify cases of MGpH. ioPTH adds value to cases where preoperative imaging failed to detect the affected gland or the results are inconclusive. According to the literature, its application seems to be of marginal benefit in cases in which there are two concordant preoperative imaging studies. abstract_id: PUBMED:36246803 Analysis of the successful clinical treatment of 140 patients with parathyroid adenoma: A retrospective study. Background: Parathyroid adenoma (PA) sometimes recurs after surgery, how to improve the surgical success rate of PA is the key to the treatment of this disease. Aim: To investigate the clinical features, diagnosis, and surgical treatment of patients with PA. Methods: Patients who were pathologically confirmed with PA and had undergone surgery for the first time between January 2010 and December 2017 at the Beijing Shijitan Hospital affiliated to Capital Medical University were included in the study. The clinical features, localization diagnosis, and surgical treatment of these patients were analyzed. Results: Of the 140 patients, 32 were male and 108 were female; 132 cases had one adenoma, and 8 had two adenomas. In addition, 114 cases had clinical symptoms, among which 51, 28, 23, 8, and 4 had urinary system, skeletal system, digestive system, neuromuscular system, and neuropsychiatric symptoms, respectively, while 26 cases had no obvious symptoms. The median level of preoperative parathyroid hormone (PTH) was 201.0 pg/mL. The positive detection rate of technetium-99m sestamibi (Tc-99m MIBI) single-photon emission computed tomography/computed tomography (SPECT/CT), ultrasound examination, and the combined use of Tc-99m MIBI SPECT/CT and ultrasound examination was 92.9%, 85.5%, and 96.4%, respectively. Open surgery was performed in all patients, and PTH was monitored during surgery. The success rate of surgery was 98.6%. After surgery, 21 cases developed hypocalcemia, 1 case developed temporary hoarseness, and 19 cases had transient hypoparathyroidism but there was no permanent hypoparathyroidism, postoperative hemorrhage, or hematoma in the surgical area. Conclusion: For patients with clinically unexplained skeletal system, urinary system, and neuropsychiatric symptoms, the possibility of PA should be considered. Imaging examinations such as ultrasound and Tc-99m MIBI SPECT/CT could be integrated before surgery to obtain accurate localization diagnosis. Precise preoperative localization, intraoperative PTH monitoring, and delicate surgery to protect the integrity of the PA capsule ensure a minimally invasive and successful surgery. Answer: Intraoperative parathyroid hormone (IOPTH) monitoring in cases of 4D CT/ultrasound localized single adenomas has been a subject of debate. Some studies suggest that IOPTH monitoring may not be necessary in all cases, especially when preoperative imaging is concordant. For instance, one study found that preoperative matching ultrasound and mibi SPECT for parathyroid adenoma localization in primary hyperparathyroidism could potentially avoid IOPTH monitoring in minimally invasive parathyroidectomies, as the added value of IOPTH accuracy for disease cure was only 3.6% with a 99% cure rate (PUBMED:34400112). Another study concluded that IOPTH monitoring remains a necessary adjunct even with concordant imaging to ensure identification of abnormal parathyroid glands and cure of hyperparathyroidism, as 6% of patients with concordant ultrasound and sestamibi imaging had unexpected intraoperative findings (PUBMED:28017299). However, other studies highlight the utility of IOPTH monitoring. One study reported that IOPTH monitoring correctly identifies a small percentage of patients with multigland disease, but some patients will be subjected to unnecessary neck explorations, which can result in difficult intraoperative decisions (PUBMED:26124265). Another study emphasized that IOPTH enables the surgical treatment of patients with primary hyperparathyroidism with focused approaches and excellent results, as it helps the surgeon to identify cases of multiple gland disease (PUBMED:31332766). In summary, while IOPTH monitoring can identify a small percentage of patients with multigland disease and assist in surgical decision-making, its routine use in cases with concordant preoperative imaging of single adenomas may not always be necessary and could lead to additional costs and unnecessary explorations. The decision to use IOPTH monitoring should be individualized based on the likelihood of multigland disease, the accuracy of preoperative imaging, and institutional factors such as the cost of IOPTH and the cost of potential reoperations (PUBMED:19885701).
Instruction: Do clinical and behavioural correlates of obese patients seeking bariatric surgery differ from those of individuals involved in conservative weight loss programme? Abstracts: abstract_id: PUBMED:23650988 Do clinical and behavioural correlates of obese patients seeking bariatric surgery differ from those of individuals involved in conservative weight loss programme? Background: Clinical practice has suggested that, in severely obese patients seeking bariatric surgery, clinical conditions, behavioural characteristics and psychological status might all differ from those of their counterparts starting conventional conservative therapy. Methods: Two groups of obese patients with closely similar body mass values were considered. The first group included individuals voluntarily and spontaneously seeking biliopancreatic diversion and the second group comprised patients at the beginning of a weight loss programme. After anthropometric and metabolic evaluation, the patients underwent an alimentary interview; eating behaviour and psychological status were assessed by Three Factor Eating Questionnaire and by Toronto Alexithymia Scale (TAS). Results: Among bariatric candidates, a greater number of individuals with type 2 diabetes and dyslipidaemia and high tendency to disinhibition and susceptibility to hunger scores was observed, whereas the other aspects of eating pattern were essentially similar. In the two groups, no difference in TAS score and or number of patients with alexithymic traits was observed. Finally, a logistic regression model showed that only age and metabolic derangement predicted the bariatric option, whereas eating behaviour or psychological status did not influence individual therapeutic choice. Conclusions: Independently of the degree of obesity, bariatric surgery was requested by the more metabolically deranged patients, whereas, in the surgical candidates, the eating pattern and psychological conditions were very similar to those of obese persons at the beginning of a conservative weight loss programme. These results suggest a highly realistic and practical attitude in severely obese patients towards obesity and bariatric surgery. abstract_id: PUBMED:32331997 Sleep disturbance and insomnia in individuals seeking bariatric surgery. Background: Sleep disturbance is well established in individuals with obesity, and the relationship between poor sleep and obesity is supported by population, longitudinal, experimental, and intervention studies. However, the prevalence and characteristics of poor sleep in individuals seeking bariatric surgery have thus far been poorly examined. Objectives: We sought to characterize self-reported sleep parameters in individuals seeking bariatric surgery and to compare these data with controls. Setting: Two Academic Medical Centers, United States, and an online survey of healthy controls. Method: Individuals seeking bariatric surgery (n = 427) completed presurgical psychological evaluations at 2 comprehensive bariatric surgery programs. Data on medical co-morbidities and from self-report questionnaires on sleep quality, insomnia, anxiety, and depression were abstracted from charts. Data from controls (n = 180) were collected using an online survey tool and compared with bariatric cases. Results: Across study sites, 40.4% of bariatric cases took at least 30 minutes to fall asleep, 46.7% had insufficient total sleep time (&lt;6.5 hr), 65.1% reported general poor sleep quality, and 30.8% reported clinically significant insomnia symptoms. Approximately 20% of the variance in poor sleep quality and insomnia was explained by body mass index, obstructive sleep apnea, anxiety, and depression. Cases and controls were similar, although bariatric cases reported significantly poorer sleep efficiency. Conclusions: Our results suggest that similar to a control population, the majority of patients seeking bariatric surgery are experiencing sleep difficulties. Presurgical assessment and treatment of sleep problems may be beneficial to patients and may help improve weight loss treatment outcomes. Optimally, assessment would include 1 of the 2 self-report questionnaires used herein, and treatment would involve Cognitive Behavioral Therapy for Insomnia. Future research assessing sleep patterns with objective measurement tools and evaluating the impact of sleep on postsurgical outcomes is warranted. abstract_id: PUBMED:38323880 Risk factors for multimodal conservative approach failure before bariatric surgery. Background: A multimodal conservative approach (MCA) is internationally recommended prior to bariatric surgery for the majority of patients. This research aimed to identify risk factors for therapy failure within the MCA. Methods: This study was conducted in a German bariatric obesity centre. Patients who underwent a 3- to 6-month preoperative MCA from November 2019 to October 2020 were prospectively recorded and analysed. MCA included dietary and movement therapy, psychological and endocrinological referral and connection to a patient support group. The change in BMI was considered the main parameter for therapeutic success. Explanatory factors were evaluated for this outcome of interest and included physical and mental well-being (SF-36), perceived stress (DASS-21), social and economic situation and educational background (SOEP). Results: Out of 413 eligible patients, 209 were analysed. Overall, patients gained BMI (average gain 0.23 kg/m² (s.d. 1.44 kg/m²)) during the preoperative MCA. Diverse social, demographic and economic and health properties did not influence the course of the therapy. There was a significant positive correlation between successful loss of BMI and physical well-being (β = 0.03, 95% c.i. 0.01 to 0.04, P &lt; 0.0001) and a negative correlation between mental well-being (β = -0.02, 95% c.i. -0.004 to -0.001, P = 0.001) as well as self-perceived stress (β = -0.05, 95% c.i. -0.09 to -0.01, P = 0.019), although these effects were small and did not affect the overall trend of gaining BMI during MCA. Conclusion: The current therapeutic approach of preoperative conservative treatment within the multimodal concept of bariatric treatment fails to achieve its intended outcome. A different conservative weight loss programme from the pre-bariatric setting, which should focus on a surgical preparation, is therefore recommended. abstract_id: PUBMED:34521092 Attitude Matters! How Attitude towards Bariatric Surgery Influences the Effects of Behavioural Weight Loss Treatment. Introduction: Multidisciplinary obesity services at university hospitals usually treat patients with more complex and severe obesity. In addition, patients with Class 3 obesity, in particular, have different attitudes regarding the choices of therapy. Methods: This explorative study investigated the effect of patient attitudes towards bariatric surgery on body weight change (primary outcome) and psychological improvement (secondary outcomes: quality of life, depression, anxiety, and eating behaviour) in a 6-month moderate behavioural weight loss (BWL) programme in a university outpatient setting. Results: 297 patients with mostly Class 3 obesity participated in the programme. The patients did not yet have any indications for bariatric surgery. Of the participants, 37% had a positive attitude towards bariatric surgery (POS), whereas 38% had a negative attitude (NEG). The drop-out rate was 8%. NEG participants lost significantly more body weight than the POS participants (intention-to-treat population: 4.5 [SD: 6.3] kg versus 0.4 [SD: 5.8] kg; p &lt; 0.001). In both subgroups, anxiety, depression, the mental score for quality of life, and eating behaviour improved. Conclusion: A BWL treatment in a clinical setting identified 2 distinct groups with different attitudes towards bariatric surgery that were associated with different body weight change outcomes. These groups may require differently targeted programmes to achieve the best body weight loss results. abstract_id: PUBMED:23294936 Post-operative behavioural management in bariatric surgery: a systematic review and meta-analysis of randomized controlled trials. Recent research has provided evidence that bariatric surgery maximizes long-term weight loss in patients with severe obesity. However, a substantial number of patients experience poor weight loss outcome and weight regain over time. Post-operative behavioural management may facilitate long-term weight control in bariatric surgery population. The objective of this systematic review and meta-analysis was to determine the effects of post-operative behavioural management on weight loss following bariatric surgery. Eligible articles were systematically searched in electronic databases. Among the 414 citations, five randomized controlled trials, two prospective and eight retrospective cohort trials analysing behavioural lifestyle interventions and support groups fulfilled the inclusion criteria. The main finding is that behavioural management had a positive effect on weight loss following surgery. In 13 studies, patients receiving behavioural management had greater weight loss than patients receiving usual care or no treatment. A meta-analysis of five randomized controlled trials suggests greater weight loss in patients with behavioural lifestyle interventions compared with control groups. Post-operative behavioural management has the potential to facilitate optimal weight loss following bariatric surgery, but conclusions were limited by the small and heterogeneous samples of studies. A more rigorous empirical evaluation on its clinical significance is warranted to improve effectiveness of bariatric surgery. abstract_id: PUBMED:18316272 Behavioral characteristics of severely obese patients seeking bariatric surgery: cross-sectional study with alimentary interview. The authors' aim in this study was to gain insight on the eating behaviors of severely obese patients seeking bariatric surgery. The authors compared anthropometric and alimentary interview data on 50 patients applying for biliopancreatic diversion with data obtained from 50 severely obese individuals enrolling in a behavior modification weight-loss program. The severely obese patients seeking bariatric surgery were metabolically more compromised than were their counterparts enrolled in the conservative treatment group, whereas the latter more often reported compromised eating behaviors. These unexpected results could reflect changes in the widespread attitudes toward bariatric surgery-that unlike in the past, it is now considered a safe and effective method to treat a serious disease. abstract_id: PUBMED:27779770 Elective surgery to save my life: rethinking the "choice" in bariatric surgery. Aims: The aim of this study was to explicate the processes by which a patient's choice to undergo bariatric surgery is made to feel like a medical necessity, to explore the ways widespread societal stigmatization of weight and bariatric surgery shapes patient and provider discourse about surgery and to discuss patient rationalizations of the choice to have surgery. Background: Severe obesity is currently highly stigmatized. Bariatric surgery has become an increasingly used option for individuals seeking to lose drastic amounts of weight. The surgery itself, however, remains stigmatized across many diverse settings. Design: This research design is centred on an ethnographic study of bariatric surgery patients who undergo surgery at a particular bariatric clinic in the American Southwest. Methods: Data collection included repeated ethnographic interviews with 35 individuals enrolled in the bariatric programme over the past 5 years. The interviews were supplemented by extensive participant observation, starting in 2014 to date. Thematic analysis of fieldnotes and transcribed interviews followed. Results: People who have bariatric surgery for weight-loss may trade one type of stigma for another. Thus, individuals who qualify for bariatric surgery based on weight alone may be reluctant to explore the surgery as a viable option. This research also shows that younger women are more likely to face the effects of weight-related stigma, which has an impact on their motivations for undergoing bariatric surgery. Conclusion: Stigma - both weight-related and surgery-related - needs to be addressed at a larger level, in terms of policy and in clinical bariatric programmes. abstract_id: PUBMED:21593805 Clinical correlates of the weight bias internalization scale in a sample of obese adolescents seeking bariatric surgery. The aim of this study was to evaluate psychometric properties and clinical correlates of the Weight Bias Internalization Scale (WBIS) in a sample of obese adolescents seeking bariatric surgery. Sixty five adolescents enrolled in a bariatric surgery program at a large, urban medical center completed psychiatric evaluations, self-report questionnaires including the WBIS and other measures of psychopathology and physical assessments. The WBIS had high internal consistency (Cronbach's α = 0.92). As in previous research with adults, the one underlying factor structure was replicated and 10 of the original 11 items were retained. The scale had significant partial correlations with depression (r = 0.19), anxiety (r = 0.465), social, and behavioral problems (r = 0.364), quality of life (r = -0.480), and eating (r = 0.579), shape (r = 0.815), and weight concerns (r = 0.545), controlling for BMI. However, WBIS scores did not predict current or past psychiatric diagnosis or treatment or past suicidal ideation. Overall, the WBIS had excellent psychometric properties in a sample of obese treatment-seeking adolescents and correlated significantly with levels of psychopathology. These findings suggest that the WBIS could be a useful tool for healthcare providers to assess internalized weight bias among treatment-seeking obese youth. Assessment of internalized weight bias among this clinical population has the potential to identify adolescents who might benefit from information on coping with weight stigma, which in turn may augment weight loss efforts. abstract_id: PUBMED:19221665 Development of bariatric surgery: the effectiveness of a multi-disciplinary weight management programme in Hong Kong. Introduction: In Hong Kong, obesity and its associated co-morbidities are increasingly becoming a health and societal burden. Conservative weight management therapy is ineffective in maintaining substantial weight loss in severely obese patients and more invasive interventions are required to achieve sustainable weight loss. Materials And Methods: Invasive bariatric procedures were introduced to Hong Kong in 2002. Severely obese patients will be seen in a combined obesity clinic where multi-disciplinary assessment was carried out before interventional therapy. Patients will be allocated to various bariatric surgeries such as laparoscopic adjustable gastric banding (LAGB), laparoscopic sleeve gastrectomy (LSG) or laparoscopic gastric bypass (LGB) when operative criteria were met. For selected patients who were not suitable or refused surgical treatment, endoscopic placement of intragastric balloon (IGB) will be used as an alternative weight control option. Multi-disciplinary approach was also applied during perioperative period to accomplish different clinical needs for the individual patient. Results: Over 500 patients were seen in our unit seeking advice on severe obesity. Two hundred and twenty-five patients received interventional therapy which included LAGB (n = 57), LSG (n = 71), LGB (n = 7) and IGB programme (n = 120). Thirty patients (25%) received second bariatric surgery after IGB removal. Adverse events occurred in 20 patients (7.8%) and there was no operative mortality. At 2 years, the mean percentage of excessive weight loss (%EWL) for LAGB, LSG and LGB are 34%, 51% and 61%, respectively. In those patients who received IGB alone, the mean %EWL at removal and 6 months after removal were 44% and 34%, respectively. Conclusions: Through a multi-disciplinary weight management programme with different specialties and various bariatric procedures, favourable results can be achieved in patients with severe obesity. abstract_id: PUBMED:38018285 Bariatric surgery stigma from the perspective of patients: A scoping review. Aims: To explore the experiences and consequences of bariatric surgery stigma from the perspective of bariatric surgery patients and to identify knowledge gaps in the literature. Design: A scoping review. Data Sources: Studies published between December 2002 and January 2022 were identified from the following databases: PubMed, Web of Science, PsycINFO and MEDLINE. Review Methods: We screened 3092 records and included 28 studies. Findings were grouped thematically using the health stigma discrimination framework. Results: Patients experienced several types of stigmas, including perceived, experienced, anticipated and internalized stigma, related to undergoing bariatric surgery. Patients were confronted with negative comments and judgement from others when they disclosed their decision to have surgery or when they revealed that they had undergone surgery. These experiences led to conflicts in the decision-making process, such as delaying the choice for surgery, seeking surgery abroad or opting out. Patients who internalized stigma often reported feelings of shame and embarrassment for choosing surgery and felt the need for secrecy or selective disclosure. Stigma experiences were influenced by gender and differed between different subgroups of patients. Conclusion: The stigma surrounding bariatric surgery has detrimental consequences for eligible individuals and bariatric surgery patients. However, the evidence from patients' perspective remains limited. More research into patient's experiences is needed to improve patient care and further educate healthcare professionals. In addition, to better understand the nature and implications of bariatric surgery stigma, future research should be founded on stigma theories and distinguish between the different types of stigmas. Impact: This scoping review contributes to a better understanding of the implications the stigma surrounding bariatric surgery has for bariatric surgery patients before and after surgery. The knowledge generated by this review can inform improvements in the education and management of people living with obesity and bariatric surgery patients. Reporting Method: The scoping review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Extension for Scoping Reviews checklist. Patient Or Public Contribution: No Patient or Public Contribution. Protocol Registration: A protocol was not registered for this scoping review. Answer: Yes, clinical and behavioural correlates of obese patients seeking bariatric surgery do differ from those of individuals involved in conservative weight loss programmes. A study comparing two groups of obese patients with similar body mass values—one seeking bariatric surgery and the other starting a conservative weight loss programme—found that bariatric candidates had a greater number of individuals with type 2 diabetes and dyslipidaemia. They also had higher tendencies towards disinhibition and susceptibility to hunger scores. However, other aspects of the eating pattern and psychological conditions, such as alexithymic traits measured by the Toronto Alexithymia Scale (TAS), were essentially similar between the two groups. Logistic regression analysis showed that age and metabolic derangement predicted the choice of bariatric surgery, whereas eating behaviour or psychological status did not influence the therapeutic choice (PUBMED:23650988). Additionally, another study found that the majority of patients seeking bariatric surgery experienced sleep difficulties, with a significant proportion reporting poor sleep quality and clinically significant insomnia symptoms. This suggests that sleep disturbance is a common issue among individuals seeking bariatric surgery (PUBMED:32331997). Furthermore, an explorative study identified two distinct groups within a moderate behavioural weight loss programme based on their attitudes towards bariatric surgery. Those with a negative attitude towards surgery lost significantly more body weight than those with a positive attitude, indicating that attitude towards bariatric surgery can influence weight loss outcomes in conservative treatment settings (PUBMED:34521092). In contrast, a study on the effectiveness of a multi-disciplinary weight management programme in Hong Kong showed that conservative weight management therapy often fails to maintain substantial weight loss in severely obese patients, suggesting the need for more invasive interventions like bariatric surgery to achieve sustainable weight loss (PUBMED:19221665). Overall, these findings suggest that while there are some similarities in psychological conditions between the two groups, there are also distinct differences in clinical conditions, behavioural characteristics, and attitudes that influence the choice of treatment modality.
Instruction: Maternal periconceptional vitamins: interactions with selected factors and congenital anomalies? Abstracts: abstract_id: PUBMED:36595654 The relationship between maternal periconceptional micronutrient supplementation and non-syndromic cleft lip/palate in offspring. Background: This study aimed to explore the relationship between maternal periconceptional supplementation with folic acid only (FAO) or with multiple micronutrients containing folic acid (MMFA) and non-syndromic cleft lip/palate in offspring. Method: The data came from a prenatal health care system and a birth defects surveillance system in Beijing, China, from 2013 to 2018. Information on maternal FAO/MMFA supplementation was collected by questionnaire in the first trimester, and data on cleft lip/palate were collected at delivery or termination of pregnancy. Inverse probability weighting (IPW) by the propensity score to adjust for the confounders and Poisson regression model was used to estimate risk ratios (RRs) and their 95% confidence intervals (CIs). Results: A total of 63,969 participants were included in the study. Compared to the no-supplementation group, the adjusted RR for the supplementation group was 0.51 (95% CI: 0.40, 0.64). And the adjusted RRs for FAO and MMFA compared to the no-supplementation group were 0.56 (95% CI: 0.40, 0.76) and 0.48 (95% CI: 0.35, 0.65), respectively. Compared to supplement FAO and MMFA with less than 8 days out of 10 days, the adjusted RRs for FAO and MMFA with 8 or more days out of 10 days were 1.17 (95% CI: 0.78, 1.75), and 2.05 (95% CI: 1.37, 3.31), respectively. Conclusion: Maternal supplementation with micronutrients, either FAO or MMFA, during the periconceptional period can reduce the risk for non-syndromic cleft lip/palate in offspring. However, women should be more cautious with MMFA supplementation. abstract_id: PUBMED:25132072 Maternal periconceptional alcohol consumption and congenital limb deficiencies. Background: Women of childbearing age report high rates of alcohol consumption, which may result in alcohol exposure during early pregnancy. Epidemiological research on congenital limb deficiencies (LDs) and periconceptional exposure to alcohol is inconclusive. Methods: Data from the National Birth Defects Prevention Study (NBDPS) were examined for associations between LDs and patterns of maternal periconceptional (1 month before conception through the first trimester) alcohol consumption among LD case (n = 906) and unaffected control (n = 8352) pregnancies with expected delivery dates from 10/1997 through 12/2007. Adjusted odds ratios (aORs) and 95% confidence intervals were estimated from unconditional logistic regression analysis for all LDs combined, specific LD subtypes (preaxial/terminal transverse), and LD anatomic groups (upper/lower limbs); interactions with folic acid (FA) supplementation were tested. Results: When compared with nondrinkers, inverse associations were found between all LDs combined, preaxial, and upper LDs and any reported periconceptional alcohol consumption (aORs ranged from 0.56-0.83), drinking without binging (aORs: 0.53-0.75), and binge drinking (≥4 drinks/occasion) (aORs: 0.64-0.94); however, none of the binge drinking aORs were statistically significant. Stratification by alcohol type showed inverse associations between all LDs combined, preaxial, transverse, and upper and lower LDs for drinking without binging of wine only (aORs: 0.39-0.67) and between all LDs combined and upper LDs for drinking without binging of combinations of alcohol (aORs: 0.63-0.87). FA did not modify observed associations. Conclusion: Maternal periconceptional alcohol consumption did not emerge as a teratogen for selected LDs in the NBDPS. Future studies should evaluate additional rare LDs among more highly exposed populations. abstract_id: PUBMED:23871272 Maternal periconceptional occupational exposure to pesticides and selected musculoskeletal birth defects. This population-based U.S. study investigated the association between major musculoskeletal malformations and periconceptional maternal occupational pesticide exposure for a wide range of occupations. We conducted a multi-site case-control analysis using data from the National Birth Defects Prevention Study among employed women with due dates from October 1, 1997 through December 31, 2002. Cases included 871 live-born, stillborn, or electively terminated fetuses with isolated craniosynostosis, gastroschisis, diaphragmatic hernia, or transverse limb deficiencies. Controls included 2857 live-born infants without major malformations. Using self-reported maternal occupational information, an industrial hygienist used a job-exposure matrix and expert opinion to evaluate the potential for exposure to insecticides, herbicides or fungicides for each job held during one month pre-conception through three months post-conception. Exposures analyzed included any exposure (yes/no) to pesticides, to insecticides only, to both insecticides and herbicides (I+H) and to insecticides, herbicides and fungicides (I+H+F). We used logistic regression to evaluate the association between exposures and defects, controlling for infant and maternal risk factors. Occupational exposure to I+H+F was associated with gastroschisis among infants of women aged 20 years or older (adjusted odds ratio [aOR]=1.88; 95% confidence interval [CI]: 1.16-3.05), but not for women under age 20 (aOR=0.48; 95% CI: 0.20-1.16). We found no significant associations for the other defects. Additional research is needed to validate these findings in a separate population. abstract_id: PUBMED:31949041 Maternal occupational exposure to solvents and gastroschisis in offspring - National Birth Defects Prevention Study 1997-2011. Objectives: The aim of this study was to assess the association between maternal occupational exposure to solvents and gastroschisis in offspring. Methods: We used data from the National Birth Defects Prevention Study, a large population-based case-control study of major birth defects conducted in 10 US states from 1997 to 2011. Infants with gastroschisis were ascertained by active birth defects surveillance systems. Control infants without major birth defects were selected from vital records or birth hospital records. Self-reported maternal occupational histories were collected by telephone interview. Industrial hygienists reviewed this information to estimate exposure to aromatic, chlorinated and petroleum-based solvents from 1 month before conception through the first trimester of pregnancy. Cumulative exposure to solvents was estimated for the same period accounting for estimated exposure intensity and frequency, job duration and hours worked per week. ORs and 95% CIs were estimated to assess the association between exposure to any solvents or solvent classes, and gastroschisis risk. Results: Among 879 cases and 7817 controls, the overall prevalence of periconceptional solvent exposure was 7.3% and 7.4%, respectively. Exposure to any solvent versus no exposure to solvents was not associated with gastroschisis after adjusting for maternal age (OR 1.00, 95% CI 0.75 to 1.32), nor was an association noted for solvent classes. There was no exposure-response relationship between estimated cumulative solvent exposure and gastroschisis after adjusting for maternal age. Conclusion: Our study found no association between maternal occupational solvent exposure and gastroschisis in offspring. Further research is needed to understand risk factors for gastroschisis. abstract_id: PUBMED:22903798 Analysis of selected maternal exposures and non-syndromic atrioventricular septal defects in the National Birth Defects Prevention Study, 1997-2005. Although the descriptive epidemiology of atrioventricular septal defects (AVSDs), a group of serious congenital heart defects (CHDs), has been recently reported, non-genetic risk factors have not been consistently identified. Using data (1997-2005) from the National Birth Defects Prevention Study, an ongoing multisite population-based case-control study, the association between selected non-genetic factors and non-syndromic AVSDs was examined. Data on periconceptional exposures to such factors were collected by telephone interview from 187 mothers of AVSD case infants and 6,703 mothers of unaffected infants. Adjusted odds ratios (aORs) and 95% confidence intervals (CIs) were estimated from logistic regression models. Mothers who reported cigarette smoking during the periconceptional period were more likely to have infants with AVSDs compared with non-smokers, independent of maternal age, periconceptional alcohol consumption, infant gestational age, family history of CHDs, and study site (aOR 1.5, 95% CI 1.1-2.4). The association was strongest in mothers who smoked more than 25 cigarettes/day. In addition, mothers with periconceptional passive smoke exposure were more likely to have infants with AVSDs than unexposed mothers, independent of maternal age, active periconceptional smoking, infant gestational age, and family history of CHDs (aOR 1.4, 95% CI 1.0-2.0). No associations were observed between AVSDs and maternal history of a urinary tract infection or pelvic inflammatory disease, maternal use of a wide variety of medications, maternal occupational exposure, parental drug use, or maternal alcohol consumption. If the results of this preliminary study can be replicated, minimizing maternal active and passive smoke exposure may decrease the incidence of AVSDs. abstract_id: PUBMED:21656900 Do foreign- and U.S.-born mothers across racial/ethnic groups have a similar risk profile for selected sociodemographic and periconceptional factors? Background: We examined differences in selected pregnancy-related risk factors, including maternal sociodemographic characteristics, health-related conditions, and periconceptional behavioral factors, among foreign-born versus U.S.-born control mothers across race/ethnic groups. Methods: We used data from the National Birth Defects Prevention Study, and calculated odds ratios (ORs) and 95% confidence intervals (CIs) of the risk factors, for foreign-born Hispanic, non-Hispanic white, non-Hispanic black, and Asian/Pacific Islander (API) mothers, compared to their U.S.-born counterparts. Results: Across all race/ethnic groups, foreign-born mothers were older and had lower odds of obesity compared to their U.S.-born counterparts. With the exception of foreign-born black mothers, foreign-born mothers from other race/ethnic groups had significantly lower odds of binge drinking during the periconceptional period. Compared to U.S.-born, foreign-born Hispanic mothers had twice the odds of gestational diabetes (OR = 2.23; 95% CI = 1.36-3.66). Certain health behaviors were less prevalent in foreign-born black mothers (e.g., folic acid use; OR = 0.54; 95% CI = 0.31-0.96) and foreign-born API mothers (e.g., cigarette smoking; OR = 0.10; 95% CI = 0.02-0.48). Conclusions: Significant differences in pregnancy related risk factors during the periconceptional period and throughout pregnancy were observed between maternal nativity groups and across race/ethnicity. Prevention efforts for both prepregnancy and after conception should be designed and delivered according to maternal nativity for each racial/ethnic group. abstract_id: PUBMED:38342385 Associations between maternal periconceptional alcohol consumption and risk of omphalocele among offspring, National Birth Defects Prevention Study, 1997-2011. Objective: Previous studies of alcohol consumption during pregnancy and omphalocele have produced mixed results. We updated an earlier analysis of National Birth Defects Prevention Study (NBDPS) data, adding six years of participants, to examine associations between maternal alcohol consumption and omphalocele. Methods: NBDPS was a multi-site, population-based case-control study in the United States. Cases were identified from birth defect surveillance programs in 10 states; controls were liveborn infants without a birth defect randomly selected from the same catchment areas. Mothers self-reported alcohol consumption during the periconceptional period (one month before through the third gestational month) via telephone interview. Our study included mothers of 410 omphalocele cases and 11,219 controls with estimated dates of delivery (EDDs) during 1997-2011. We used logistic regression to estimate adjusted odds ratios (AORs) and 95% confidence intervals (CIs) for periconceptional alcohol consumption and omphalocele. We performed a probabilistic bias analysis to evaluate the impact of alcohol exposure misclassification on our results. Results: Overall, 44% of case and 38% of control mothers reported periconceptional alcohol consumption; 22% and 17%, respectively, reported binge drinking. Any maternal periconceptional alcohol consumption was associated with modestly increased odds of omphalocele (AOR 1.35, 95% CI 1.09, 1.68), as was binge drinking (AOR 1.47, 95% CI 1.08, 2.01). Our bias analysis yielded estimates further from the null. Conclusions: We observed modest associations between maternal periconceptional alcohol consumption and omphalocele. Based on our bias analysis, studies of alcohol and birth defects not accounting for exposure misclassification may underestimate associations. abstract_id: PUBMED:12410002 Maternal periconceptional vitamins: interactions with selected factors and congenital anomalies? Background: The mechanisms by which folic acid may contribute to reductions in risk of several congenital anomalies are unknown. The data gap includes a lack of information on possible effect modification between maternal folic acid use and other maternal exposures. We hypothesized that effects of congenital anomalies associated with maternal fever, cigarette smoking or alcohol use would be modified by intake of vitamins. Methods: We explored case-control data that showed risk reductions among infants and fetuses whose mothers consumed vitamins. Data were from California deliveries of infants and fetuses in the period 1987-1989. Maternal telephone interviews were completed for 207 (87%) conotruncal cases, 489 (85%) orofacial cleft cases, 265 (84%) neural tube defect cases, 165 (82%) limb anomaly cases, and 734 controls (nonmalformed infants). Results: Considering women who reported vitamin use and no periconceptional fever as referents, for each anomaly group we observed elevated effects for the combinations of maternal vitamin use/fever, no use/no fever and no use/fever. Effects were most elevated for the combination of no vitamin use and fever. Adjusted for maternal body mass index, education and race/ethnicity, odds ratios were 2.4 (95% confidence inter-val = 1.0-5.9) for conotruncal defects, 2.9 (1.4-5.8) for cleft lip with or without cleft palate, 1.3 (0.4-3.9) for cleft palate, 3.1 (1.4-6.8) for neural tube defects, and 2.6 (1.0-6.4) for limb-deficiency defects. These interactions were further investigated relative to maternal use of fever-reducing medications. Effects tended to be highest among those women who did not use vitamins, had fevers, and did not use fever-reducing medications. Compared with women who used vitamins and did not smoke periconceptionally, anomaly risks tended to be highest among women who did not use vitamins and smoked. No specific pattern emerged involving alcohol intake. Conclusions: These data further suggest that the underlying mechanisms of folic acid associated with congenital anomalies may be complex. abstract_id: PUBMED:24313669 Self-reported maternal cigarette smoke exposure during the periconceptional period and the risk for omphalocoele. Background: We investigated whether maternal exposure to cigarette smoke was associated with omphalocoele and whether periconceptional folic acid modified the association. Methods: : We analysed data from the National Birth Defects Prevention Study on omphalocoele case (n = 301) and control (n = 8135) mothers for infants born from 1997 through 2007. Mothers who reported active smoking or exposure to second-hand smoke during the periconceptional period (1 month before conception to 3 months after) were considered exposed. Those who reported use of folic acid supplements during the same period were considered supplement users. Odds ratios and 95% confidence intervals were estimated using multivariable logistic regression adjusted for alcohol use, preconception body mass index, and race/ethnicity. Results: One hundred fifteen (38.2%) case and 2592 (31.9%) control mothers reported exposure to cigarette smoke during the periconceptional period. Adjusted odds ratios [95% confidence intervals] were 1.19 [0.94, 1.53] for any smoke exposure, 0.87 [0.54, 1.40] for active smoking, 1.38 [1.00, 1.90] for second-hand smoke exposure, and 1.16 [0.80, 1.67] for both exposures combined. No dose-response relationship was observed. Folic acid-containing supplements did not reduce the risk for omphalocoele among women with active or second-hand smoke exposure. Conclusions: Self-reported active maternal smoking, with or without exposure to second-hand smoke, during the periconceptional period was not associated with omphalocoele. In contrast, there was a possible association with periconceptional exposure to second-hand smoke. abstract_id: PUBMED:35437856 Maternal periconceptional alcohol consumption and gastroschisis in the National Birth Defects Prevention Study, 1997-2011. Background: Gastroschisis is particularly prevalent among offspring of young women and has increased over recent decades. Although previous studies suggest that maternal alcohol consumption is associated with increased gastroschisis risk, none have explored whether maternal age modifies that association. Objective: The objective of the study was to evaluate associations between self-reported maternal periconceptional alcohol consumption (1 month prior through the third month after conception) and risk of gastroschisis among offspring, by maternal age. Methods: We used data from the National Birth Defects Prevention Study (NBDPS), a multi-site population-based case-control study. The analysis included 1450 gastroschisis cases and 11,829 unaffected liveborn controls delivered during 1997-2011 in ten US states. We estimated adjusted odds ratios (aOR) and 95% confidence intervals (CI) for the individual and joint effects of alcohol consumption and young maternal age at delivery (&lt;25 years vs ≥25 years) on gastroschisis risk. We estimated the relative excess risk due to interaction (RERI) to quantify additive interaction. Results: Periconceptional alcohol consumption was common regardless of maternal age (women &lt;25 years: cases 38.8%, controls 29.3%; women ≥25: cases 43.5%, controls 39.5%). Compared with women ≥25 years who did not consume alcohol, we observed increased risk of gastroschisis among women &lt;25 years, with higher estimates among those who consumed alcohol (women &lt;25 years who did not consume alcohol. aOR 5.90, 95% CI 4.89, 7.11; women &lt;25 years who did consume alcohol: aOR 8.21, 95% CI 6.69, 10.07). Alcohol consumption among women ≥25 years was not associated with gastroschisis (aOR 1.12, 95% CI 0.88, 1.42). This suggests super-additive interaction between alcohol consumption and maternal age (RERI -2.19, 95% CI 1.02, 3.36). Conclusions: Periconceptional alcohol consumption may disproportionately increase risk of gastroschisis among young mothers. Our findings support public health recommendations to abstain from alcohol consumption during pregnancy. Answer: The study presented in abstract PUBMED:12410002 investigated the interactions between maternal periconceptional vitamin intake, specifically folic acid, and other maternal exposures such as fever, cigarette smoking, or alcohol use, in relation to the risk of several congenital anomalies. The data analyzed were from California deliveries of infants and fetuses during 1987-1989, with interviews conducted for mothers of infants with conotruncal defects, orofacial clefts, neural tube defects, limb anomalies, and controls. The study found that the effects of congenital anomalies were elevated for combinations of maternal vitamin use with fever, no vitamin use with no fever, and no vitamin use with fever, with the highest effects observed for the combination of no vitamin use and fever. Adjusted odds ratios indicated increased risks for conotruncal defects, cleft lip with or without cleft palate, neural tube defects, and limb-deficiency defects among mothers who did not use vitamins and had fevers, especially when not using fever-reducing medications. Additionally, the study observed that risks of anomalies tended to be highest among women who did not use vitamins and smoked during the periconceptional period. However, no specific pattern emerged involving alcohol intake. The study concluded that the mechanisms by which folic acid may contribute to reductions in the risk of several congenital anomalies may be complex, and that the interactions between maternal folic acid use and other maternal exposures warrant further investigation. This study suggests that while periconceptional vitamin intake, particularly folic acid, is generally associated with a reduced risk of certain congenital anomalies, the presence of other risk factors such as fever and cigarette smoking during the periconceptional period may modify this protective effect. It highlights the importance of considering multiple factors when assessing the risk of congenital anomalies and the potential benefits of maternal vitamin supplementation.
Instruction: Undetectable Thyroglobulin Levels in Poorly Differentiated Thyroid Carcinoma Patients Free of Macroscopic Disease After Initial Treatment: Are They Useful? Abstracts: abstract_id: PUBMED:25893415 Undetectable Thyroglobulin Levels in Poorly Differentiated Thyroid Carcinoma Patients Free of Macroscopic Disease After Initial Treatment: Are They Useful? Background: Predictive role of undetectable thyroglobulin (Tg) in patients with poorly differentiated thyroid carcinoma (PDTC) is unclear. Our goal was to report on Tg levels following total thyroidectomy and adjuvant RAI in PDTC patients and to correlate Tg levels with recurrence. Methods: Forty patients with PDTC with no distant metastases at presentation (M0) and managed by total thyroidectomy and adjuvant RAI were identified from a database of 91 PDTC patients. Of these, 31 patients had Tg values recorded and formed the basis of our analysis. A nonstimulated Tg level &lt;1 ng/ml was used as a cutoff point for undetectable Tg levels. Association of patient and tumor characteristics with Tg levels was examined by χ (2) test. Recurrence-free survival (RFS) stratified by postop Tg level was calculated by Kaplan-Meier method and compared by log-rank test. Results: Twenty patients had undetectable Tg (&lt;1 ng/ml) and 11 had detectable Tg (≥1 ng/ml; range 2-129 ng/ml) following surgery. After adjuvant RAI, 24 patients had undetectable Tg (&lt;1 ng/ml) and 7 had detectable Tg (≥1 ng/ml; range 1-57 ng/ml). Patients with undetectable Tg were less likely to have pathologically positive margins compared to those with detectable Tg (33 vs. 72 % respectively; p = 0.03). Patients with undetectable Tg levels had better 5-year regional control and distant control than patients with detectable Tg level (5-year regional recurrence-free survival 96 vs. 69 %; p = 0.03; 5-year distant recurrence-free survival 96 vs. 46 %, p = 0.11). Conclusion: Postoperative thyroglobulin levels in subset of patients with PDTC appear to have predictive value for recurrence. Patients with undetectable Tg have a low rate of recurrence. abstract_id: PUBMED:30574462 Poorly Differentiated Thyroid Carcinoma Patients with Detectable Thyroglobulin Levels after Initial Treatment Show an Increase in Mortality and Disease Recurrence. Purpose: The role of thyroglobulin (Tg) in predicting death and recurrence risk in patients with poorly differentiated thyroid carcinoma (PDTC) is not well established. We aimed to analyze Tg levels following total thyroidectomy and adjuvant radioiodine treatment (RAI) in PDTC patients and correlate Tg levels with survival and recurrence. Methods: A retrospective analysis was conducted on 101 patients with PDTC who were treated between 1986 and 2010. Among them, 38 had no distant metastases at presentation, were managed by total thyroidectomy and adjuvant RAI, and had negative anti-Tg antibodies. An unstimulated Tg level &lt; 1 ng/mL was used as a cut-off point for undetectable Tg levels. Association of patient and tumor characteristics with Tg levels was examined by χ2 test. Overall survival, disease-specific survival (DSS), and recurrence-free survival (RFS), stratified by Tg levels, were calculated by the Kaplan-Meier method and compared by the log-rank test. Results: Compared to patients with undetectable Tg, cases with detectable Tg had a lower probability of achieving free surgical margins (21.7 vs. 46.7%; p = 0.04), higher node status (73.3 vs. 21.8%; p = 0.005), decreased 5-year DSS (65 vs. 100%; p = 0.009), and worse 5-year RFS (32 vs. 84%, p = 0.010), with a significant number of patients having a recurrence in the first year (50 vs. 12.5%; p = 0.021). Patients with detectable Tg levels also showed worse locoregional (55.6 vs. 90.9%; p = 0.014) and distant control (5-year distant control of 46.9 vs. 91%; p = 0.017). Conclusions: Our results suggest that detectable Tg levels after surgery and RAI in a subset of PDTC patients appear to predict a higher rate of death and recurrence. abstract_id: PUBMED:21183629 Persistent disease and recurrence in differentiated thyroid cancer patients with undetectable postoperative stimulated thyroglobulin level. (131)I is given in differentiated thyroid cancer (DTC) without taking into account thyroglobulin (Tg) levels at the time of ablation, whereas 6-18 months later it is a major criterion for cure. This single-center retrospective study assessed the frequency and risk factors for persistent disease on postablation whole body scan (WBS) and postoperative neck ultrasonography (n-US) and for recurrent disease during the subsequent follow-up, in patients with DTC and undetectable TSH-stimulated Tg level (TSH-Tg) in the absence of Tg antibodies (TgAb) at the time of ablation. Among 1031 patients ablated, 242 (23%) consecutive patients were included. Persistent disease occurred in eight cases (3%) (seven abnormal WBS and one abnormal n-US), all with initial neck lymph node metastases (N1). N1 was a major risk factor for persistent disease. Among 203 patients with normal WBS and a follow-up over 6 months, TSH-Tg 6-18 months after ablation was undetectable in the absence of TgAb in 173 patients, undetectable with TgAb in 1 patient and equal to 1.2 ng/ml in 1 patient. n-US was normal in 152 patients and falsely positive in 3 patients. After a mean follow-up of 4 years, recurrence occurred in two cases (1%), both with aggressive histological variants. The only risk factor for recurrence was an aggressive histological variant (P = 0.03). In conclusion, undetectable postoperative TSH-Tg in the absence of TgAb at the time of ablation is frequent. In these patients, repeating TSH-Tg 6-18 months after ablation is not useful. (131)I ablation could be avoided in the absence of N1 and aggressive histological variant. abstract_id: PUBMED:20843945 Thyroid cancer recurrence in patients clinically free of disease with undetectable or very low serum thyroglobulin values. Design: This was a retrospective clinical study. Setting: The study was conducted at a university-based tertiary cancer hospital. Patients: One hundred seven patients had initial thyroid cancer surgery and subsequent remnant radioiodine ablation. Patients underwent recombinant human TSH (rhTSH)-mediated diagnostic whole-body scan and rhTSH-stimulated thyroglobulin (Tg) measurement before April 2001 if they had no antithyroglobulin antibodies, were clinically free of disease, and had one or more undetectable (≤0.5 ng/ml) or low (0.6-1 ng/ml) basal Tg measurements on levothyroxine. Patients were stratified according to their rhTSH-Tg responses: group 1, Tg 0.5 ng/ml or less (68 patients); group 2, Tg from 0.6 to 2.0 ng/ml (19 patients); and group 3, Tg greater than 2 ng/ml (20 patients). Main Outcome Measures: Tumor recurrence was measured. Results: In group 1, two of 62 patients (3%) with follow-up recurred. In group 2, 63% converted to group 1, whereas two of 19 (11%) converted to group 3 and then recurred. Sixteen of the initial 20 group 3 patients (80%) recurred, including recurrence rates of 69 and 100% for those with an initial rhTSH-Tg greater than 2.0 ng/ml but 5.0 ng/ml or less, and 4.6 ng/ml or greater, respectively. One group 3 patient died of distant metastases. rhTSH-Tg more accurately predicted tumor recurrence than basal Tg. An rhTSH-Tg threshold of 2.5 ng/ml or greater optimally predicted future recurrence with sensitivity, specificity, and negative and positive predictive values of 80, 97, 95, and 84%, respectively. Conclusions: The prevalence of postablation thyroid cancer recurrence is predicted by the rhTSH-Tg response with an optimal Tg threshold of 2.5 ng/ml. Still, recurrent disease occurs in some patients with an initial rhTSH-Tg of 0.5 ng/ml or less. abstract_id: PUBMED:15640901 Usefulness of radioiodine scanning in patients with moderate/high risk differentiated thyroid carcinoma in whom thyroglobulin after thyroxin withdrawal is undetectable after initial treatment We selected 92 patients without antithyroglobulin antibodies (TgAb), in whom thyroglobulin (Tg) after L-thyroxin withdrawal was undetectable (&lt;1 ng/ml) 6-12 months after initial therapy and who were considered to be at moderate / high risk for recurrence by this criteria: age &gt;45 years; tumor size &gt;1.5 cm; and lymph nodes metastases in 43 (46.7%), local invasion in 26 (28.2%) or distant metastases in 23 (25%). Control whole-body scanning was negative in 78.2% of the cases and showed cervical uptake in the others. Cases presenting thyroid bed uptake in the absence of tumor recurrence did not receive radioiodine and Tg remained undetectable one year after the initial evaluation in all. Cervical uptake was not observed in 4/13 cases on repeated scan. In contrast, even in the absence of uptake and with undetectable Tg, 7 patients with recurrence confirmed by ultrasound (US) received surgical treatment. US showed 92.8% sensitivity for the detection of local-regional disease. The present study suggests that even moderate/high-risk patients without TgAb and with undetectable Tg levels (off T4) do not require radioiodine scanning after initial treatment and can be evaluated by cervical US. abstract_id: PUBMED:17952702 Persistent disease in patients with papillary thyroid carcinoma and lymph node metastases after surgery and iodine-131 ablation. Aim: The aim of this study was to assess the efficacy of treatment of patients with papillary thyroid carcinoma (PTC) and lymph node metastases at the time of diagnosis and its influence on the course of the disease. Methods: It is a retrospective review of all 51 patients with PTC and histologically proven lymph node metastases treated with I-131 ablation in our center between January 1990 and January 2003. Patients were considered disease-free if during follow-up thyroglobulin levels were undetectable and scintigraphy with 370 MBq (131)I was negative during thyroid-stimulating hormone stimulation. Staging of patients was in accordance with the 5th edition of the TNM system. Results: After a median follow-up of 84 months, 33 (65%) patients were never free of detectable disease; and 3 of these patients had died of the PTC. In total, 22 patients still showed persistent activity in the neck outside the thyroid bed, which was suspect to be cervical lymph node metastasis on postablation scintigraphy; it was not related to the initial clinical presentation (lymph node metastasis or a thyroid nodule without suspicion of metastatic disease) or to the extent of surgery. Altogether, 34 patients required additional treatment. Patients presenting with clinically overt lymph node metastasis showed a significantly (p = 0.022) lower rate of becoming disease-free than those in whom microscopic lymph node involvement was unexpectedly found upon pathologic examination. There was no significant association of the eventual outcome with the extent of surgical treatment, TNM staging, or age. Conclusions: Patients with lymph node metastasis are considerably less likely to become disease-free. If the initial treatment does not result in a disease-free status, chances are low that additional treatment will succeed in achieving it. abstract_id: PUBMED:11932271 Diagnostic 131-iodine whole-body scan may be avoided in thyroid cancer patients who have undetectable stimulated serum Tg levels after initial treatment. The follow-up of differentiated thyroid cancer after total thyroidectomy and thyroid ablation is commonly based on serum Tg determination and 131-iodine ((131)I) diagnostic whole-body scan (WBS) performed in the hypothyroid state, 6-12 months after thyroid ablation. Based on the greater sensitivity of Tg measurement, with respect to WBS, the diagnostic yield of diagnostic WBS has been questioned in patients who are off L-T(4) therapy and have undetectable Tg levels. The aim of the present retrospective study was to evaluate the diagnostic relevance of (131)I WBS performed after thyroid remnant ablation, in patients with undetectable serum Tg and off thyroid hormone therapy. The study included 315 of 662 consecutive patients (47.6%) treated in our department between 1980 and 1990, who, at the first control WBS after thyroid ablation, had undetectable serum Tg levels in the hypothyroid state. There were 54 men (17%) and 261 women (83%), with a mean age of 40.9 +/- 13.1 yr (range, 12-76), followed for a mean of 12 +/- 2.8 (range, 9-19) yr. The control WBS was negative in 225 (71.4%) patients and positive for persistent areas of thyroid bed uptake, frequently of very low significance, in 90 (28.6%). No local or distant metastases were discovered. At the last follow-up visit (1999-2000), 281 (89.2%) patients showed complete remission, with undetectable serum Tg off L-T(4) and negative WBS. Persistent thyroid bed uptake, with undetectable levels of Tg, was observed in 29 patients (9.2%) studied during L-T(4) withdrawal. Only 2 patients (0.6%) experienced local recurrence (lymph-node metastases) during their follow-up. In conclusion, our data suggest that the presence of undetectable levels of serum Tg off L-T(4) at the time of the first control WBS after initial treatment, is highly predictive of complete and persistent remission. With the exception of detecting persistent thyroid bed uptake in a minority of cases, the control WBS has never given information that could influence the following therapeutic strategy. On this basis, we propose that the diagnostic (131)I WBS may be avoided in patients with undetectable levels of Tg off L-T(4). These patients may be monitored with clinical examination, neck ultrasound, and serum Tg measurements on L-T(4). abstract_id: PUBMED:32886943 Thyroglobulin Antibodies as a Prognostic Factor in Papillary Thyroid Carcinoma Patients with Indeterminate Response After Initial Therapy. The clinical outcome of papillary thyroid carcinoma (PTC) patients with an indeterminate response after initial therapy is reported to be intermediate, between incomplete and excellent responses. This study evaluated the outcomes of PTC patients with indeterminate response after initial therapy. It was further determined whether the indeterminate findings predicted outcomes more precisely. Patients were further classified into 3 groups based on risk of structural persistence/recurrence: Tg group: detectable thyroglobulin, negative antithyroglobulin antibody, regardless nonspecific imaging findings; TgAb group: positive antithyroglobulin antibody, regardless thyroglobulin levels and nonspecific imaging findings, and Image group: nonspecific findings on neck ultrasonography or faint uptake in the thyroid bed on whole-body scan, undetectable thyroglobulin and negative antithyroglobulin antibody. Sixty-six patients aged 44.1±12.7 years were studied, of whom 58 (87.9%) were females. All patients underwent total thyroidectomy, and 52 patients (78.8%) received radioiodine. After 5.7 years (P25-75 2.6-9.75 years) of follow-up, most patients (89.4%) were reclassified as having an excellent response or remained in the indeterminate response to therapy. Structural recurrence/persistence disease was detected in 7 (10.6%) patients. The persistence/recurrence rate in groups were as follow: Tg, 2.63%; TgAb, 31.25%; Image, 8.3% (p=0.007). The 10-years disease-free survival rate in the TgAb group was significantly reduced (p=0.022). Our results suggest that patients with PTC and indeterminate response due to positive serum antithyroglobulin antibody have more risk of development of structural disease. These findings suggest a more individualized follow-up strategy for patients with an indeterminate response. abstract_id: PUBMED:31273418 Poorly differentiated thyroid carcinoma : An underdiagnosed entity. Poorly differentiated thyroid carcinomas (PDTCs) are a rare subtype of thyroid carcinomas that are biologically situated between well-differentiated papillary/follicular thyroid carcinomas and anaplastic thyroid carcinomas (ATCs).The diagnosis of conventional as well as oncocytic poorly differentiated thyroid carcinoma is difficult and often missed in daily routine. The current WHO criteria to allow the diagnosis of PDTCs are based on the results of a consensus meeting held in Turin in 2006. Even a minor poorly differentiated component of only 10%of a given carcinoma significantly affects patient prognosis and the oncocytic subtype may even have a worse outcome. Immunohistochemistry is not much help and is mostly used to exclude a medullary thyroid carcinoma with calcitonin and to establish a follicular cell of origin via thyroglobulin staining.Due to the concept of stepwise dedifferentiation, there is a vast overlap of different molecular alterations like BRAF, RAS, CTNNB1, TP53 and others between different thyroid carcinoma subtypes. A distinctive molecular tumor profile is therefore currently not available.PDTCs have a unique miRNA signature, which separates them from other thyroid carcinomas. The average relapse free survival is less than one year and about 50% of patients die of the disease. Modern tyrosine kinase inhibitors offer in conjunction with powerful molecular diagnostic new chances in these difficult to treat carcinomas. abstract_id: PUBMED:27020734 Significance of Low Levels of Thyroglobulin Autoantibodies Associated with Undetectable Thyroglobulin After Thyroidectomy for Differentiated Thyroid Carcinoma. Background: The management of patients with differentiated thyroid carcinoma (DTC) showing low levels of serum thyroglobulin autoantibodies (TgAb) and undetectable Tg after thyroidectomy is unsettled. This study sought to elucidate the clinical significance of low levels of TgAb and to evaluate their interference with Tg measurement in vitro. Methods: Tg and TgAb levels were correlated with the post-thyroidectomy staging of 177 consecutive DTC patients undergoing (131)I ablation after total thyroidectomy (clinical study). Tg was measured by an immunometric assay (functional sensitivity: 0.1 ng/mL), and TgAb were evaluated by six assays (functional sensitivities: 1.2-96 IU/mL; positive cutoffs: 4-150 IU/mL). The changes in Tg concentration (Tg recovery) of diluted specimens from DTC patients were also measured after incubation with 67 sera from DTC patients with undetectable Tg and low levels of TgAb (in vitro study). DTC sera containing Tg were diluted serially (from 330 to 0.1 ng/mL) and incubated with TgAb samples; Tg was then measured. Results: In the clinical study: all patients had residual thyroid tissue, and 10 had metastatic disease. Depending on the TgAb assay, median Tg values were 7.0-10.9, 0.0-5.3, and 0.0-0.0 ng/mL in patients with undetectable, borderline (between functional sensitivities and positive cutoffs), and positive TgAb, respectively (p &lt; 0.001). An undetectable Tg value was associated with borderline levels of TgAb in five assays. Only two patients with metastatic disease had undetectable Tg; both were TgAb positive by three or more assays. Conversely, no patient with undetectable Tg and undetectable or borderline TgAb by sensitive assays had metastatic disease. In the in vitro study, TgAb interfered significantly with Tg recovery (p &lt; 0.001), but low levels of TgAb did not abolish Tg recovery. Conclusions: While low levels of TgAb do not preclude Tg measurement in vitro, they can be associated with an undetectable Tg in DTC patients with residual thyroid tissue after thyroidectomy. However, the finding of low levels of TgAb by sensitive assays associated with an undetectable Tg rules out metastatic disease. Answer: Undetectable thyroglobulin (Tg) levels in patients with poorly differentiated thyroid carcinoma (PDTC) who are free of macroscopic disease after initial treatment appear to be useful as a prognostic indicator. Studies have shown that patients with PDTC who have undetectable Tg levels after total thyroidectomy and adjuvant radioiodine treatment (RAI) have a better prognosis compared to those with detectable Tg levels. Specifically, patients with undetectable Tg levels were less likely to have pathologically positive margins and had better 5-year regional control and distant control than patients with detectable Tg levels (PUBMED:25893415). Additionally, detectable Tg levels after surgery and RAI in a subset of PDTC patients were associated with a higher rate of death and recurrence (PUBMED:30574462). Furthermore, in differentiated thyroid cancer (DTC) patients with undetectable postoperative stimulated Tg levels, the frequency of persistent disease on postablation whole body scan (WBS) and postoperative neck ultrasonography (n-US) was low, and the risk of recurrence during subsequent follow-up was also low, especially in the absence of neck lymph node metastases (N1) and aggressive histological variants (PUBMED:21183629). This suggests that undetectable Tg levels can be a reassuring sign of a good prognosis in these patients. However, it is important to note that while undetectable Tg levels are generally associated with a lower risk of recurrence, they do not guarantee the absence of disease. There have been instances of thyroid cancer recurrence in patients who were clinically free of disease with undetectable or very low serum Tg values (PUBMED:20843945). Therefore, while undetectable Tg levels are useful, they should not be the sole factor in determining a patient's prognosis or follow-up strategy. Regular monitoring and additional diagnostic methods, such as neck ultrasonography, may still be necessary to ensure early detection of any potential recurrence (PUBMED:15640901, PUBMED:17952702, PUBMED:11932271). In summary, undetectable Tg levels in PDTC patients after initial treatment are useful as a prognostic indicator and are associated with a lower likelihood of recurrence and better disease control. However, they are not infallible, and continued vigilance in patient monitoring is warranted.
Instruction: Reverse shoulder arthroplasty glenoid fixation: is there a benefit in using four instead of two screws? Abstracts: abstract_id: PUBMED:23352547 Reverse shoulder arthroplasty glenoid fixation: is there a benefit in using four instead of two screws? Background: To allow osseous integration to occur and thus provide long-term stability, initial glenoid baseplate fixation must be sufficiently rigid. A major contributing factor to initial rigid fixation is baseplate screw fixation. Current baseplate designs use a 4-screw fixation construct. However, recent literature suggests adequate fixation can be achieved with fewer than 4 screws. The purpose of the present study was to determine whether a 4-screw construct provides more baseplate stability than a 2-screw construct. Methods: A flat-backed glenoid baseplate with 4 screw hole options was implanted into 6 matched pairs of cadaver scapulas using standard surgical technique. Within each pair, 2 screws or 4 screws were implanted in a randomized fashion. A glenosphere was attached allowing cyclic loading in an inferior-to-superior direction and in an anterior-to-posterior direction. Baseplate motion was measured using 4 linear voltage displacement transducers evenly spaced around the glenosphere. Results: There was no statistical difference in the average peak central displacements between fixation with 2 or 4 screws (P = .338). Statistical increases in average peak central displacement with increasing load (P &lt; .001) and with repetitive loading (P &lt; .002) were found. Conclusion: This study demonstrates no statistical difference in baseplate motion between 2-screw and 4-screw constructs. Therefore, using fewer screws could potentially lead to a reduction in operative time, cost, and risk, with no significant negative effect on overall implant baseplate motion. abstract_id: PUBMED:26621211 Primary stability of inferior tilt fixation of the glenoid component in reverse total shoulder arthroplasty: A finite element study. Glenoid component fixation with inferior tilt has been suggested as one of the surgical methods to decrease scapular notching and improve stability, but its clinically beneficial effect remains a concern. We evaluated the influence of inferior tilt fixation of the glenoid component on primary stability in reverse total shoulder arthroplasty by finite element analysis. Finite element models were constructed from cadaveric scapulae of females over the age of 60 years and glenoid components from reverse total shoulder arthroplasty. The relative micromotion at the bone-glenoid component interface, distribution of bone stress under the glenoid component and around the screws, contact area between the bone and screws, and cut surface area of the cancellous bone exposed after glenoid reaming were analyzed and compared between a neutral and 10° inferior tilt fixation of the glenoid component. The 10° inferior tilt fixation demonstrated greater relative micromotion and higher bone stress than the neutral tilt fixation. Eccentric reaming, which is done to produce the inferior tilt fixation of the glenoid component, increased glenoid cancellous bone exposure and decreased bone-screws contact area. Inferior tilt fixation of the glenoid component may adversely affect primary stability and longevity after reverse total shoulder arthroplasty. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:1061-1068, 2016. abstract_id: PUBMED:34497958 Reverse Shoulder Arthroplasty for B2 Glenoid Deformity. In shoulder osteoarthritis, the B2 glenoid presents challenges in treatment because of the excessive retroversion and posterior deficiency of the glenoid. Correction of retroversion and maintenance of a stable joint line with well-fixed implants are essential for the successful treatment of this deformity with arthroplasty. Reverse shoulder arthroplasty offers several key advantages in achieving this goal, including favorable biomechanics, a well-fixed baseplate, and proven success in other applications. Techniques such as eccentric reaming, bone grafting, and baseplate augmentation allow surgeons to tailor treatment to the patient's altered anatomy. Eccentric reaming is favored for correction of small defects or mild version anomalies. Current trends favor bone grafting for larger corrections, though augmented components have shown early promise with the potential for expanded use. With overall promising results reported in the literature, reverse shoulder arthroplasty is a useful tool for treating older patients with B2 glenoid deformities. abstract_id: PUBMED:24328581 Addressing glenoid erosion in reverse total shoulder arthroplasty. Severe glenoid wear is technically problematic, has a higher complication rate, and inferior results in the setting of shoulder arthroplasty. This paper introduces four basic strategies for treating glenoid erosion with a reverse shoulder arthroplasty which include; 1. eccentric reaming, 2. bone grafting of glenoid, 3. reaming and bone grafting, and 4. using augmented baseplates. The benefits and shortcomings of each of these techniques are discussed. The reverse shoulder arthroplasty has many advantages over anatomic shoulder arthroplasty when dealing with severe glenoid defects. Augmented baseplates are new and allow the surgeon to treat various different glenoid defects with preservation of glenoid subchondral bone. abstract_id: PUBMED:28372964 Glenoid bone grafting in primary reverse total shoulder arthroplasty. Background: Severe glenoid bone loss remains a challenge in patients requiring shoulder arthroplasty and may necessitate glenoid bone grafting. The purpose of this study was to determine results, complications, and rates of failure of glenoid bone grafting in primary reverse shoulder arthroplasty. Methods: Forty-one shoulders that underwent primary reverse arthroplasty between 2006 and 2013 with a minimum follow-up of 2 years (mean, 2.8 years; range, 2-6 years) were reviewed. Thirty-four (83%) received corticocancellous grafts and 7 (17%) structural grafts. Results: Active range of motion and pain levels were significantly improved (P &lt; .001), with mean American Shoulder and Elbow Surgeons score of 77, Simple Shoulder Test score of 9, and patient satisfaction of 93% at the most recent follow-up. Preoperative severe glenoid erosion and increasing body mass index were significantly associated with worse American Shoulder and Elbow Surgeons scores (P = .04). On radiographic evaluation, 7 patients (18%) had grade 1 or grade 2 glenoid lucency. Glenoid bone graft incorporation was observed in 31 patients (78%). Twelve patients (30%) suffered from grade 1 or grade 2 scapular notching. All of the patients with structural grafts showed graft incorporation and no signs of glenoid lucency. Conclusion: Although glenoid lucency, glenoid graft resorption, and scapular notching were present at short-term to midterm follow-up, none of the patients needed revision surgery. Primary reverse shoulder arthroplasty with glenoid reconstruction using bone graft relieved pain and restored shoulder function and stability. abstract_id: PUBMED:29505820 Accuracy of patient-specific guided implantation of the glenoid component in reversed shoulder arthroplasty. Background: The aim of this study was to assess the accuracy of patient-specific guided glenoid component implantation in reverse shoulder arthroplasty. Materials And Methods: 32 reverse shoulder arthroplasties were done using preoperative 3D planning and 4 patient-specific guides to prepare the glenoid and position the glenoid component. Baseplate version, inclination and entry point as well as angulation of the screws were compared to the preoperative plan measured on CT by independent observers. Results: The mean deviation in baseplate version from the preoperative plan was 4.4°+3.1° (range, 0.3°-13.7°), in baseplate inclination 5.0°+4.2° (range, 0.1° to 14.5°) and in baseplate entry point 2.4mm+1.4mm (range, 0.4° to 6.3°). The average screw superior-inferior angulation deviation for the superior screw was 2.8°+2.6° (range, 0.0°-10.1°) and 2.8+2.6° in the antero-posterior plane (range, 0.1°-11.6°). For the inferior screw the superior-inferior angle deviation was 5.3°+3.8° (range, 0.1°-15.2°); the antero-posterior angle deviation was 4.1°+3.1° (range, 0.0°-9.8°). Conclusions: Patient-specific instrumentation (PSI) for the glenoid component in reverse shoulder arthroplasty allows the shoulder surgeon to accurately execute the preoperative 3D plan. Level Of Evidence: Level 3. abstract_id: PUBMED:30509606 Factors contributing to glenoid baseplate micromotion in reverse shoulder arthroplasty: a biomechanical study. Background: Reverse shoulder arthroplasty (RSA) is typically performed in patients with cuff tear arthropathy. A common type of RSA baseplate has a central peg and 4 peripheral screws inserting into the glenoid surface. Baseplate failure is a significant postoperative complication that reduces prosthetic longevity and usually requires revision surgery. This study evaluated the contribution of mechanical factors on initial baseplate fixation. Materials And Methods: This study simulated glenoid baseplate loading in a RSA. A half-fractional factorial design was used to test 5 factors: bone density (160 or 400 kg/m3), screw length (18 or 36 mm), number of screws (2 or 4), screw angle (neutral or diverging), and central peg length (13.5 or 23.5 mm). Trials were cyclically loaded at a 60° angle with 500 N for 1000 cycles. Micromotion at 4 peripheral screw positions was analyzed using a multifactorial analysis of variance (P &lt; .05). Results: We found an increase in micromotion with 3 scenarios: (1) lower bone density at all screw positions; (2) shorter central peg length at the inferior, superior and anterior screws; and (3) shorter screw length at the inferior and anterior screws. There were interactions between bone density and screw length at the inferior and anterior screws and between bone density and central peg length at the inferior, superior, and anterior screws. Discussion: Greater bone density, a longer central peg, and longer screws provide improved initial glenoid fixation in an RSA, whereas the number of screws, and the angle of screw insertion do not. These findings may help minimize baseplate failure and revision operations. abstract_id: PUBMED:25435292 Reverse shoulder arthroplasty due to glenoid bone defects. Objective: Reverse shoulder arthroplasty is becoming a useful tool for many diseases of the shoulder. Any severe glenoid bone defect may affect the fixing of the glenoid component. The aim of this paper is to evaluate the medium-term outcomes of reverse shoulder arthroplasty associated with a glenoplasty. Materials And Methods: A retrospective study was conducted on 5 patients from our hospital, selected due to glenoid defects of different etiology. All of them where treated with reverse shoulder arthroplasty associated with glenoplasty with bone graft. Results: The minimum follow-up was one year (mean 30.4 months). All grafts were radiologically integrated, with no signs of resorption or necrosis being observed. At 12 months, the Constant score was 66.75 and the mean EVA score was 1. Discussion: Glenoplasty surgery is technically demanding for restoring original bone size in patients with glenoid structural defects, enabling a reverse shoulder arthroplasty to be implanted. Thus improving both the function and clinical outcomes in selected patients with glenohumeral pathology and providing them with a solution. abstract_id: PUBMED:23531710 Intraoperative CT navigation for glenoid component fixation in reverse shoulder arthroplasty. CT navigation has been shown to improve component positioning in total shoulder arthroplasty. The technique can be useful in achieving strong initial fixation of the metal backed glenoid in reverse shoulder arthroplasty. We report a 61 years male patient who underwent reverse shoulder arthroplasty for rotator cuff arthropathy. CT navigation was used intraoperatively to identify best possible glenoid bone and to maximize the depth of the fixation screws that anchor the metaglene portion of the metal backed glenoid component. Satisfactory positioning of screws and component was achieved without any perforation or iatrogenic fracture in the scapula. CT navigation can help in maximizing the purchase of the fixation screws that dictate the initial stability of the glenoid component in reverse shoulder arthroplasty. The technique can be extended to improve glenoid component position [version and tilt] with the availability of appropriate software. abstract_id: PUBMED:28762021 Retroverted glenoid reconstruction using glenoid plate in reverse shoulder arthroplasty. Purpose: The objective of this study is to evaluate the clinical and radiological results of reverse shoulder arthroplasty (RSA) with glenoid plating in a consecutive series of patients affected by cuff tear glenohumeral arthropathy with glenoid retroversion &gt;15°. We hypothesized that autologous humeral head graft may be better stabilized between the baseplate and the native glenoid surface with the use of a glenoid plate. Methods: Twenty consecutive patients affected by cuff tear arthropathy with glenoid retroversion &gt;15° (B2 or C according to Walch classification) were enrolled in this study. To reconstruct the glenoid, a dedicated plate was used in addition to the standard reverse shoulder baseplate and the glenosphere. Clinical and radiological assessment was performed using constant score (CS), subjective shoulder value (SSV), X-rays and CT scan at 6, 12 and 24 months of follow-up. Healing and resorption of the graft and detection of the glenoid version were assessed. Results: Sixteen patients were available for final follow-up. The mean preoperative retroversion of the glenoid was 24°, while the post-op was 2° (p = 0.002). At 24 months of follow-up, mean CS and SSV were 61 and 70. Respect to preoperative scores, the results were statistically significant (p &lt; 0.001). The last CT scan revealed: a complete healing of the graft in 100% of cases; graft resorption less than 25% in two patients (12.5%); glenoid retroversion of 4°. A negative statistically significant correlation was found between final CS and preoperative glenoid retroversion (0.039). Conclusions: The present study reports the favorable outcomes of retroverted glenoid reconstruction with glenoid plates in RSA, an alternative method to address severe glenoid deficiency. Level Of Evidence: Level IV, case series with no comparison group. Answer: The question of whether using four screws instead of two for glenoid fixation in reverse shoulder arthroplasty (RSA) provides a benefit has been investigated in the literature. According to a study by PUBMED:23352547, there was no statistical difference in baseplate motion between 2-screw and 4-screw constructs. This suggests that using fewer screws could potentially lead to a reduction in operative time, cost, and risk, without a significant negative effect on overall implant baseplate motion. Therefore, based on this study, there may not be a clear benefit in using four screws over two for glenoid fixation in RSA in terms of baseplate stability. However, it is important to consider other factors such as bone quality, patient-specific anatomy, and surgical technique, which may influence the optimal number of screws for each individual case.
Instruction: Does valganciclovir hydrochloride (valcyte) provide effective prophylaxis against cytomegalovirus infection in liver transplant recipients? Abstracts: abstract_id: PUBMED:29020220 Role of Secondary Prophylaxis With Valganciclovir in the Prevention of Recurrent Cytomegalovirus Disease in Solid Organ Transplant Recipients. Background: Cytomegalovirus (CMV) is a major contributor to morbidity and mortality in solid organ transplant recipients (SOTRs). Ganciclovir and valganciclovir are highly effective antiviral drugs with a well-established role in primary prophylaxis and treatment of CMV disease. Our objective in this study was to examine the effect of secondary prophylaxis (SP) on the risk of relapse in SOTRs following an episode of CMV disease. Methods: We performed a retrospective cohort study of SOTRs from 1995 to 2015 and used propensity score-based inverse probability of treatment weighting methodology to control for confounding by indication. A weighted Cox model was created to determine the effect of SP on time to relapse within 1 year of treatment completion. Results: Fifty-two heart, 34 liver, 79 kidney, and 5 liver-kidney transplant recipients who completed treatment for an episode of CMV infection/disease were included. A total of 120 (70.6%) received SP (median duration, 61 days; range, 5-365) and 39 (23%) relapsed. SP was protective against relapse from 0 to 6 weeks following treatment completion (hazard ratio [HR], 0.19; 95% confidence interval [CI], 0.05-0.69). However, after 6 weeks, risk of relapse did not significantly differ between the 2 groups (HR, 1.18; 95% CI, 0.46-2.99). Conclusions: Our findings demonstrate that use of SP following treatment of CMV disease did not confer long-term protection against relapse, although it did delay relapse while patients were receiving antivirals. This suggests that SP has limited clinical utility in the overall prevention of recurrent CMV disease. abstract_id: PUBMED:16213344 Does valganciclovir hydrochloride (valcyte) provide effective prophylaxis against cytomegalovirus infection in liver transplant recipients? Introduction: Cytomegalovirus (CMV) infection after solid organ transplantation is one of the most common viral infections, causing significant morbidity and mortality if not treated promptly. Ganciclovir has proven to be effective for the prophylaxis and treatment of CMV. However, oral absorption of ganciclovir is poor. Recently, oral administration of valganciclovir hydrochloride (Valcyte) has been observed to display 10-fold better absorption than oral ganciclovir. Valganciclovir has increasingly been used as prophylaxis against CMV after solid organ transplantation. The purpose of this study was to examine the efficacy of valganciclovir prophylaxis therapy after primary liver transplantation. Patients And Methods: Between July 2001 and May 2003, 203 consecutive liver transplant recipients, including 129 men and 74 women of overall mean age 53 +/- 11 years, received valganciclovir (900 mg/d or 450 mg every other day depending on renal function) for 3 to 6 months after primary liver transplantation. All patients were followed up for a minimum of 6 months. Mean follow-up was 19 +/- 5.8 months. CMV DNA in peripheral blood was tested using polymerase chain reaction (PCR) amplification. Symptomatic CMV was stratified according to the CMV immunoglobulin (Ig)G status of the donor and recipient at the time of liver transplantation. Donors and recipients were classified preoperatively into groups according to the presence or absence of CMV as follows: group 1 (n = 73; donor CMV+, recipient CMV+); group 2 (n = 41; donor CMV-, recipient CMV+); group 3 (n = 54; donor CMV+, recipient CMV-; high-risk group); and group 4 (n = 35; donor CMV-, recipient CMV-). Results: Twenty-nine patients (14.3%) developed symptomatic CMV disease at 169 +/- 117 days after liver transplantation: group 1, 16.4% versus group 2, 7.3% versus group 3, 25.9% versus group 4, 0%. Of these patients, 5 also had invasive CMV on liver biopsy, which was performed owing to abnormal liver functions. All 29 patients were treated with intravenous ganciclovir. One patient died owing to disseminated CMV, whereas the remaining 28 patients responded to treatment. Interestingly, 8 patients, including 1 who had invasive CMV hepatitis, developed symptomatic CMV within 90 days of liver transplantation even while on prophylactic valganciclovir. Conclusion: Valganciclovir failed to provide adequate prophylaxis following liver transplantation in our patients. The overall rate of CMV in seropositive donors and/or recipients was 17%, and in the high-risk group was 26%. Further prospective studies with measurement of ganciclovir concentrations are needed to elucidate the reasons for this unexpected failure. abstract_id: PUBMED:21083790 Valganciclovir is an effective prophylaxis for cytomegalovirus disease in liver transplant recipients. Objectives: Cytomegalovirus (CMV) infection is responsible for significant morbidity and mortality among solid organ transplant recipients. Prophylaxis using valganciclovir (VGCV) in orthotopic liver transplant (OLT) recipients is not approved by the Food and Drug Administration and its use is controversial. This study aimed to evaluate the effectiveness of VGCV in CMV prophylaxis in OLT recipients. Methods: We carried out a retrospective, single-centre study including all OLT procedures performed during 2005-2008. Patients with early death (at ≤ 30 days), without CMV serology or prophylaxis, or with follow-up of &lt;1 year were excluded. Results: The overall incidence of CMV disease was 6% (n= 9). The ganciclovir (GCV) and VGCV groups had similar incidences of CMV disease (4.6% vs. 7.0%; P= 0.4) and similar distributions of disease presentation (CMV syndrome vs. tissue-invasive CMV; P= 0.4). Incidences of CMV infection, as well as disease presentation, were similar between the high-risk (CMV D+/R-) and non-high-risk groups (P= 0.16). Although acute cellular rejection occurred more frequently in patients who developed CMV disease (P= 0.005), overall survival in these patients did not differ from that in patients who did not develop CMV infection (P= 0.5). Conclusions: Valganciclovir is an effective antiviral for the prevention of CMV disease in liver transplant recipients. Our data support its use in high-risk OLT patients. abstract_id: PUBMED:25018848 Management of cytomegalovirus infection and disease in liver transplant recipients. Cytomegalovirus (CMV) is one of the most common viral pathogens causing clinical disease in liver transplant recipients, and contributing to substantial morbidity and occasional mortality. CMV causes febrile illness often accompanied by bone marrow suppression, and in some cases, invades tissues including the transplanted liver allograft. In addition, CMV has been significantly associated with an increased predisposition to acute and chronic allograft rejection, accelerated hepatitis C recurrence, and other opportunistic infections, as well as reduced overall patient and allograft survival. To negate the adverse effects of CMV infection on transplant outcome, its prevention, whether through antiviral prophylaxis or preemptive therapy, is an essential component to the management of liver transplant recipients. Two recently updated guidelines have suggested that antiviral prophylaxis or preemptive therapy are similarly effective in preventing CMV disease in modest-risk CMV-seropositive liver transplant recipients, while antiviral prophylaxis is the preferred strategy over preemptive therapy for the prevention of CMV disease in high-risk recipients [CMV-seronegative recipients of liver allografts from CMV-seropositive donors (D+/R-)]. However, antiviral prophylaxis has only delayed the onset of CMV disease in many CMV D+/R- liver transplant recipients, and such occurrence of late-onset CMV disease was significantly associated with increased all-cause and infection-related mortality after liver transplantation. Therefore, a search for better strategies for prevention, such as prolonged duration of antiviral prophylaxis, a hybrid approach (antiviral prophylaxis followed by preemptive therapy), or the use of immunologic measures to guide antiviral prophylaxis has been suggested to prevent late-onset CMV disease. The standard treatment of CMV disease consists of intravenous ganciclovir or oral valganciclovir, and if feasible, reduction in pharmacologic immunosuppression. In one clinical trial, oral valganciclovir was as effective as intravenous ganciclovir for the treatment of mild to moderate CMV disease in solid organ (including liver) transplant recipients. The aim of this article is to provide a state-of-the art review of the epidemiology, diagnosis, prevention, and treatment of CMV infection and disease after liver transplantation. abstract_id: PUBMED:32712663 Cost-effectiveness of Preemptive Therapy Versus Prophylaxis in a Randomized Clinical Trial for the Prevention of Cytomegalovirus Disease in Seronegative Liver Transplant Recipients With Seropositive Donors. Background: The relative costs of preemptive therapy (PET) or prophylaxis for the prevention of cytomegalovirus (CMV) disease in high-risk donor CMV-seropositive/recipient-seronegative (D+/R-) liver transplant recipients have not been assessed in the context of a randomized trial. Methods: A decision tree model was constructed based on the probability of outcomes in a randomized controlled trial that compared valganciclovir as PET or prophylaxis for 100 days in 205 D+/R- liver transplant recipients. Itemized costs for each site were obtained from a federal cost transparency database. Total costs included costs of implementation of the strategy and CMV disease treatment-related costs. Net cost per patient was estimated from the decision tree for each strategy. Results: PET was associated with a 10% lower absolute rate of CMV disease (9% vs 19%). The cost of treating a case of CMV disease in our patients was $88 190. Considering cost of implementation of strategy and treatment-related cost for CMV disease, the net cost-savings per patient associated with PET was $8707 compared to prophylaxis. PET remained cost-effective across a range of assumptions (varying costs of monitoring and treatment, and rates of disease). Conclusions: PET is the dominant CMV prevention strategy in that it was associated with lower rates of CMV disease and lower overall costs compared to prophylaxis in D+/R- liver transplant recipients. Costs were driven primarily by more hospitalizations and higher CMV disease-associated costs due to delayed onset postprophylaxis disease in the prophylaxis group. abstract_id: PUBMED:26318605 Prospective long-term study on primary CMV infections in adult liver transplant (D+/R-) patients after valganciclovir prophylaxis. Background: Cytomegalovirus (CMV) can cause severe infections in transplanted patients. To prevent CMV infection, most liver centers use prophylaxis for CMV-seronegative recipients receiving an organ from a seropositive donor (D+/R-). Valganciclovir is mostly given for 3-6 months after transplantation. However, the patients may develop primary CMV infection after the cessation of prophylaxis and late-onset CMV disease may occur. Objectives: A prospective long-term follow-up of CMV (D+/R-) adult liver transplant recipients after 3 months valganciclovir prophylaxis was investigated. Study Design: Of 154 consecutive adult liver recipients transplanted from 2006 to 2009, 20 (13%) were CMV D+/R- and received antiviral prophylaxis up to 3 months after transplantation. After excluding the recipients with incomplete prophylaxis or monitoring, 13 (D+/R-) patients with follow-up of &gt;4 years after the 3-month period of valganciclovir prophylaxis were included in the study.The patients were monitored for CMV by real-time quantitative plasma PCR. Results: No break-through CMV infections were recorded during the prophylaxis period. After cessation of valganciclovir prophylaxis 12/13 (90%) patients demonstrated CMV-DNAemia following a post transplantation mean interval of 165 days (range 95-320). Ten patients with high viral loads (peak viral load mean 81,510, range 1900-648950cps/ml) were successfully treated, 6 with valganciclovir, and 4 with ganciclovir. Two patients with low level CMV-DNAemia (&lt;1000cps/ml) were asymptomatic and not treated. No intragraft infection was seen, but one patient developed gastrointestinal CMV infection verified from ileum biopsy. During long-term follow-up, 3 patients demonstrated low-level viral replication, but no symptomatic recurrences occurred. One patient died of bacterial sepsis, but no patient or graft was lost due to CMV. Conclusions: Primary CMV infections after cessation of prophylaxis were common, but were successfully treated with valganciclovir or ganciclovir. abstract_id: PUBMED:25208322 The role of secondary cytomegalovirus prophylaxis for kidney and liver transplant recipients. Background: The role of secondary cytomegalovirus (CMV) prophylaxis, defined as the continuation of valganciclovir to prevent relapse after the successful treatment of CMV disease, is not well understood. Methods: Cases of CMV disease in patients who underwent kidney or liver transplantation from January 2001 to January 2010 were reviewed to determine if the use of secondary prophylaxis was associated with fewer relapses or other favorable outcomes. Secondary prophylaxis was used at the discretion of each treating clinician, without an institutional protocol. Results: Twenty-two cases of CMV disease in kidney transplant recipients and 20 cases in liver transplant recipients were included. Relapsed CMV disease was significantly more common among kidney transplant recipients (5/22 vs. 0/20, P = 0.049). Of 22 kidney transplant recipients, 16 received secondary prophylaxis. After a mean of 3.7 years, relapsed CMV disease occurred in three of 16 patients who received secondary prophylaxis and in two of six who did not. Among liver transplant recipients, only two of 20 patients received secondary prophylaxis. After a mean of 3.2 years, no relapsed CMV disease occurred. The use of secondary prophylaxis was not significantly associated with fewer episodes of CMV relapse, graft loss, or death. Time to clearance of CMV viremia during treatment was significantly longer in those who relapsed (mean, 30 days vs. 20 days; P = 0.037). Conclusion: These findings suggest that secondary CMV prophylaxis may not provide additional benefit after the successful treatment of CMV disease, particularly among liver transplant recipients. abstract_id: PUBMED:34646907 Safety of Valganciclovir 450 mg 3 Times Weekly for Cytomegalovirus Prophylaxis in Solid Organ Transplant Recipients Requiring Hemodialysis. Background: Valganciclovir is the most commonly used antiviral for cytomegalovirus (CMV) prophylaxis in solid organ transplant recipients. However, there are limited clinical outcomes-supported data available to guide valganciclovir dosing in patients on hemodialysis (HD). This study aimed to assess the safety of our institution's current dosing strategy of valganciclovir 450 mg 3 times weekly post-HD. Methods: This was a single-center retrospective review of all adult nonkidney transplant recipients between May 2016 and June 2018. Patients with end-stage renal disease requiring HD for &gt;28 days posttransplant receiving valganciclovir 450 mg 3 times weekly post-HD were matched with non-HD patients receiving valganciclovir prophylaxis dosed per renal function. The primary endpoints were incidence of leukopenia, neutropenia, and thrombocytopenia while on valganciclovir prophylaxis. Results: A total of 465 nonkidney transplants were performed during the study period, with 37 patients included in the HD group who were matched to 111 control patients in the non-HD group. Liver transplant recipients comprised 84% and 72% of each group, with none being CMV D+/R-. The rates of leukopenia (51.4% vs 51.4%, P = 1.00), severe neutropenia (absolute neutrophil count &lt;500 cells/µL, 15.8% vs 14.0%, P = .85), and thrombocytopenia (24.3% vs 20.7%, P = .64) were similar in both HD and non-HD groups. There were no cases of CMV infection while on valganciclovir prophylaxis in either group. Conclusions: Valganciclovir 450 mg 3 times weekly was found to have similar rates of leukopenia, neutropenia, thrombocytopenia, and CMV infection in comparison to valganciclovir dosed per renal function in non-HD transplant recipients. abstract_id: PUBMED:28921781 Valganciclovir (VGCV) followed by cytomegalovirus (CMV) hyperimmune globulin compared to VGCV for 200 days in abdominal organ transplant recipients at high risk for CMV infection: A prospective, randomized pilot study. Background: With the advent of effective antivirals against cytomegalovirus (CMV), use of CMV hyperimmune globulin (HIG) has decreased. Although antiviral prophylaxis in patients at high risk for CMV is effective, many patients still have late infection, never developing antibodies sufficient to achieve immunity. Utilizing a combination of antiviral and CMV HIG may allow patients to achieve immunity and decrease late CMV infections. Methods: This was a prospective randomized, open-label, pilot study comparing valganciclovir (VGCV) prophylaxis for 200 days vs VGCV for 100 days followed by CMV HIG in abdominal transplant recipients at high risk for CMV. The primary outcome was a comparison of late CMV disease. Results: Forty patients were randomized to VGCV for 200 days (n = 20) or VGCV for 100 days followed by 3 doses of monthly CMV HIG (n = 20). Numerically, more overall CMV infections occurred in the CMV HIG group (45 vs 20%, P = .09). No differences in overall CMV infections or late CMV disease were seen between groups (20% vs 15%, P = 1.00 and 0 vs 0, P = 1.00). All CMV disease occurred within 200 days, with 63% occurring while patients were on VGCV. No differences were found in toxicities, graft function, or rejection between groups. Patients with CMV infection at any time had a higher body weight than those who did not have an infection (82 vs 95 kg, P = .049). Conclusion: Use of CMV HIG sequentially with prophylaxis may be an effective and affordable prophylactic regimen in abdominal transplant recipients at high risk for CMV, and warrants larger prospective study. Increased monitoring for patients with obesity may be warranted. abstract_id: PUBMED:31165537 IL28B rs12979860 T allele protects against CMV disease in liver transplant recipients in the post-prophylaxis and late period. Background: Cytomegalovirus (CMV) disease represents a serious complication in liver transplant (OLT) recipients. CMV prophylaxis reduces incidence of CMV disease in the early post-transplant period (on-prophylaxis disease, OPD) but may postpone its manifestation after the completion of prophylaxis. Post-prophylaxis disease (PPD) incidence after prophylaxis cessation may be modified by genetic factors. Methods: We analyzed impact of IL28B rs1297986 variants on CMV disease incidence in 743 adult OLT recipients receiving universal prophylaxis. Results: One hundred and forty-four (19.4%) patients had at least one CMV disease episode. One hundred and two of them (70.8%) had at least one OPD and 36 (25%) patients had PPD, six (4.2%) patients had both. The rate of IL28B T allele carriers was lower in PPD group (38.9%) in comparison with OPD group (66.7%, P = 0.005) and group without CMV disease (61.4%, P = 0.009). The impact of IL28B genotype on the risk of CMV OPD was significant neither in the allelic (TT + CT vs CC, P = 0.32) nor in the recessive model (TT vs CT + CC, P = 0.79). Contrarily, in the PPD group, T allele (TT + CT vs CC) had a protective effect, OR 0.4 (95% CI 0.2-0.8, P = 0.008). Further risk factors of PPD were age &lt;55 years and valganciclovir prophylaxis, whereas the risk factors of OPD were age &lt;55 years, cyclosporine A therapy and pre-transplant CMV serostatus (donor +/recipient -). Conclusions: IL28B rs12979860 T allele carriers had a lower risk of CMV PPD. Answer: The effectiveness of valganciclovir hydrochloride (Valcyte) as prophylaxis against cytomegalovirus (CMV) infection in liver transplant recipients is supported by some studies but questioned by others. One study found that valganciclovir failed to provide adequate prophylaxis following liver transplantation, with an overall rate of CMV in seropositive donors and/or recipients at 17%, and in the high-risk group at 26% (PUBMED:16213344). However, another study concluded that valganciclovir is an effective antiviral for the prevention of CMV disease in liver transplant recipients, supporting its use in high-risk orthotopic liver transplant (OLT) patients (PUBMED:21083790). Further research suggested that valganciclovir prophylaxis only delayed the onset of CMV disease in many high-risk liver transplant recipients (CMV-seronegative recipients of liver allografts from CMV-seropositive donors), and that late-onset CMV disease was significantly associated with increased mortality after liver transplantation (PUBMED:25018848). A study on cost-effectiveness found that preemptive therapy (PET) was associated with lower rates of CMV disease and lower overall costs compared to prophylaxis in high-risk donor CMV-seropositive/recipient-seronegative (D+/R-) liver transplant recipients (PUBMED:32712663). Another study reported that primary CMV infections after cessation of prophylaxis were common but were successfully treated with valganciclovir or ganciclovir (PUBMED:26318605). In summary, while valganciclovir has been used as prophylaxis against CMV infection in liver transplant recipients, its effectiveness can vary, and it may not provide adequate protection in all cases, particularly in high-risk groups. Alternative strategies, such as preemptive therapy or a combination of antiviral and CMV hyperimmune globulin, may be considered, and further studies are needed to determine the most effective approach for preventing CMV infection in this population.
Instruction: Does the population living in Roma settlements differ in physical activity, smoking and alcohol consumption from the majority population in Slovakia? Abstracts: abstract_id: PUBMED:24847610 Does the population living in Roma settlements differ in physical activity, smoking and alcohol consumption from the majority population in Slovakia? Background: Several studies have revealed a high prevalence of risk factors associated with unhealthy lifestyle among individuals with lower socioeconomic status. In Slovakia, one of the most socially and health-disadvantaged groups is the Roma minority. The aim of this study is to explore differences in physical activity, smoking and alcohol consumption between the population living in Roma settlements and the majority population in Slovakia. Methods: Data from the cross-sectional epidemiological HepaMeta study conducted in Slovakia in 2011 were used. The sample consisted of 452 Roma (mean age = 34.7; 35.2% men) and 403 non-Roma (mean age = 33.5; 45.9% men) respondents. The differences in health-related behaviour between the population living in Roma settlements and the majority population were analysed using logistic models separately for males and females. Results: These data show a clear difference between the population living in Roma settlements and the majority population with regard to leisure-time physical activity (only in women) and smoking, although not alcohol consumption. The prevalence of leisure-time physical activities such as walking or some other type of sport was significantly lower among Roma women than among non-Roma women. Men and women living in Roma settlements are more likely to smoke on a daily basis and they are heavier smokers in comparison with the majority population. HepaMeta study did not find differences in alcohol consumption between the Roma and non-Roma men. However, Roma women reported less frequent recent drinking and binge-drinking of 6 or more doses of alcohol on a single occasion. Conclusion: The higher prevalence of unhealthy lifestyle activities among Roma seem to contribute to these inequalities in cardiovascular diseases morbidity and mortality in comparison with the majority population. abstract_id: PUBMED:29240704 Prevalence of Chlamydia trachomatis Infection and Its Association with Sexual Behaviour and Alcohol Use in the Population Living in Separated and Segregated Roma Settlements in Eastern Slovakia. The aim of the study was to explore sexual behaviour and the occurrence of Chlamydia trachomatis (CT) infection in the population living in Roma settlements compared to the majority population in Slovakia and to assess the association between alcohol use and sexual behaviour within both populations. A cross-sectional population-based Hepa-Meta study was conducted in Slovakia in 2011. The final sample comprised 452 Roma and 403 non-Roma respondents. The occurrence of CT was detected by direct proof of the pathogen by PCR. The association between alcohol use and the prevalence of risky sexual behaviour were assessed using a logistic regression. First intercourse at age 15 or younger was reported by 27.9% of Roma (vs. 4.5% of non-Roma); 93.4% of Roma (vs. 77.9% of non-Roma) used condom inconsistently, 22.8% of Roma (vs. 43.9% of non-Roma) used a condom for protection from unwanted pregnancies and only 8.8% of Roma (vs. 21.8% of non-Roma) due to protection against infectious diseases. However, Roma reported having had five or more sexual partners less often compared to the majority (11.5% of Roma vs. 20.6% of non-Roma). Binge drinking at least once a month was associated with a higher number of sexual partners in both groups, but not with condom non-use. The prevalence of CT infection in the Roma population was higher (3.8%) compared to non-Roma (2.7%); however, the difference was not statistically significant. Our study found no differences in the prevalence of CT infection between Roma and non-Roma despite differences in sexual behaviour. Roma begin their sexual life earlier and have unprotected sex more often, but on the other hand, they seem to be much more restrained in terms of the number of sexual partners compared to the majority population. abstract_id: PUBMED:30927393 Socioeconomic status, health related behaviour, and self-rated health of children living in Roma settlements in Hungary. Objective: The poor health of Roma is well documented, but there is only limited data regarding the health of Roma children. The aim of this study was to describe the socioeconomic status, health related behaviour, and health of children living in segregated Roma settlements, and to compare the data with that of non-Roma children. Methods: In March-April of 2011, a cross-sectional questionnaire-based survey among 11-year-old (211 boys and 252 girls) and 13-year-old (205 boys and 247 girls) children living in Roma settlements was performed (response rate: 91.5%). These data were compared with data from the Health Behaviour in School-Aged Children (HBSC) survey carried out in 2009/2010. Results: The parents of Roma children were substantially less educated and less likely to be actively employed, and Roma children reported lower material welfare than non-Roma ones. The prevalence of consuming sweets and soft drinks at least 5 times per week was 1.5-2 times higher among Roma children. The prevalence of regular intense physical activity was higher at the age of 13 years among Roma boys, while physical inactivity was substantially higher in both age groups among Roma girls. Almost one quarter of Roma children and approximately 14% of non-Roma children had tried smoking at the age of 11. More Roma boys tried alcohol at the age of 11 than non-Roma ones. One in ten Roma children was obese in both age groups. The self-rated health status of Roma children was worse than that of non-Roma children. Conclusions: Children living in Roma settlements reported poorer socioeconomic conditions, higher consumption of sweets and soft drinks, earlier smoking and alcohol initiation, and worse self-rated health, but with some exceptions do not differ in fruit or vegetable consumption and BMI from general child population. To promote health of children living in Roma settlements, a multi-sector approach, special health education, plus social and health promotion programmes are needed. abstract_id: PUBMED:34444153 Living in Roma Settlements in Greece: Self-Perceived Health Status, Chronic Diseases and Associated Social Determinants of Health. We aimed to assess the self-perceived health status and the presence of chronic diseases of adult Roma living in settlements in Greece, and to explore associated social determinants of health. Data were derived from the Hprolipsis Health Survey. Multivariable regression models were applied. In total, 534 adults, 287 women, and 247 men were recruited from twelve Roma settlements in four prefectures. Although 62% of the participants perceived their health status as good/very good, about half of them had been diagnosed with at least one chronic disease. Several structural and intermediary social determinants of health were found to be significantly associated with the health outcomes; prefecture, settlement type, sex, age group, living with a partner, presence of depression symptoms, food insecurity, and alcohol consumption were associated with self-perceived health status; settlement type, sex, age group, presence of anxiety symptoms, food insecurity and number of persons living in the house with the presence of a chronic disease. This is one of the few studies assessing the self-perceived health status and presence of chronic diseases in Roma settlements in Greece and investigating the associated social determinants of health in the world. Community-based participatory action research and health literacy programs are needed to mitigate health inequalities in Roma settlements. abstract_id: PUBMED:26810166 Some aspects of health status of the Gypsy population in Slovakia. Population of the Gypsy people living currently in Slovakia reaches approximately 400,000 people which represents 7.5% of the total Slovak population. The age structure of the Gypsy population is of a considerably progressive type with a high number of children and very low number of old people. This is due to the high natality and high mortality of Gypsies, whose health status is worse than the health status of the majority population. Gypsy people represent an important "old-new" risk group in case of tuberculosis and a more frequent occurrence of other transmitted diseases is observed in them while a higher incidence of genetically conditioned diseases has been confirmed, as well. As for children, a higher frequency of infectious diseases, injuries, intoxications and burns can be found as a result of their environmental conditions. A high level of infant mortality of the Gypsy children is a very negative phenomenon. We can state that the lifestyle of the Gypsy population is unhealthy, characterized by bad dietary habits, drinking alcohol, frequent smoking already in a very young age, higher weight and lower physical activity. All findings confirm that the Gypsy population is threatened by cardiovascular diseases at younger age already. Due to a more difficult approach to the health care, part of the Gypsies consult the physician only when in a serious health state. Health status of the Gypsy seniors has been reported only at minimum due to their low representation in population. Typical feature of this ethnic is, that they take care of their seniors. To prove this, during the author´s twenty seven-year practice he met just three old Gypsy women in social institutions. It is very difficult to address problems related to the Gypsy population; this task requires engagement of all spheres of our society, in particular of the Gypsies themselves, including their organizations (Ref. 50). abstract_id: PUBMED:19910647 Predictors of health-endangering behaviour among Roma and non-Roma adolescents in Slovakia by gender. Background: Roma people are commonly described as having an unhealthy lifestyle-for example, an unhealthy diet, intensive smoking, frequent alcohol consumption and a lack of physical activity. However, data about such health-endangering behaviours among Roma adolescents are scarce and of poor quality. The aim of our study is to assess the occurrence of health-endangering behaviours among Slovak Roma adolescents in comparison to non-Roma adolescents, and to assess the impact of parental education and social desirability on the differences found. Methods: A cross-sectional study among Roma from separated and segregated settlements in the eastern part of Slovakia (n=330; mean age 14.50 years; interview) and non-Roma adolescents (n=722; mean age 14.86 years; questionnaire) was conducted. The effect of ethnicity and parental education on smoking, drunkenness, drug use and physical activity was analysed separately for boys and girls using logistic regression and adjusted for social desirability. Results: Among girls, Roma adolescents had lower rates of smoking, drunkenness and drug use than non-Roma (ORs from 0.14 to 0.60 compared to non-Roma), but had higher rates of physical inactivity. Among boys, drug use was less frequent among Roma adolescents (OR 0.12, 95% CI 0.03 to 0.46); differences for the other health-endangering behaviours were small and statistically insignificant. The effects of parental education and social desirability were small. Conclusions: In contrast to the scarce evidence, Roma had lower rates of substance abuse, especially among girls. Only physical inactivity rates were higher among Roma girls. A challenge in health promotion among Roma is to maintain their relatively low substance use and to promote physical activity. abstract_id: PUBMED:22502722 Does the influence of peers and parents on adolescents' drunkenness differ between Roma and non-Roma adolescents in Slovakia? Background: Roma adolescents have been shown to use less alcohol than non-Roma adolescents. This could be due to the protective influences of peers and parents. Objective: The purpose of this study was to explore differences in the levels of peer and parental influence and their effects on drunkenness between Roma and non-Roma adolescents. Design: Data were obtained in Eastern Slovakia from 330 Roma (mean age=14.50; 48.5% boys) and 722 non-Roma (mean age=14.86; 53.2% boys) primary school pupils. We analysed data on adolescent drunkenness (being drunk at least once in the past four weeks), parental monitoring (parents knowing with whom their children are when they go out) and peer influence (best friend drinking alcohol at least once a week) using logistic regression. Results: Roma adolescents self-reported more parental monitoring and less peer influence when compared with their non-Roma counterparts (p&lt;0.001). Less parental monitoring contributed to the probability of drunkenness only among girls (OR/CI: 4.17/2.00-8.69). This effect of parental monitoring was not modified by ethnicity. Peer influence affected drunkenness in both boys (OR/CI: 3.34/1.91-5.85) and girls (4.84/2.55-9.19), but there was no significant interaction of ethnicity with peer influence. Conclusion: While both boys and girls seem to be sensitive to peer influence, only girls appear to be sensitive to parental monitoring in regard to drunkenness. Stronger parental monitoring and weaker peer influence partially explain the lower prevalence of drunkenness among Roma adolescents, whereas the effects of these factors per level do not vary between Roma and non-Roma adolescents. abstract_id: PUBMED:17660865 Model program for screening oral cancers in the Roma population Oral cancer has been identified as a significant public health threat. It is reported that about 3,800 new cases of oral cancer are diagnosed in Hungary each year with approximately 1,700 associated deaths. Oral cancer is the 6th most common cancer in men. Most oral cancers are preventable; 75% of oral cancers are related to tobacco use, alcohol use, or use of both substances together. While there is insufficient evidence to support or refute the use of visual examination as a method of screening for oral cancer in the general population, screening in high-risk populations is highly recommended. It was presumed that high-risk behavior including tobacco and alcohol use is one of the characteristics of Roma people. The main aim of the study was to elaborate a screening model program for the Roma population to determine risk factors of oral cancer and establish early diagnosis hence to reduce morbidity and mortality. In the program we planned to survey the risk factors in the target population, establish the diagnosis of oral cancer and/or pre-cancer and direct the patients to health care facilities. First we determined the target population in four Hungarian towns with the help of Roma social workers and local public health officers. We assembled a questionnaire on risk factors. Training for Roma social workers and screening personnel was also accomplished. Screening for oral precancerous lesions and cancer and survey the risk factors in the target population were performed at the same time. Patients screened to be positive were referred to specialists. Altogether 1,146 persons, 656 male and 490 female (age 20-77 years, mean 40 years), participated in the screening; 84% of them reported on some kind of complaints. We have got valid data on risk factors in connection with oral cancer. More than fifty percent of participants did not clean their teeth regularly, 75% were smokers, while 45% drunk alcohol regularly. 1,6% of screened participants had oral lesions that did not require referral to a specialist, while 2.3% of the screened subjects had referable oral mucosal lesions including leukoplakia. The overwhelming majority (93%) of participants screened to be positive did not see dentist regularly. As a conclusion, we elaborated a screening model program, which is applicable for disadvantaged (e.g. Roma) population to determine risk factors of oral cancer and establish early diagnosis hence to reduce morbidity and mortality. We surveyed the risk factors in the target population, established the diagnosis of oral cancer and/or pre-cancer lesions and directed the patients to care facilities. We also assisted them to get appropriate long-term care and follow-up. The importance of screening activities targeted on high-risk population was underlined. abstract_id: PUBMED:30660139 Distribution of ADH1B genotypes predisposed to enhanced alcohol consumption in the Czech Roma/Gypsy population. Objective: The aim of the study was to analyse the frequencies of rs1229984 genotypes within the alcohol dehydrogenase (ADH1B) gene in a Gypsies/Roma population and compare them with other populations and with ethanol consumption. Methods: We analysed the ADH1B (rs1229984; Arg47→His; c.143G&gt;A) genotype using the Polymerase Chain Reaction-Restriction Fragment Length Polymorphism (PCR-RFLP) in two ethnically different groups - Gypsies/Roma (N = 301) and Czechs (N = 300) where one day alcohol consumption was recorded. Results: ADH1B genotype/allelic frequencies did not significantly differ between the populations (p = 0.32). The frequency of minor A allele carriers was slightly higher in Gypsies/Roma (14.7%) than in Czechs (11.9%). The prevalence of subjects reporting alcohol intake on the previous day was non-significantly lower in Gypsies/Roma (10.5% vs. 16.4%), as was the amount of alcohol consumed the day before the examination in ethanol consumers (36.1 ± 18.3 g vs. 43.0 ± 27.2 g). Conclusions: The frequency of rs1229984 genotypes in the ADH1B gene within the Gypsies/Roma population corresponds with frequencies obtained in North India/Central Asia, the putative country of this ethnic origin. Our results suggest that the minority Gypsies/Roma population consume slightly less alcohol than the Czech majority population. abstract_id: PUBMED:31990239 Substance use in Roma population in contact with social workers in the Czech Republic: A cross-sectional questionnaire survey. An increasing number of studies have focused on the topic of Roma communities and social exclusion in the Czech Republic, however, substance use has been surveyed only marginally. This paper brings new data on the patterns of substance use among Roma population in contact with social workers (546 respondents). Substance use, including daily smoking and regular excessive alcohol drinking, has been 2-6 times higher among Roma compared to the general population. Current illicit substance use was reported by 1/3 of the respondents (46.7% of males, 17.8% of females) with cannabis (27.1%) and methamphetamine (11.9%) being the most reported substances. Answer: Yes, the population living in Roma settlements does differ in physical activity, smoking, and alcohol consumption from the majority population in Slovakia. According to the HepaMeta study, there is a clear difference between the population living in Roma settlements and the majority population with regard to leisure-time physical activity (only in women) and smoking, although not alcohol consumption. Specifically, the prevalence of leisure-time physical activities such as walking or some other type of sport was significantly lower among Roma women than among non-Roma women. Men and women living in Roma settlements are more likely to smoke on a daily basis and they are heavier smokers in comparison with the majority population. However, the study did not find differences in alcohol consumption between Roma and non-Roma men. Roma women reported less frequent recent drinking and binge-drinking of 6 or more doses of alcohol on a single occasion (PUBMED:24847610). These findings suggest that the higher prevalence of unhealthy lifestyle activities among Roma may contribute to inequalities in cardiovascular diseases morbidity and mortality in comparison with the majority population.
Instruction: Comparison of lifestyle and health among Danish nurses and the Danish female population: is it possible to generalize findings from nurses to the general female population? Abstracts: abstract_id: PUBMED:16324060 Comparison of lifestyle and health among Danish nurses and the Danish female population: is it possible to generalize findings from nurses to the general female population? Aims: To generalize findings from a nursing population to the general female population, the aim of this study was to compare lifestyle and health among Danish female nurses with that among Danish women. Furthermore, the aim was to compare lifestyle and health among Danish female nurses with women belonging to the same socio-economic group as nurses. Methods: Danish females above the age of 44 in 1999 (n = 22,715) were compared with a group of women from the Danish Health and Morbidity Survey conducted in 2000 (n = 4,445). Age standardized risk ratios (SRR) with 95% confidence intervals were calculated to compare the two groups. Results: Nurses smoke less (SRR 0.74, CI: 0.70-0.78) and are more physically active (SRR 2.53, CI: 2.31-2.77) when compared with the general female population. A higher percentage of nurses consume five drinks or more on a weekday compared with the general female population (SRR 2.68, CI: 2.25-3.19). When comparing nurses with women belonging to the same socio-economic group, these differences are also found to be statistically significant, but no differences are found according to overweight/obesity, self-reported health and limitations in daily activities when nurses are compared with women belonging to the same socio-economic group. Conclusions: In general, Danish nurses have a healthier lifestyle than other Danish women. In future studies, differences and similarities in lifestyle and health between nurses and other women should be considered when generalizing findings from Danish nurses to the Danish female population. abstract_id: PUBMED:16081152 Influence of lifestyle, health, and work environment on smoking cessation among Danish nurses followed over 6 years. Background: The purpose of this study was to examine the influence of lifestyle, health, and work environment on smoking cessation among Danish nurses age 45-66 years over a 6-year period from 1993-1999. Methods: Data derive from the Danish Nurse Cohort Study; a prospective cohort established in 1993 when all Danish female nurses 45+ years old and members of the Danish Nurses Association were mailed a questionnaire. The cohort was followed up 6 years later in 1999. In total, 12,980 responded to questions concerning smoking status in both the 1993 and 1999 surveys. This study deals with the 4713 women (36%) who reported smoking at baseline. Smoking cessation was based on self-report. Results: At follow-up in 1999, 24% reported that they no longer smoked. Low prior tobacco consumption at baseline, high fresh fruit consumption, high blood pressure, working day shifts, having low physical job strain, perceiving influence on one's own work, and partner's socio-economic status (as measured by most recent occupation) were associated with successful smoking cessation. Conclusion: The findings from this study highlight the importance of various factors, including lifestyle, health status, and aspects of one's work environment, on successful efforts at smoking cessation. abstract_id: PUBMED:20579156 Cancer incidence among male Danish nurses, 1980-2003. Background: The cancer risk of female nurses has been examined in several studies, but none has addressed the risk of male nurses, although they may be exposed to the same carcinogens as female nurses. In this register-based cohort study, we explored cancer incidence among male Danish nurses. Methods: We identified 3369 male nurses from the files of the Danish Nurses Association and followed them up from 1980 to 2003 in the Danish Cancer Registry. Standardized incidence ratios (SIRs) and 95% confidence intervals were calculated on the basis of standardized national rates. We compared the nurses with 3369 other male employees matched to the nurses by year of birth on social variables (vital and marital status). Findings: We observed 90 cancers in the cohort, with significantly increased SIRs for sarcomas and decreased SIRs for cancers of the respiratory system. When the cohort was stratified by educational generation and birth cohort, we observed significantly elevated relative risks for cancers of the brain and nervous system among the youngest nurses and for sarcomas among nurses in all educational generations and those born between 1945 and 1954. Conclusion: The overall risk for cancer among male Danish registered nurses is similar to that of the general male Danish population. The high SIRs observed for cancers of the brain and nervous system merit further attention. The high relative risks for sarcomas and connective tissue tumours reflect a large proportion of cases of Kaposi sarcoma, which is probably not occupationally related. abstract_id: PUBMED:19806273 Cancer incidence among large cohort of female Danish registered nurses. Background: Nurses are potentially exposed to carcinogens in their working environment. We investigated the risks for 21 types of cancers in Danish nurses. Methods: We identified 92 140 female nurses from the computerized files of the Danish Nurses' Association. By record linkage, we reconstructed information on employment since 1964 using data from a national pension fund; information on vital status and reproduction was obtained from the Central Population Register. Each woman was followed-up from 1980-2003 in the Danish Cancer Registry. We calculated standardized incidence ratios (SIR) and 95% confidence intervals (95% CI). Using Poisson regression models, we made internal comparisons in subgroups of nurses, adjusting for potential confounders. Results: We documented 8410 cancers during follow-up and found significantly increased SIR for breast cancer (SIR 1.1, 95% CI 1.1-1.2), cancers of the brain and nervous system (SIR 1.2, 95% CI 1.1-1.3), melanoma (SIR 1.2, 95% CI 1.1-1.3), and other skin cancers (SIR 1.2, 95% CI 1.1-1.2). Significantly decreased risks were observed for alcohol- and tobacco-related cancers. Nurses who were accredited by the Association after 1981 had significantly increased risks for thyroid cancer (SIR 1.9, 95% CI 1.3-2.5) and cancers of the brain and nervous system (SIR 1.5, 95% CI 1.2-1.9). Former nurses had significantly increased SIR for all cancers combined and breast cancer the first ten years after leaving the profession. In a Poisson regression analysis of breast cancer and duration of employment in hospitals, adjusted for reproductive factors, nurses had an increase risk the first 25 years of employment, but not for longer periods. Conclusion: The increased risk of breast cancer and the decreased risk of alcohol- and tobacco-related cancers support the findings of most other studies on nurses. The elevated risks for cancers of the breast, brain, nervous system, and thyroid warrant further study. abstract_id: PUBMED:2238222 Smoking habits among Danish physicians, nurses and midwives in 1989 Health staff play a central role in distribution of knowledge about the injurious effects of tobacco by influencing the smoking habits of the population. The danish council on smoking and health has therefore chosen health staff as one of the first targets for its activities. One of these was conduct of a questionnaire investigation among a representative section of Danish doctors, nurses and midwives about smoking habits and attitudes to the tobacco problem. The random sample consisted of 2,997 persons, of whom 2,606 (87%) replied to the question about daily smoking habits. In all of the age groups, the frequency of smoking was considerably lower among health staff than in the population as a whole, primarily because many had stopped smoking. The frequency among men was 23% for doctors as compared with 50% in the normal population (age 20-69 years) and 15% for female doctors, 29-30% for nurses and midwives as compared with 46% in the normal population. The frequency of smoking among doctors has decreased considerably since 1980. Very few heavy smokers were found among health staff, particularly among doctors, and many male doctors smoked pipes (47% of the smokers). Even though health staff smoke less than the rest of the population, attention must still be focussed on this group on account of its function for establishing opinions in the health sector. abstract_id: PUBMED:11928833 Lifestyle factors and choice of hormone replacement therapy among Danish nurses. Background: Studies have found that women who use hormone replacement therapy (HRT) are healthier or report better health than non-users. This study examines the association between lifestyle factors, self-reported health and the use of HRT in Danish female nurses above the age of 50 years. Methods: Postal questionnaires were used for data collection. Two cohorts were examined, a younger including women between 50 and 59 years and an older including women of 60 years and above. Results: Of the 14,071 nurses included, 5,528 (39%) reported ever using HRT. In both cohorts, ever using HRT was positively associated (odds ratio &gt; 1) with intake of vitamin supplements, alcohol consumption at weekends, consulting a physician, use of medicine, and previous use of oral contraceptives. Only a few differences were found between former and current users of HRT. A highly significant association was found between ever using HRT and self-reported poor health. Conclusions: In this study use of HRT was not found to be associated with a particularly healthy lifestyle or better health. Therefore, these factors should be specifically considered in studies evaluating the effect of HRT. abstract_id: PUBMED:16971763 Lifestyle, harassment at work and self-assessed health of female flight attendants, nurses and teachers. Health-related lifestyle, harassment at work, and self-assessed health of female flight attendants in comparison to that of female nurses and female primary school teachers were surveyed. A higher proportion of flight attendants than nurses or teachers were smokers, 26% vs. 15% and 17% respectively; and consumed alcohol at least once a week, 40% vs. 21% and 16%. Repeated sexual harassment at work was more common among the flight attendants, 31% vs. 8% and 4%; whereas bullying, physical violence and threats were less prevalent among the flight attendants (12%) than among nurses (19%). Flight attendants were on average somewhat taller, but weighed on average less, 63.8 kg vs. 72.4 kg and 72.7 kg respectively. Repeated exposure to sexual harassment, bullying, violence and threats was related to less physical and psychological well-being in all the groups. Teachers scored on average significantly lower than did the flight attendants on general health and physical well-being, while nurses did not. abstract_id: PUBMED:17366084 Influence of health, lifestyle, working conditions, and sociodemography on early retirement among nurses: the Danish Nurse Cohort Study. Aims: The aim of the study was to analyze the relationship between health, lifestyle, work-related and sociodemographic factors, and older nurses' exit from the labor market to Post-Employment Wage (PEW). PEW is an early retirement scheme to make it possible for workers to retire at the age of 60. Methods: The study was based on 5,538 nurses in the age of 51-59 who in 1993 completed a questionnaire on health, lifestyle, working environment, and sociodemographic factors. The survey information was combined with longitudinal data from the Danish Integrated Database for Labor Market Research compiled by Statistics Denmark. The follow-up period was from 1993 to 2002. Results: Nurses who had poor self-rated health were more likely to join PEW compared with nurses who considered their health as good (HR 1.28, 95% CI 1.16-1.41). Low job influence, high workload, and physical job demands only marginally increased the probability of retiring. Nurses who have relatively low gross incomes had an increased probability of entering PEW compared with nurses with relatively high gross incomes (HR 1.60, CI 1.43-1.79). Having a spouse increased the probability of joining PEW, especially having a spouse who had retired or was unemployed. Conclusions: The retirement age among nurses is influenced by a number of sociodemographic, work-related, and health-related factors. Poor health, low income, living outside the Copenhagen area, being married, having a spouse who is outside the labor force, and working in the daytime are all predictors of early retirement among nurses. Poor working environment only marginally increased the probability of retiring early. abstract_id: PUBMED:32652788 Shift work and incidence of dementia: A Danish Nurse Cohort study. Introduction: A few studies suggest that working night and rotating shifts increase the risk of dementia. We examined the association between shift work and the incidence of dementia in a cohort of female Danish nurses. Methods: We linked Danish Nurse Cohort participants, who reported work schedules (day, evening, night, rotating shifts) in 1993 and/or 1999 and their duration in 2009, to Danish registers to obtain information on dementia hospitalizations and prescription medication until November 2018. Results: Among 6048 nurses who reported work schedules in 1993 and 1999, nurses working night shifts ≥6 years had higher dementia incidence (hazard ratio: 2.43, 95% confidence interval: 1.39 to 4.23) than those working day shifts ≥6 years. Among 8059 nurses who reported shift work duration, nurses working night shifts ≥6 years had higher dementia incidence than those working night shifts &lt;1 year (1.47, 1.06 to 2.06). Discussion: Persistent night shift work may increase the risk of dementia. abstract_id: PUBMED:2238221 Knowledge and attitudes to the tobacco problem among Danish physicians, nurses and midwives in 1989 In an investigation including 2,233 Danish doctors, nurses and midwives between 72 and 96% replied that they found definite proof of connections between smoking and the diseases: cancer of the lung, cardiac infarction and chronic bronchitis. The highest percentages were found among doctors and, in all three instances, among nonsmokers. Approximately 2/3 of the persons questioned felt inconvenienced by other peoples' tobacco smoke. Even among health staff, there were, surprisingly enough, often problems between smokers and non-smokers. The majority of persons questioned considered that Danish hospitals and places of work for health staff should be free from smoke, so that none of the staff smoked at work. Half of the doctors and 3/4 of the nurses and midwives replied that they received insufficient teaching about the deleterious effects of tobacco smoking on health. All of the participants were unanimous that future campaigns against the tobacco problem in the population should be concentrated on prevention of recruitment of new smokers and increase of information about the deleterious effects of tobacco on health. As health staff constitute a very important group in the future campaigns against tobacco, we have obtained the impression from the questionnaire investigation that a thorough debate is necessary about how to solve the problem of tobacco smoking in Danish hospitals and other places of employment of health staff. Answer: The study aimed to determine if findings from a nursing population could be generalized to the general female population in Denmark. The comparison was made between Danish female nurses and the Danish female population, as well as between nurses and women from the same socio-economic group as nurses. The results showed that Danish nurses smoke less and are more physically active than the general female population. They also consume more alcohol on weekdays. When compared with women from the same socio-economic group, nurses still exhibited these lifestyle differences, but no significant differences were found in terms of overweight/obesity, self-reported health, and limitations in daily activities. Therefore, while Danish nurses generally have a healthier lifestyle than other Danish women, the study suggests that differences and similarities in lifestyle and health should be considered when generalizing findings from Danish nurses to the Danish female population (PUBMED:16324060).
Instruction: DNA measurement on histologic slides. Does it work on human tissue? Abstracts: abstract_id: PUBMED:9267562 DNA measurement on histologic slides. Does it work on human tissue? Objective: To determine the performance of different mathematical methods of correcting histograms obtained by densitometric DNA measurement on thin tissue sections for cutting artifacts in different types of human tissue. Study Design: The study group consisted of four different euploid-polyploidizing, nonmalignant human tissues (liver, breast cyst with apocrine metaplasia, adenoma of the thyroid, condyloma). The section thickness was accurately measured. The algorithms of Bins, McCready, Bacus and Haroske were applied to the histograms obtained from DNA measurements. The results were compared to those of the corresponding single-cell preparations. Results: Correct ploidy equivalents were calculated using the algorithm of Bins in very thin sections (+/- 3 microns). This was also the case applying the formulas of McCready and Haroske to medium-thick sections (5.3-5.9 microns). The algorithm of Bacus did not lead to reliable results. Measurements on sections thicker than approximately 7 microns led to good results without mathematical correction. Conclusion: Reliable results can be obtained only when knowing the exact section thickness. In very thin sections the method of Bins leads to reliable results; in medium-thick sections the methods of McCready and Haraske result in correct ploidy values. Sections thicker than 7 microns do not need histogram correction. abstract_id: PUBMED:31799154 DNA extraction from archived hematoxylin and eosin-stained tissue slides for downstream molecular analysis. Background: Histopathologically stained archived tissue slides are stored in hospital archives for years to decades. They are the largest available source of biological materials and are a potentially useful resource that can be used for retrospective epidemiological studies. DNA recovered from the slides can be used for several downstream molecular processes including polymerase chain reaction, single nucleotide polymorphism analysis, and whole genome sequencing. The DNA from these slides can be utilized to compare gene signatures of normal and diseased tissues. However, extraction of high-quality DNA from archived stained hematoxylin and eosin (H&amp;E) slides remains challenging. Aim: To standardize a new protocol for extracting DNA from archived H&amp;E-stained tissue slides for further molecular assays. Methods: A total of 100 archived H&amp;E-stained cancer slides were subjected to a total of five methods of DNA extraction. Methods were varied in the deparaffinization step, tissue rehydration, duration of lysis, and presence or absence of proteinase K. The extracted DNA was quantified using a NanoDrop spectrophometer and the quality was analyzed by agarose gel electrophoresis. Then each sample was subjected to polymerase chain reaction (PCR) to amplify the internal control gene GAPDH, thereby confirming the DNA intactness, which could be further utilized for other downstream applications. Results: Of the five different methods tested, the third method wherein xylene was used for tissue deparaffinization followed by 72 h of digestion and without proteinase K inactivation yielded the highest amount of DNA with good purity. The yield was significantly higher when compared to other methods. In addition, 90% of the extracted DNA showed amplifiable GAPDH gene. Conclusion: Here we present a step-by-step, cost-effective, and reproducible protocol for the extraction of PCR-friendly DNA from archived H&amp;E-stained cancer tissue slides that can be used for further downstream molecular applications. abstract_id: PUBMED:36798940 Automatic detection of head and neck squamous cell carcinoma on pathologic slides using polarized hyperspectral imaging and deep learning. The study is to incorporate polarized hyperspectral imaging (PHSI) with deep learning for automatic detection of head and neck squamous cell carcinoma (SCC) on hematoxylin and eosin (H&amp;E) stained tissue slides. A polarized hyperspectral imaging microscope had been developed in our group. In this paper, we firstly collected the Stokes vector data cubes (S0, S1, S2, and S3) of histologic slides from 17 patients with SCC by the PHSI microscope, under the wavelength range from 467 nm to 750 nm. Secondly, we generated the synthetic RGB images from the original Stokes vector data cubes. Thirdly, we cropped the synthetic RGB images into image patches at the image size of 96×96 pixels, and then set up a ResNet50-based convolutional neural network (CNN) to classify the image patches of the four Stokes vector parameters (S0, S1, S2, and S3) by application of transfer learning. To test the performances of the model, each time we trained the model based on the image patches (S0, S1, S2, and S3) of 16 patients out of 17 patients, and used the trained model to calculate the testing accuracy based on the image patches of the rest 1 patient (S0, S1, S2, and S3). We repeated the process for 6 times and obtained 24 testing accuracies (S0, S1, S2, and S3) from 6 different patients out of the 17 patients. The preliminary results showed that the average testing accuracy (84.2%) on S3 outperformed the average testing accuracy (83.5%) on S0. Furthermore, 4 of 6 testing accuracies of S3 (96.0%, 87.3%, 82.8%, and 86.7%) outperformed the testing accuracies of S0 (93.3%, 85.2%, 80.2%, and 79.0%). The study demonstrated the potential of using polarized hyperspectral imaging and deep learning for automatic detection of head and neck SCC on pathologic slides. abstract_id: PUBMED:27563489 Comparing whole slide digital images versus traditional glass slides in the detection of common microscopic features seen in dermatitis. Background: The quality and limitations of digital slides are not fully known. We aimed to estimate intrapathologist discrepancy in detecting specific microscopic features on glass slides and digital slides created by scanning at ×20. Methods: Hematoxylin and eosin and periodic acid-Schiff glass slides were digitized using the Mirax Scan (Carl Zeiss Inc., Germany). Six pathologists assessed 50-71 digital slides. We recorded objective magnification, total time, and detection of the following: Mast cells; eosinophils; plasma cells; pigmented macrophages; melanin in the epidermis; fungal bodies; neutrophils; civatte bodies; parakeratosis; and sebocytes. This process was repeated using the corresponding glass slides after 3 weeks. The diagnosis was not required. Results: The mean time to assess digital slides was 176.77 s and 137.61 s for glass slides (P &lt; 0.001, 99% confidence interval [CI]). The mean objective magnification used to detect features using digital slides was 18.28 and 14.07 for glass slides (P &lt; 0.001, 99.99% CI). Parakeratosis, civatte bodies, pigmented macrophages, melanin in the epidermis, mast cells, eosinophils, plasma cells, and neutrophils, were identified at lower objectives on glass slides (P = 0.023-0.001, 95% CI). Average intraobserver concordance ranged from κ = 0.30 to κ = 0.78. Features with poor to fair average concordance were: Melanin in the epidermis (κ = 0.15-0.58); plasma cells (κ = 0.15-0.49); and neutrophils (κ = 0.12-0.48). Features with moderate average intrapathologist concordance were: parakeratosis (κ = 0.21-0.61); civatte bodies (κ = 0.21-0.71); pigment-laden macrophages (κ = 0.34-0.66); mast cells (κ = 0.29-0.78); and eosinophils (κ = 0.31-0.79). The average intrapathologist concordance was good for sebocytes (κ = 0.51-1.00) and fungal bodies (κ = 0.47-0.76). Conclusions: Telepathology using digital slides scanned at ×20 is sufficient for detection of histopathologic features routinely encountered in dermatitis cases, though less efficient than glass slides. abstract_id: PUBMED:25551020 Work-relatedness of lung cancer by smoking and histologic type in Korea. Objectives: This study investigated the distribution of causative agents related to occupational lung cancer, their relationships with work, and associations between work-relatedness and the histologic type of lung cancer. Methods: We used data from the occupational surveillance system in Korea in 2013. In addition, data from 1,404 participants diagnosed with lung cancer were collected through interviews. We included the patients' longest-held job in the analysis. Work-relatedness was categorized as "definite," "probable," "possible," "suspicious," "none," or "undetermined." Results: Among the subjects, 69.3% were men and 30.7% were women. Regarding smoking status, current smokers were the most prevalent (35.5%), followed by non-smokers (32.3%), ex-smokers (32.2%). Regarding the causative agents of lung cancer, asbestos (1.0%) and crystalline silica (0.9%) were the most common in definite work-related cases, while non-arsenical insecticide (2.8%) was the most common in probable cases followed by diesel engine exhaust (1.9%) and asbestos (1.0%). Regarding histologic type, adenocarcinoma was the most common (41.7%), followed by squamous cell carcinoma (21.2%). Among current smokers, squamous cell carcinoma was the most common among definite and probable cases (13.4%), while non-small cell lung cancer was the least common (7.1%). Among non-smokers, squamous cell carcinoma was the most common (21.4%), while the least common was adenocarcinoma (1.6%). Conclusions: Approximately, 9.5% of all lung cancer cases in Korea are occupational-related lung cancer. Well-known substances associated with lung cancer, such as crystalline silica, asbestos, and diesel engine exhaust, are of particular concern. However, the histologic types of lung cancer related to smoking were inconsistent with previous studies when work-relatedness was taken into account. Future studies are required to clarify the incidence of occupational lung cancer in agricultural workers exposed to non-arsenical insecticides and the associations between work-relatedness and the histologic type of lung cancer. abstract_id: PUBMED:34955584 Automatic detection of head and neck squamous cell carcinoma on pathologic slides using polarized hyperspectral imaging and machine learning. The aim of this study is to incorporate polarized hyperspectral imaging (PHSI) with machine learning for automatic detection of head and neck squamous cell carcinoma (SCC) on hematoxylin and eosin (H&amp;E) stained tissue slides. A polarized hyperspectral imaging microscope had been developed in our group. In this paper, we imaged 20 H&amp;E stained tissue slides from 10 patients with SCC of the larynx by the PHSI microscope. Several machine learning algorithms, including support vector machine (SVM), random forest, Gaussian naive Bayes, and logistic regression, were applied to the collected image data for the automatic detection of SCC on the H&amp;E stained tissue slides. The performance of these methods was compared among the collected PHSI data, the pseudo-RGB images generated from the PHSI data, and the PHSI data after applying the principal component analysis (PCA) transformation. The results suggest that SVM is a superior classifier for the classification task based on the PHSI data cubes compared to the other three classifiers. The incorporate of four Stokes vector parameters improved the classification accuracy. Finally, the PCA transformed image data did not improve the accuracy as it might lose some important information from the original PHSI data. The preliminary results show that polarized hyperspectral imaging can have many potential applications in digital pathology. abstract_id: PUBMED:26964809 Comparison of four DNA extraction methods for the detection of Mycobacterium leprae from Ziehl-Neelsen-stained microscopic slides. Objective/background: The diagnosis of leprosy has been a challenge due to the low sensibility of the conventional methods and the impossibility of culturing the causative organism. In this study, four methods for Mycobacterium leprae nucleic-acid extraction from Ziehl-Neelsen-stained slides (ZNS slides) were compared: Phenol/chloroform, Chelex 100 resin, and two commercial kits (Wizard Genomic DNA Purification Kit and QIAamp DNA Mini Kit). Methods: DNA was extracted from four groups of slides: a high-codification-slide group (bacteriological index [BI]⩾4), a low-codification-slide group (BI=1), a negative-slide group (BI=0), and a negative-control-slide group (BI=0). Quality DNA was evidenced by the amplification of specific repetitive element present in M. leprae genomic DNA (RLEP) using a nested polymerase chain reaction. Results: This is the first report comparing four different extraction methods for obtaining M. leprae DNA from ZNS slides in Cuban patients, and applied in molecular diagnosis. Good-quality DNA and positive amplification were detected in the high-codification-slide group with the four methods, while from the low-codification-slide group only the QIAGEN and phenol-chloroform methods obtained amplification of M. leprae. In the negative-slide group, only the QIAGEN method was able to obtain DNA with sufficient quality for positive amplification of the RLEP region. No amplification was observed in the negative-control-slide group by any method. Patients with ZNS negative slides can still transmit the infection, and molecular methods can help identify and treat them, interrupting the chain of transmission and preventing the onset of disabilities. Conclusion: The ZNS slides can be sent easily to reference laboratories for later molecular analysis that can be useful not only to improve the diagnosis, but also for the application of other molecular techniques. abstract_id: PUBMED:36798939 Thyroid Carcinoma Detection on Whole Histologic Slides Using Hyperspectral Imaging and Deep Learning. Hyperspectral imaging (HSI), a non-invasive imaging modality, has been successfully used in many different biological and medical applications. One such application is in the field of oncology, where hyperspectral imaging is being used on histologic samples. This study compares the performances of different image classifiers using different imaging modalities as training data. From a database of 33 fixed tissues from head and neck patients with follicular thyroid carcinoma, we produced three different datasets: an RGB image dataset that was acquired from a whole slide image scanner, a hyperspectral (HS) dataset that was acquired with a compact hyperspectral camera, and an HS-synthesized RGB image dataset. Three separate deep learning classifiers were trained using the three datasets. We show that the deep learning classifier trained on HSI data has an area under the receiver operator characteristic curve (AUC-ROC) of 0.966, higher than that of the classifiers trained on RGB and HSI-synthesized RGB data. This study demonstrates that hyperspectral images improve the performance of cancer classification on whole histologic slides. Hyperspectral imaging and deep learning provide an automatic tool for thyroid cancer detection on whole histologic slides. abstract_id: PUBMED:27671660 Mitochondrial DNA from the eradicated European Plasmodium vivax and P. falciparum from 70-year-old slides from the Ebro Delta in Spain. Phylogenetic analysis of Plasmodium parasites has indicated that their modern-day distribution is a result of a series of human-mediated dispersals involving transport between Africa, Europe, America, and Asia. A major outstanding question is the phylogenetic affinity of the malaria causing parasites Plasmodium vivax and falciparum in historic southern Europe-where it was endemic until the mid-20th century, after which it was eradicated across the region. Resolving the identity of these parasites will be critical for answering several hypotheses on the malaria dispersal. Recently, a set of slides with blood stains of malaria-affected people from the Ebro Delta (Spain), dated between 1942 and 1944, have been found in a local medical collection. We extracted DNA from three slides, two of them stained with Giemsa (on which Plasmodium parasites could still be seen under the microscope) and another one consisting of dried blood spots. We generated the data using Illumina sequencing after using several strategies aimed at increasing the Plasmodium DNA yield: depletion of the human genomic (g)DNA content through hybridization with human gDNA baits, and capture-enrichment using gDNA derived from P. falciparum Plasmodium mitochondrial genome sequences were subsequently reconstructed from the resulting data. Phylogenetic analysis of the eradicated European P. vivax mtDNA genome indicates that the European isolate is closely related to the most common present-day American haplotype and likely entered the American continent post-Columbian contact. Furthermore, the European P. falciparum mtDNA indicates a link with current Indian strains that is in agreement with historical accounts. abstract_id: PUBMED:8960028 Histologic processing of thick tissue specimens from cytology slides. A novel technique. Objective: To describe a novel method by which small tissue fragments found in routine cytology specimens can be processed for histologic slides without destroying or damaging the cytologic specimen. Study Design: To develop a method by which small tissue fragments (microbiopsies) are removed from cytologic slides and processed through paraffin or plastic, after which histologic slides can be prepared. Results: In all cases described, the histologic sections from the microbiopsies were of excellent quality. Microscopic examination gave additional important information. Furthermore, immunochemistry could be performed on paraffin-processed microbiopsies. Conclusion: This method can be of great value in cytologic specimens in which microbiopsies are present, not only in those cases in which cytologic information is insufficient for diagnosis, but also in cases with conclusive diagnostic information. Routinely applied, this method could make biopsies unnecessary in some cases. Answer: Yes, DNA measurement on histologic slides does work on human tissue. The study described in abstract PUBMED:9267562 investigated the performance of different mathematical methods for correcting histograms obtained by densitometric DNA measurement on thin tissue sections to account for cutting artifacts in various types of human tissue. The study found that correct ploidy equivalents could be calculated using specific algorithms, depending on the section thickness. For very thin sections (approximately 3 microns), the algorithm of Bins provided reliable results. For medium-thick sections (5.3-5.9 microns), the formulas of McCready and Haroske resulted in correct ploidy values. Sections thicker than approximately 7 microns did not require mathematical correction for histogram artifacts. The conclusion was that reliable results can be obtained from DNA measurements on histologic slides of human tissue, provided that the exact section thickness is known and appropriate correction algorithms are applied.
Instruction: Is a day hospital rehabilitation programme associated with reduction of handicap in stroke patients? Abstracts: abstract_id: PUBMED:15137557 Is a day hospital rehabilitation programme associated with reduction of handicap in stroke patients? Objective: (1) To assess whether a rehabilitation day hospital programme is associated with a reduced handicap level of stroke patients. (2) To estimate the relationship between the London Handicap Scale (LHS) and other outcome measures. (3) To examine the effect of demographic parameters (age, gender, family status, education) on LHS scores. Design: A prospective longitudinal survey. Setting: An urban geriatric rehabilitation day hospital. Subjects: Two hundred and seven elderly stroke patients admitted between December 1999 and February 2001. Main Outcome Measures: London Handicap Scale (LHS), Functional Independent Measure (FIM), Nottingham Extended ADL Index, timed get up and go test. Results: LHS scores at discharge changed significantly (p &lt; 0.008) for mobility, physical independence and occupation. The overall change in LHS score was 2.3 points (20%); effect size 0.43. A significant relationship was found between discharge score of LHS and admission score of FIM, Nottingham Index, timed get up and go and age. Multiple linear regressions did not identify a good predictor for the discharge score of LHS. Higher education was associated with higher LHS scores on admission (p = 0.016) but with less success in correcting handicap (p = 0.046). Conclusions: A day hospital programme is associated with reduced level of handicap in stroke patients. The LHS is a useful and simple scale for measuring change in these patients. LHS in stroke patients correlates with other outcome measures, yet they cannot be used interchangeably. A significant relationship between education and level of handicap exists. abstract_id: PUBMED:1534933 Is "handicap" affected by a hospital based rehabilitation programme? "Handicap", categorised according to the ICIDH, and "Disability", measured by the Smith Activities of Daily Living (ADL) Index, were assessed in over 300 patients (suffering from strokes, lower limb amputations, cardiac disorders, low back pain and other diseases) on admission to and after discharge from a hospital based rehabilitation programme. Statistically significant improvements were found in Physical Independence and Mobility Handicaps which correlated with improvements in ADL Self-care (R = 0.58, p less than 0.001) and ADL Mobility (R = 0.53, p less than 0.001) averages. Small but significant improvements were also found in Orientation, Occupation and Social Integration Handicaps but there was no change in Economic Self Sufficiency Handicap. Thus it is possible to use the Handicap categorisations to assess disabled people sequentially during a rehabilitation programme. The problems of data collection, however, are so formidable that its usefulness must be limited. abstract_id: PUBMED:14606741 Programme evaluation of a geriatric rehabilitation day hospital. Objective: To evaluate the rehabilitation programme in a geriatric day hospital. Design: An observational study. Setting: An urban geriatric rehabilitation day hospital. Subjects: Three hundred and fifty-three older patients admitted to a rehabilitation day hospital during 2000: 163 post stroke, 113 with deconditioning and 77 post orthopaedic surgery (hip fracture and joint replacement). Main Outcome Measures: Functional Independence Measure (FIM), Nottingham Extended ADL Index, timed 'get up and go' test. Results: The mean discharge scores of all FIM items were significantly higher than the mean admission scores. Mean score change, however, for each item was less than one point. Most of the patients (70%) improved by less than 5 points, regardless of main admission diagnoses. Patients with a lower admission FIM score improved more than those with a higher admission score. The majority of the enrolled patients (92%) showed an improvement in their Nottingham Extended ADL Index score. The mean (SD) score on admission was 21 (11.9) and at discharge 31 (14.6). In all but three items (self-feeding, using a telephone and driving a car), mean scores at discharge were significantly higher than admission scores (p &lt; 0.001). A significant improvement in timed 'get up and go' score was found for all patients regardless of main diagnosis. Performance time decreased by 33% and discharge scores for all patient groups were approximately 20 seconds. Conclusions: The notable improvement in mobility and instrumental activities of daily living on the one hand, and the minor improvements achieved in basic activities of daily living on the other, suggests a need to revise treatment goals of day hospitals. abstract_id: PUBMED:10095503 Where to from here: patients of a day hospital rehabilitation programme perceived needs following stroke. The loss of ability to perform routine activities can result in difficulties for the stroke patient to resume their normal lifestyle. A desire to assist stroke patients after discharge prompted this pilot study into the needs of these patients during and after a program of rehabilitation at a day hospital. A longitudinal approach was used for this pilot study. The aim of the study was to identify patients' needs pertaining to the resumption of their lifestyle, and social and emotional well-being, that emerged during the rehabilitation programme at the day hospital, on its completion and within six months of discharge from the day hospital. The study consisted of three semi-structured interviews with ten stroke patients over a period of nine to twelve months. The semi-structured interview employed open-ended questions to collect information from the participants. The interview process sought to elicit the needs of the participants during their time at the day hospital and then after discharge. It was evident from the interviews that the needs of the participants were primarily the opportunity to practice activities which could improve physical capabilities as the physical limitations placed on the participants by the stroke affected their motivation, morale and general sense of self. The day hospital was instrumental in providing participants with opportunities to learn to 'use' what was functioning, and therefore help the participants improve their physical capabilities. This assisted them to regain their independence. During the interviews participants also spoke about social and emotional needs. While they received some support from interactions with the staff and other patients at the day hospital, participants stated that mostly social and emotional support came from their families. After discharge from the day hospital participants were unable to identify specific needs which they believed the day hospital could provide. abstract_id: PUBMED:18723862 Home versus day rehabilitation: a randomised controlled trial. Objective: to assess the effect of home versus day rehabilitation on patient outcomes. Design: randomised controlled trial. Setting: post-hospital rehabilitation. Participants: two hundred and twenty-nine hospitalised patients referred for ambulatory rehabilitation. Interventions: hospital-based day rehabilitation programme versus home-based rehabilitation programme. Main Outcome Measures: at 3 months, information was collected on hospital readmission, transfer to residential care, functional level, quality of life, carer stress and carer quality of life. At 6 months, place of residence, hospital re-admissions and mortality status were collected. Results: there were significant improvements in the functional outcomes from baseline to 3 months for all participants. At discharge, carers of patients in day hospital reported higher Caregiver Strain Index (CSI) scores in comparison to home rehabilitation carers (4.95 versus 3.56, P = 0.047). Patients in day hospital had double the risk of readmission compared to those in home rehabilitation (RR = 2.1; 95% CI 1.2-3.9). This effect persisted at 6 months. Conclusions: day hospital patients are more likely to be readmitted to hospital possibly due to increased access to admitting medical staff. This small trial favours the home as a better site for post-hospital rehabilitation. abstract_id: PUBMED:12437863 Predictors of handicap situations following post-stroke rehabilitation. Purpose: Many stroke survivors have to cope with impairments and disabilities that may result in the occurrence of handicap situations. The purpose of the study was to explore bio-psycho-social predictors of handicap situations six months after discharge from an intensive rehabilitation programme. Methods: At discharge from a rehabilitation programme, participants were evaluated with instruments measuring motor, sensory, cognitive, perceptual, affective and psychosocial impairments and disabilities that may play a role in the development of handicap. Some other demographic and clinical variables, and those related to rehabilitation, were also collected. Six months later, they were re-assessed in their own environment in order to document their handicap level with the Assessment of Life Habits (LIFE-H). Results: One hundred and thirty-two stroke patients participated in the discharge evaluation and 102 of them also participated in the handicap measurement. Relationships between handicap level and impairments and disabilities were all statistically significant. Multiple regression analyses indicated that affect, lower extremity co-ordination, length of stay in rehabilitation, balance, age and comorbidity at the end of an intensive rehabilitation programme are the best predictors of handicap situations six months later (adjusted R(2): 68.1%). Conclusions: In spite of its exploratory nature, this study revealed that, among a substantial number of personal characteristics, some were more related to a handicap measure and have greater predictive value. Other studies should be carried out to validate these findings and to consider more environmental factors in order to better understand factors related to the development of handicap situations. abstract_id: PUBMED:20441412 Comparison of rehabilitation outcomes in day hospital and home settings for people with acquired brain injury - a systematic review. Purpose: To conduct a systematic literature review that identifies and summarises the findings of studies that have compared the outcomes of outpatient, brain injury rehabilitation programmes conducted in real-life home settings and day hospital clinic settings. Method: A systematic search, using specific inclusion and exclusion criteria, of English language electronic databases, as well as manual search of reference lists of identified articles. Each study was evaluated and rated using 11 methodological criteria. The findings are summarised. Results: Database searches, using key terms, initially yielded 4484 articles. Of these articles, 17 articles were included in the review and rated according to methodological quality. The majority of studies indicated that the outcomes of rehabilitation programmes conducted in home settings are at least equivalent to rehabilitation conducted in day hospital and outpatient settings for people with acquired brain injury. Most identified studies (15/17) were with groups of people with a diagnosis of stroke, primarily over the age of 65 years. Assessments of impairment or activity limitation were primarily used to reflect outcomes and no studies used participation or goal-based outcome measures. Conclusions: The available studies indicate that outcomes of outpatient rehabilitation programmes delivered at home, of short-term duration (mostly 3 months) for people with stroke recently discharged from hospital, are at least equivalent to day hospital-based outpatient rehabilitation programme outcomes. However, there have been no controlled studies designed to investigate the influence of therapy context (home and clinic settings) on outcomes for people receiving outpatient neurological rehabilitation. Furthermore, investigations of the efficacy of community-based rehabilitation with younger people with brain injuries, caused by mechanisms other than stroke, are required. abstract_id: PUBMED:2933353 Geriatric rehabilitation in day hospitals. The main objective of geriatric day hospitals, that of physical rehabilitation of elderly patients mainly with strokes and arthritis, is matched by their facilities and their medical, nursing and therapy staffing. Patients attend by ambulance on 1 or 2 days a week for about 3 months for assessment and treatment, while continuing to live at home. Day hospitals are a popular and effective resource for the rehabilitation of selected patients, though not necessarily cheaper than institutional care. Problems remain in the separation of social care and psychogeriatric patients and in ambulance transport. abstract_id: PUBMED:17364783 The association between patients' cognitive status and rehabilitation outcome in a geriatric day hospital. Purpose: To assess whether and to what extent cognitive status affects functional gain in mobility and extended activities of daily living (ADL) in patients admitted to a geriatric day hospital. Methods: During 2003, 135 older patients admitted to a rehabilitation day hospital were evaluated by the Mini Mental State Examination (MMSE) and the Cognitive Functional Independence Measure (FIM). On admission and at discharge, they were assessed by the Nottingham Extended ADL Index (NEAI) and the Timed Get Up and Go test (TUG). Results: Orthopedic and stroke patients achieved significant functional gain in mobility regardless of their cognitive status. Nevertheless, cognitively intact stroke patients achieved higher admission and discharge NEAI scores, manifesting greater absolute functional gain than cognitively impaired patients. The patients' ability to extract their rehabilitation potential was significantly better. Conclusions: Cognitively impaired patients can improve in mobility and performance of extended ADL during day hospital rehabilitation. Evaluation of the patients' cognitive status may enable the multidisciplinary team to develop suitable individual programs and better allocate resources. Cognitively intact stroke patients may benefit from extended rehabilitation treatment compared to impaired stroke patients as they are more capable of utilizing their potential to achieve independence in daily tasks. abstract_id: PUBMED:26727977 Effectiveness of a fall-risk reduction programme for inpatient rehabilitation after stroke. Purpose: To evaluate effectiveness of fall-risk-assessment-based fall prevention for stroke rehabilitation inpatients. Method: A consecutive series of 232 patients admitted for the first time to a subacute stroke-rehabilitation ward during 2010-2011 was studied in detail. The Assessment Sheet for Fall Prediction in Stroke Inpatients (ASFPSI by Nakagawa et al.) was used to assess fall-risk upon admission. Association of ASFPSI score and patient characteristics with actual falls was statistically tested. Yearly incidence of falls per 1000 hospital days (HD) was retrospectively audited for the 2006-2014 period to evaluate effectiveness of fall-risk reduction measures. Results: The observed incidence of falls over the detailed-study-period was 3.0/1000 HD; 39% of the fallers fell during the first week after admission. ASFPSI score was not significantly associated with falls. Longer hospital stay, left body-side affected and non-extreme FIM score (55-101) were associated with higher odds of fall. Introduction of fall-risk reduction measures followed by compulsory fall-risk assessment lead to incidence of falls dropping from 7.1/1000 HD in 2006 to 2.8/1000 HD in 2011 and remaining at that level until 2014. Conclusions: The fall-risk-assessment-based measures appear to have led to decreasing falls risk among post-stroke rehabilitation inpatients classified as being at high risk of falls. The fall prevention programme as a whole was successful. Patients with non-extreme level of functional independence should receive enhanced fall prevention. Implications for Rehabilitation Recognising the fall risk upon the patient's admission is essential for preventing falls in rehabilitation wards. Assessing the fall risk is a team tasks and combines information from various sources. Assessing fall risk in stroke patients using the assessment sheet by Nakagawa et al. immediately upon admission systematically draws attention to the risk of falls in each individual patient. Answer: Yes, a day hospital rehabilitation programme is associated with a reduction of handicap in stroke patients. The evidence from the studies indicates that patients who participate in day hospital rehabilitation programmes experience significant improvements in various aspects of their functioning. For instance, one study found that the London Handicap Scale (LHS) scores at discharge changed significantly for mobility, physical independence, and occupation, with an overall change in LHS score of 2.3 points (20%) and an effect size of 0.43 (PUBMED:15137557). Another study reported statistically significant improvements in Physical Independence and Mobility Handicaps, which correlated with improvements in ADL Self-care and ADL Mobility (PUBMED:1534933). Additionally, an observational study showed that the majority of patients improved in their Nottingham Extended ADL Index score, and there was a significant improvement in the timed 'get up and go' score for all patients regardless of the main diagnosis (PUBMED:14606741). Furthermore, a randomised controlled trial comparing home versus day rehabilitation found significant improvements in functional outcomes for all participants, although day hospital patients had a higher risk of readmission (PUBMED:18723862). Another study highlighted that cognitively intact stroke patients achieved higher admission and discharge NEAI scores, showing greater absolute functional gain than cognitively impaired patients (PUBMED:17364783). Lastly, a systematic review indicated that outcomes of outpatient rehabilitation programmes delivered at home for people with stroke recently discharged from hospital are at least equivalent to day hospital-based outpatient rehabilitation programme outcomes (PUBMED:20441412). In summary, the evidence suggests that day hospital rehabilitation programmes are beneficial in reducing the level of handicap in stroke patients, improving their mobility, physical independence, and ability to perform activities of daily living.
Instruction: The microbial pattern of the catheter exit-site infection in peritoneal dialysis: A non-diphtheria Corynebacteria emergence? Abstracts: abstract_id: PUBMED:17725455 The microbial pattern of the catheter exit-site infection in peritoneal dialysis: A non-diphtheria Corynebacteria emergence? Background: A prospective cohort study was undertaken to compare the rates of the infecting microorganisms of the peritoneal catheter exit-site in three periods of the prophylactic protocol of a peritoneal dialysis program. All patients treated for more than one month on Peritoneal Dialysis were included: Fourty-eight in Period 1 (P1), 48 in Period 2 (P2), and 54 in Period 3 (P3). Each period was of 3 years. Methods: Infection prophylaxis protocol: P1: hydrogen peroxide or povidone iodine and non-occlusive dressing; P2: sterile water (boiled water) instead of antiseptic agents, semi-permeable dressing for taking showers, and nasal mupirocine prophylaxis for Staphylococcus aureus carriers; P3: equal to P2, plus local application of antibiotics in equivocal exit-site for infection and argentic nitrate in granulation tissue. Main Outcome Measure: The rates of catheter infection and microorganisms causing infection were analysed by means of the Poisson regression method. Chi-square and ANOVA when appropriate. Results: The proportion of catheters implanted by nephrologist or surgeon (p&lt;0.01) and modality treatment by CAPD or CCPD (p&lt;0.0001) were significantly different in the three periods, while the Staph. Aureus carrieres was in the limit of significance (p=0.048). Throughout the three periods, a significantly decreasing rate of total (P=0.0035) and acute infections (P&lt;0.001), Staph. aureus (P=0.003) and peritonitis (P=0.0025) were found. The Pseudomonas aer. (P=0.006) and Gram negative Bacteria (P=0.023) decreased significantly in P2. The multiple factor analysis included eight factors: sex, age group, ESRD, DM, catheter implantation (nephrologist, surgeon), modality treatment (CAPD, CCPD), manufacturer and prophylaxis period as possible predictors of the catheter infections, the specific microorganisms and the peritonitis. That analysis revealed the prophylaxis period as the main predictive factor of the improvements found (p&lt;0.02,- p&lt;0.001). In contrast, the Corynebacteria spp. increased significantly (P=0.008) throughout the three periods. One half of the Corynebacteria in each period could be considered colonisers. The other half caused true infections, but not one of those episodes required catheter intervention. The non-diphtheria Corynebacteria increase was found related with the continuous cycling Peritoneal Dialysis treatment in multiple factor analysis (p=0.0023) and in the proportion analysis (P=0.039, c2). Conclusion: The progressive protocol applied obtained good results, without the continued use of local antiseptics or antibiotics at the exit-site. However, the non-diphtheria Corynebacteria sp. infection increment favours the consideration of an antiseptic agent for the exit-site care. abstract_id: PUBMED:15490985 Exit-site infections by non-diphtheria corynebacteria in CAPD. Non-diphtheria corynebacteria species cause disease in risk populations such as immunocompromised patients and patients with indwelling medical devices. Despite reports of exit-site infection and peritonitis caused by non-diphtheria corynebacteria, these organisms are frequently dismissed as contaminants. During a 10-year observation period, we prospectively identified 8 cases of exit-site/tunnel infections caused by 2 different species of corynebacteria (Corynebacterium striatum in 5 and C. jeikeium in 3 cases). Four patients experienced a second episode of exit-site infection 3 months (2 cases), 25 months, and 40 months, respectively, after termination of an oral cephalosporin therapy of 4 to 6 weeks' duration. Non-diphtheria corynebacteria accounted for 9% of all exit-site infections during the study period. All catheter-related infections healed; no catheter had to be removed. The diagnosis of catheter-related non-diphtheria corynebacteria infection may be suspected when Gram stain shows gram-positive rods and with colony morphology and commercial biochemical identification systems. Susceptibility of non-diphtheria corynebacteria to antibiotics may vary, especially in C. jeikeium. Virtually all Corynebacterium species are sensitive to vancomycin. Empirical antibiotic therapy with vancomycin should be initiated while antibiotic susceptibility testing is being carried out. Oral cephalosporin may be an alternative treatment regimen for exit-site infections if sensitive. This study highlights the importance of non-diphtheria corynebacteria as emerging nosocomial pathogens in the population of end-stage renal disease patients on on continuous ambulatory peritoneal dialysis. abstract_id: PUBMED:14579942 Corynebacterium striatum peritoneal dialysis catheter exit site infection. Background: Regarded as normal flora of the human skin and mucus membranes, non-diphtheria corynebacteria are frequently dismissed as contaminants or harmless colonizers. Recently, the pathogenic potential of C. striatum has been realized in immunocompromised patients with indwelling medical devices and previous antibiotic exposure. Objective: We report here the diagnosis, treatment and clinical outcome of a peritoneal dialysis patient with a C. striatum infection of the catheter exit site. The aim is to present important features to assist in identifying clinically significant infections and provide guidelines for treatment. Results: An immunocompromised patient with previous antimicrobial exposure developed an acute dialysis catheter exit site infection. C. striatum was isolated in pure growth. After initial treatment failure with oral antibiotics and intensified wound care, a satisfactory outcome was ultimately achieved without relapse or loss of the catheter with a 1-month course of vancomycin, 1 g intravenously, administered at 5-day intervals. Conclusions: The virulent capacity of Corynebacterium species should not be underestimated, particularly in high-risk patients. The presence of clinical signs of infection with isolation of the organism in pure culture and the presence of Gram-positive rods on direct Gram stain, especially in association with a leukocyte reaction, supports a cause and effect relationship. Because corynebacteria may be multiresistant, susceptibility testing should be performed on clinically significant isolates. Initial antibiotic selection is influenced by the severity of the infection, however, current experience favors vancomycin in significant infections. abstract_id: PUBMED:24403054 Identification of low-molecular-weight compounds inhibiting growth of corynebacteria: potential lead compounds for antibiotics. The bacterial genus Corynebacteria contains several pathogenic species that cause diseases such as diphtheria in humans and "cheesy gland" in goats and sheep. Thus, identifying new therapeutic targets to treat Corynebacteria infections is both medically and economically important. CG2496, a functionally uncharacterized protein from Corynebacterium glutamicum, was evaluated using an NMR ligand-affinity screen. A total of 11 compounds from a library of 460 biologically active compounds were shown to selectively bind CG2496 in a highly conserved region of the protein. The best binder was identified to be methiothepin (KD =54 ± 19 µM), an FDA-approved serotonin receptor antagonist. Methiothepin was also shown to inhibit the growth of C. glutamicum, but not bacteria that lack CG2496 homologs. Our results suggest that CG2496 is a novel therapeutic target and methiothepin is a potential lead compound or structural scaffold for developing new antibiotics specifically targeting Corynebacteria. abstract_id: PUBMED:9480648 Susceptibility of Corynebacteria isolated in St Petersburg to antibacterial drugs Susceptibility of 150 Corynebacterium isolates (91 strains of C.pseudodiphtheriticum and 59 strains of the ANF group corynebacteria) to 21 antibacterial drugs was determined by the method of serial dilutions in a solid medium. It was shown that the MIC of the drugs for the diphtheroids was within the ranges of &lt; 0.015 to &gt; 32.0 micrograms/ml. 66 per cent of the Corynebacterium strains circulating in St. Petersburg was resistant at least to 1 antibacterial drug. The Corynebacterium isolates with moderate resistance to erythromycin and lincomycin (57.3 per cent) and resistant to trimethoprime (16.7 per cent) were the most frequent. 8.0 per cent of the diphtheroids was resistant at least to 4 antibacterial drugs. No significant difference in the susceptibility of the ANF group corynebacteria and C.pseudodiphtheriticum to the drugs was observed. Gentamicin, rifampicin, tetracycline and doxycycline showed high activity against the corynebacteria at present circulating in St. Petersburg. When antibacterial therapy of the infection due to corynebacteria fails it is necessary to estimate antibioticograms of Corynebacterium pure cultures. abstract_id: PUBMED:34292686 Possibilities of practical application of different culture mediums for laboratory diagnostic of diphtheria. The purpose of the work is to evaluate the cultural and morphological properties of colonies of clinically significant corynebacteria on culture mediums for the isolation of corynebacteria. The study used 9 culture mediums for the isolation of corynebacteria: a culture medium for the isolation of corynebacteria (Corynebacagar); Tellurite-containing blood agars on base - Culture medium № 1 GRM, Culture agar for the cultivation of microorganisms (GRM agar), Culture medium for determining the sensitivity of microorganisms to antibacterial preparations - AGV, culture agar for the cultivation of dry microorganisms (SPA), Clauberg medium II, Hoyle Medium agar (Oxoid), Blood agar base (Conda), Columbia Agar Base (Conda). The work used 7 test strains of microorganisms from the State collections of pathogenic microorganisms - C. diphtheriae biovars gravis, mitis, intermedius, belfanti and subspecies lausannense, C. ulcerans and C.pseudotuberculosis. Studies were carried out in accordance with MUK 4.2.3065-13 «Laboratory diagnosis of diphtheria infection». We describe culture-morphological properties of strains on all tested culture mediums the isolation of corynebacteria after 24 and 48 hours of incubation. Analysis of the results on the growth properties of culture mediums showed that all culture mediums had high sensitivity - from dilution 10-7 for all test strains. Colonies of corynebacteria were visually detected on culture mediums after 19-20 hours of cultivation. When cultivating a suspension of corynebacteria from breeding 10-6 on culture mediums, the number of colonies ranged from 95±5 to 120±10. Conclusion. All culture mediums had differential diagnostic properties that ensure the growth of corynebacteria after the day of incubation. abstract_id: PUBMED:6760340 Infections caused by nondiphtheria corynebacteria. After decades of confusion about their microbiologic classification and clinical significance, the nondiphtheria corynebacteria have emerged as important pathogens. Although isolation of these organisms may represent contamination with skin flora, several species, including Corynebacterium ulcerans, Corynebacterium pseudotuberculosis (Corynebacterium ovis), Corynebacterium haemolyticum, Corynebacterium pseudodiptheriticum, Corynebacterium equi, Corynebacterium bovis, Corynebacterium xerosis, and corynebacteria of group JK, clearly cause disease in humans. Most of these organisms infect animals, which are the source of human infection with some species. Some nondiptheria species of Corynebacterium produce recognizable clinical syndromes such as granulomatous lymphadenitis, pneumonitis, pharyngitis, cutaneous infections, and, most commonly, endocarditis. Certain species infect healthy hosts, while others predominantly attack immunocompromised individuals. Several species produce toxins, including a diphtheria-like toxin, a dermonecrotic toxin, and a soluble hemolysin. A microbiologic scheme of identification of the genus Corynebacterium and its major defined species is presented. abstract_id: PUBMED:30324081 Corynebacteria as a cause of pulmonary infection: a case series and literature review. Background: In most cases of community-acquired pneumonia (CAP), an etiologic agent is not determined; the most common report from the microbiological evaluation of sputum cites "normal respiratory flora." Non-diphtheria Corynebacterium spp., a component of this flora, is commonly viewed as a contaminant, but it may be the cause of pneumonia and the frequency with which it causes CAP may be underestimated. Case Presentations: This report present 3 cases of CAP in which Corynebacterium spp. was clearly the predominant isolate; identification was confirmed by matrix-assisted laser desorption ionization time of flight (MALDI-TOF) mass spectrometry. Two cases were caused by C. propinquum and one by C. striatum. Two patients had a tracheostomy and one was on hemodialysis. Patients who received an appropriate antibiotic responded well. Conclusion: When identified as the predominant isolate in sputum from a patient with CAP, Corynebacterium spp. should be considered as a potential cause of the infection. In cases with patients who have compromised airway clearance or who are immunocompromised, microaspiration may be responsible. While some Corynebacterium spp. are suspectible to antibiotics usually prescribed for CAP, others are susceptible only to vancomycin or aminoglycosides. Vancomycin is thus the appropriate empiric antibiotic, pending speciation and susceptibility test results. The number of reported cases with result of antibiotic susceptibility testing, however, remains limited, and further investigation is needed. Non-diphtheria Corynebacterium spp. represent a noteworthy clinical cause of pneumonia. Identification by Gram stain and as a predominant organism on culture demands careful consideration for management. abstract_id: PUBMED:19628642 Corynebacterium diphtheriae as an emerging pathogen in nephrostomy catheter-related infection: evaluation of traits associated with bacterial virulence. Corynebacterium diphtheriae still represents a global medical challenge, particularly due to the significant number of individuals susceptible to diphtheria and the emergence of non-toxigenic strains as the causative agents of invasive infections. In this study, we characterized the clinical and microbiological features of what we believe to be the first case of C. diphtheriae infection of a percutaneous nephrostomy catheter insertion site in an elderly patient with a fatal bladder cancer. Moreover, we demonstrated the potential role of adherence, biofilm formation and fibrin deposition traits in C. diphtheriae from the catheter-related infection. Non-toxigenic C. diphtheriae isolated from the purulent discharge (named strain BR-CAT5003748) was identified by the API Coryne system (code 1 010 324) and a multiplex PCR for detection of dtxR and tox genes. Strain BR-CAT5003748 showed resistance to oxacillin, ceftazidime and ciprofloxacin. In experiments performed in vitro, the catheter isolate was classified as moderately hydrophobic and as moderately adherent to polystyrene surfaces. Glass provided a more effective surface for biofilm formation than polystyrene. Micro-organisms adhered to (&gt;1.5 x 10(6) c.f.u.) and multiplied on surfaces of polyurethane catheters. Microcolony formation (a hallmark of biofilm formation) and amorphous accretions were observed by scanning electron microscopy on both external and luminal catheter surfaces. Micro-organisms yielded simultaneous expression of localized adherence-like and aggregative-like (LAL/AAL) adherence patterns to HEp-2 cells. Interestingly, the coagulase tube test resulted in the formation of a thin layer of fibrin embedded in rabbit plasma by the non-toxigenic BR-CAT5003748 strain. In conclusion, C. diphtheriae should be recognized as a potential cause of catheter-related infections in at-risk populations such as elderly and cancer patients. LAL/AAL strains may be associated with virulence traits that enable C. diphtheriae to effectively produce biofilms on catheter surfaces. Biofilm formation and fibrin deposition could have contributed to the persistence of C. diphtheriae at the infected insertion site and the obstruction of the nephrostomy catheter. abstract_id: PUBMED:22493337 Bloodstream infection caused by nontoxigenic Corynebacterium diphtheriae in an immunocompromised host in the United States. Corynebacterium species are well-known causes of catheter-related bloodstream infections. Toxigenic strains of Corynebacterium diphtheriae cause respiratory diphtheria. We report a bloodstream infection caused by a nontoxigenic strain of C. diphtheriae and discuss the epidemiology, possible sources of the infection, and the implications of rapid species identification of corynebacteria. Answer: The microbial pattern of catheter exit-site infections in peritoneal dialysis has shown an emergence of non-diphtheria Corynebacteria. A prospective cohort study aimed to compare the rates of infecting microorganisms of the peritoneal catheter exit-site over three periods of a prophylactic protocol in a peritoneal dialysis program. The study found that while the rates of total and acute infections, Staphylococcus aureus, Pseudomonas aeruginosa, Gram-negative bacteria, and peritonitis decreased significantly over the periods, the incidence of Corynebacteria spp. infections increased significantly (PUBMED:17725455). Non-diphtheria Corynebacteria species, which are often dismissed as contaminants, have been increasingly recognized as pathogens, particularly in immunocompromised patients and those with indwelling medical devices. During a 10-year observation period, 8 cases of exit-site/tunnel infections caused by Corynebacterium striatum and C. jeikeium were identified, accounting for 9% of all exit-site infections. These infections were treatable, and no catheter had to be removed (PUBMED:15490985). Corynebacterium striatum, in particular, has been identified as a pathogen in immunocompromised patients with indwelling medical devices and previous antibiotic exposure. A case of a peritoneal dialysis patient with a C. striatum infection of the catheter exit site was reported, which was ultimately treated successfully with vancomycin without the need for catheter removal (PUBMED:14579942). The emergence of non-diphtheria Corynebacteria as a significant cause of catheter exit-site infections in peritoneal dialysis patients suggests the need for reconsideration of the use of antiseptic agents for exit-site care, as the progressive protocol applied in the study obtained good results without the continued use of local antiseptics or antibiotics at the exit-site (PUBMED:17725455).
Instruction: Does the topical use of epinephrine for sinus floor augmentation affect systemic hemodynamics? Abstracts: abstract_id: PUBMED:23648576 Does the topical use of epinephrine for sinus floor augmentation affect systemic hemodynamics? Purpose: Although epinephrine is one of the most commonly used vasoconstrictor in association with local anesthesia in dentistry, systemic effects of topical admission of epinephrine for sinus augmentation have not been investigated yet. The purpose of this study was to reveal the safety of epinephrine as a topical vasoconstrictor in sinus augmentation procedures. Patients And Methods: Forty-three healthy patients who require sinus floor augmentation for dental implant placement were included in this study. Patients were divided into 2 groups according to the application of either epinephrine-soaked cottonoid or saline-soaked cottonoid for sinus floor augmentation, and heart rate, systolic, and diastolic pressures were evaluated and compared before, during, and after the procedure. Results: Although there were changes in heart rate, systolic, and diastolic blood pressures, no statistical significance was observed for neither heart rate nor systolic and diastolic blood pressures (P &gt; 0.05). Conclusions: This study showed that the topical use of 1/100,000 epinephrine ensures efficacy by helping the clinician to elevate the sinus membrane and keeps the changes in systemic hemodynamics within safe limitations. abstract_id: PUBMED:19575117 Topical use of adrenaline in different concentrations for endoscopic sinus surgery. Unlabelled: The ideal adrenaline concentration remains unknown. Aim: Compare topical adrenaline solutions in different concentrations. Study Design: Prospective, double blind, randomized trial. Patients And Methods: 49 patients divided in 3 groups underwent endoscopic sinus surgery, using only topical solutions of adrenaline in different concentrations (1:2,000, 1:10,000 and 1:50,000). We compared the duration of surgery, intra-operative bleeding, plasmatic levels of catecholamines, hemodynamic parameters and changes in heart rhythm. Results: Surgery time was shorter in the group using adrenaline 1:2,000, which also showed less bleeding in all evaluations (objective and subjective - p &lt; 0.0001). Plasmatic levels of epinephrine rose in all groups, more sharply in the 1:2,000 group. There was a trend towards elevation of blood pressure in the groups using adrenaline 1:2,000 and 1:10,000, with a greater occurrence of hypertensive peaks. Discussion: We found a very significance bleeding difference favoring the 1:2,000. The blood pressure elevation in the 1:2,000 and 1:10,000 groups was progressive but very slow throughout the procedure, which could be associated with the anesthesia technique. Conclusion: We favor the use of topical adrenalin 1:2,000 due to a clear superiority in hemostasis. Further investigation is needed to corroborate our findings. abstract_id: PUBMED:20338114 Concentrated topical epinephrine is safe in endoscopic sinus surgery. Background: The safety of topical epinephrine (1:1000 concentration) has recently been called into question. No objective data have been presented examining this issue. Methods: We retrospectively reviewed all cases of endoscopic sinus surgery performed by the senior authors at their respective institutions. Perioperative records were reviewed for use of topical 1:1000 epinephrine and for any intraoperative or postoperative complications related to its use. Results: During the study periods, 1998 cases of endoscopic sinus surgery were identified where topical 1:1000 epinephrine was used. Two complications were identified (0.1%), one at each institution. One patient suffered coronary artery vasospasm, attributed to the use of topical 1:1000 epinephrine during surgery. The other patient developed transient intraoperative hypertension immediately after inadvertent submucosal injection of concentrated epinephrine. This event was caused by an erroneous switch with the 1% lidocaine/1:100,000 solution. This error occurred early in the institution's experience with concentrated topical epinephrine. After implementation of additional preventative safeguards, no further errors or complications have occurred. Conclusion: Concentrated epinephrine has the potential for complications when used during endoscopic sinus surgery. Nevertheless, with appropriate safeguards described in this study, its topical use is safe. abstract_id: PUBMED:28922579 Evaluating real-time effects of topical 1:1000 epinephrine in endoscopic sinus and skull-base surgery on hemodynamic parameters through intraoperative arterial line monitoring. Background: Administration of topical 1:1000 epinephrine is commonly used in practice to achieve vasoconstriction during endoscopic sinus surgery and skull-base surgery; however, real-time effects on cardiovascular changes from systemic absorption have not been well studied. Methods: Twenty-six patients undergoing endoscopic transsphenoidal resection of a pituitary lesion at a single institution were included into the study. Following arterial line placement by anesthesiology, 6 cottonoid pledgets soaked in 1:1000 epinephrine were placed into the bilateral nasal passages. Hemodynamic parameters including heart rate, blood pressure, and mean arterial pressure were collected at baseline, 30 seconds, and increments in minutes up to 10 minutes. Additional potentially confounding factors such as use of antihypertensives, stress dose steroids, and positioning with head pins were all performed following termination of data collection. Results: The majority of patients (20/26, 77%) showed no significant change in any parameter following placement of epinephrine soaked cottonoids. Six patients, however, had transient increases in blood pressure following administration of topical epinephrine, with a few requiring vasodilatory interventions. Return to baseline cardiovascular values were noted after an average of 7 minutes. There was no correlative preoperative characteristic that predicted sensitivity to placement of epinephrine. There were no lasting or permanent effects. Conclusion: Although intranasal topical 1:1000 epinephrine use showed no substantial hemodynamic changes in the majority of patients, in a subset of patients it can cause significant transient elevations in blood pressure to a degree necessitating intervention. Topical epinephrine should be used judiciously in endoscopic sinus surgery. abstract_id: PUBMED:21271600 Systematic review of topical vasoconstrictors in endoscopic sinus surgery. Objective: The objective of this study is to systematically review the literature and examine the safety for the use of topical vasoconstrictors in endoscopic sinus surgery. Study Design: Systematic review clinical trials. Method: A systematic literature search was performed in MEDLINE, EMBASE, The Cochrane Library, and National Guideline Clearinghouse, and references in the selected articles. Results: The search criteria captured 42 manuscripts with relevant titles. A systematic review on the topical use of phenylephrine was found; however, no other systematic review, meta-analyses, or clinical guidelines were identified. Six randomized clinical trials or comparative studies, as well as multiple case reports and review articles were also identified. The literature supports the safety of oxymetazoline and epinephrine when used judiciously in carefully selected patients undergoing endoscopic sinonasal surgery; however, topical phenylephrine is not recommended because of its risk profile. Conclusion: In sinus or nasal surgery, topical vasoconstrictors should be used in a manner that minimizes the risk of cardiovascular morbidity. abstract_id: PUBMED:32284027 Outcomes and Complications with Topical Epinephrine in Endoscopic Sinus Surgery: A Systematic Review and Meta-analysis. Objective: To assess the safety and efficacy of topical epinephrine in adults undergoing endoscopic sinus surgery (ESS). Data Sources: PubMed, Embase, and Ovid MEDLINE online databases. Review Methods: Non-case report English articles studying the effects of topical epinephrine as the topical vasoconstrictor used in adult ESS were found from the online databases through January 2019. The PRISMA literature selection process was used (Preferred Reporting Items for Systematic Reviews and Meta-analyses). Results: An overall 2216 articles were identified, with 9 meeting inclusion criteria involving a total of 5043 patients. All 9 studies assessed the safety of topical epinephrine, while 5 examined efficacy. Intraoperative average blood loss (ABL) ranged from 60 to 426 mL. Topical epinephrine concentrations varied from 1:1000 to 1:100,000, and 3 major complications were found for a rate of 0.06%. There were no reports of ophthalmic, orbital, or skull base injury, nor were there reports of cerebrospinal fluid leaks. A meta-analysis was performed on the 4 studies examining ABL. Estimated mean (95% CI) ABL was 119.4 mL (39.1-199.6) in the higher-concentration cohort (&gt;1:10,000) and 372.2 mL (296.8-447.5) in the lower-concentration cohort (≤1:10,000) (P = .001). Conclusion: Topical epinephrine is generally safe and provides acceptable hemostasis during ESS, with higher concentrations (&gt;1:10,000) providing improved hemostasis. Caution is advised for its use in patients with preexisting cardiovascular disease or in combination with other topical or injected vasoconstrictive agents. More prospective comparative studies are necessary to determine the ideal hemostatic concentration of epinephrine in ESS. abstract_id: PUBMED:26152362 Safety and efficacy of concentrated topical epinephrine use in endoscopic endonasal surgery. Background: Effective topical decongestion is critical for the safe performance of endoscopic endonasal surgery (EES). Despite the vasoconstriction offered by topical concentrated (1:1000) epinephrine (CE), its use has not gained widespread acceptance because of concerns over systemic absorption and its effect on blood pressure and postoperative rebound epistaxis. The purpose of this study was to examine the physiological changes in blood pressure and rate of epistaxis with use of topical CE in a variety of endoscopic nasal procedures. Methods: EES procedures using inhalational anesthesia and topical CE performed on 1140 consecutive patients (14 patients under 18 years) between 2011 and 2014 were evaluated retrospectively. Demographic data, intraoperative hemodynamic parameters, and postoperative epistaxis rates were recorded. Results: The mean patient age was 45.8 years (range, 5-97 years). No intraoperative cardiovascular complications related to CE use were found. Four patients (0.35%) developed postoperative epistaxis requiring intervention. The mean estimated blood loss among patients undergoing bilateral sinus surgery, skull-base surgery, and orbital decompression was (mean ± SD) 61.7 ± 51.6 mL, 60.1 ± 115 mL, and 67.9 ± 42.2 mL, respectively. The maximum mean systolic blood pressure among these groups was 108.3 ± 28.3 mmHg, 111.9 ± 27.6 mmHg, and 95.1 ± 31.7 mmHg, respectively. Conclusion: The use of topical CE is safe when performing endoscopic endonasal procedures. CE was not associated with any intraoperative complications. The profound intraoperative vasoconstriction does not confer a higher rate of postoperative rebound epistaxis. abstract_id: PUBMED:12271565 A novel aid to elevation of the sinus membrane for the sinus lift procedure. A maxillary sinus augmentation graft floor can increase vertical bone height for implant placement in patients with a pneumatized sinus. A simple technique can make this routine procedure even more predictable. After the sinus membrane has been adequately reflected, a cottonoid soaked with 1 carpule of 2% Lidocaine with 1:100,000 epinephrine and left in place for 5 to 15 minutes in each maxillary sinus can help create vasoconstriction and hemostasis. By arresting the oozing type of bleeding from the sinus membrane and the floor of the maxillary sinus, the cottonoid technique enhances direct visualization of the sinus and sinus membrane, further reflects the sinus membrane bluntly and gently with minimal risk of a membrane perforation, and serves as a volume indicator to assess the graft volume required. abstract_id: PUBMED:33840220 Survey of Anesthesiologists on Topical Vasoconstrictors and Intravenous Tranexamic Acid for Endoscopic Sinus Surgery. Objectives: Topical vasoconstrictors and intravenous tranexamic acid (IV TXA) are safe and efficacious to decrease bleeding and improve the surgical field during endoscopic sinus surgery (ESS). The purpose of this study was to investigate practice patterns, awareness of clinical evidence, and comfort levels among anesthesia providers regarding these hemostatic agents for ESS. Methods: A total of 767 attending anesthesiologists, residents, and certified registered nurse anesthetists (CRNAs) at 5 United States academic centers were invited to participate in a survey regarding their experience with IV TXA and 3 topical vasoconstrictor medications (oxymetazoline, epinephrine, and cocaine) during ESS. Results: 330 (47%) anesthesia providers responded to the electronic survey. 113 (97%) residents, 92 (83%) CRNAs, and 52 (68%) attendings managed 5 or fewer ESS cases per month. Two-thirds of providers had not reviewed efficacy or safety literature for these hemostatic agents. Oxymetazoline was perceived safest, followed by epinephrine, IV TXA, and cocaine. Respondents considered potential side effects over surgical field visibility when selecting agents. The majority of providers had no formal training on these agents for ESS, but indicated interest in educational opportunities. Conclusion: Many anesthesia providers are unfamiliar with safety and efficacy literature regarding agents used to improve hemostasis for ESS, highlighting a need for development of relevant educational resources. Rhinologic surgeons have an opportunity to communicate with anesthesia colleagues on the use of hemostatic agents to improve the surgical field during ESS. abstract_id: PUBMED:26383187 Topical concentrated epinephrine (1:1000) does not cause acute cardiovascular changes during endoscopic sinus surgery. Background: Topical epinephrine is used in endoscopic sinonasal surgery for local vasoconstriction. Potential for cardiovascular complications remains a concern for some due to the possibility of systemic absorption. Topical vs injected epinephrine was examined in a prospective analysis of perioperative cardiovascular effects, and in an audit of cardiovascular complications during endoscopic sinonasal surgery. Methods: A prospective cohort study of patients undergoing endoscopic sinonasal surgery was performed. Topical (1:1000) and injected (1:100,000) epinephrine were assessed. Cardiovascular outcomes of heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), mean arterial pressure (MAP), and electrocardiogram (ECG) changes were examined at baseline and minutely post-topical application (to 10 minutes) and postinjection (to 5 minutes). A retrospective assessment of cardiovascular events associated with a standardized regimen of topical (1:2000) and injected (1:100,000) epinephrine was performed. Results: Nineteen patents were assessed (43.42 ± 15.90 years, 47.4% female) in the prospective analysis. Post-topical epinephrine, no significant changes occurred in any cardiovascular parameter. However, following injected epinephrine, changes in HR (59.53 vs 64.11 bpm, p &lt; 0.001), SBP (96.16 vs 102.95 mmHg, p = 0.015), DBP (56.53 vs 60.74 mmHg, p = 0.019), and MAP (69.74 vs 74.81 mmHg, p = 0.002) occurred. On repeated-measures analysis of variance (ANOVA) all parameters were significantly affected by injection. No ECG abnormalities were seen in either topical or injection phases. The retrospective analysis of 1260 cases identified 2 cases of cardiovascular complications (0.16%), both relating to injected epinephrine. Conclusion: Combination topical (1:1000 to 1:2000) and injectable (1:100,000) epinephrine is safe for use in endoscopic sinonasal surgery. Injection resulted in the cardiovascular changes and accounted for the cardiovascular events reported. Answer: The topical use of epinephrine for sinus floor augmentation does not significantly affect systemic hemodynamics. A study involving 43 healthy patients undergoing sinus floor augmentation for dental implant placement showed that the use of 1/100,000 epinephrine-soaked cottonoid did not result in statistically significant changes in heart rate, systolic, or diastolic blood pressures (PUBMED:23648576). This suggests that the topical application of epinephrine in this context ensures efficacy in helping clinicians elevate the sinus membrane while keeping changes in systemic hemodynamics within safe limits.
Instruction: Do South Asian women with PCOS have poorer health-related quality of life than Caucasian women with PCOS? Abstracts: abstract_id: PUBMED:21171983 Do South Asian women with PCOS have poorer health-related quality of life than Caucasian women with PCOS? A comparative cross-sectional study. Background: Polycystic ovary syndrome (PCOS) is the most common chronic endocrine disorder affecting women of reproductive age. This study aimed to compare the HRQoL of South Asian and white Caucasian women with PCOS, given that it is particularly common among women of South Asian origin and they have been shown to have more severe symptoms. Methods: The Polycystic Ovary Syndrome Questionnaire (PCOSQ) and the Short Form-36 (SF-36) were administered in a cross-sectional survey to 42 South Asian and 129 Caucasian women diagnosed with PCOS recruited from the gynaecology outpatient clinics of two university teaching hospitals in Sheffield and Leeds. Additional clinical data was abstracted from medical notes. Normative data, collected as part of the Oxford Health and Lifestyles II survey, was obtained to compare SF-36 results with ethnically matched women from the general UK population. Using the SF-36, normative HRQoL scores for women of South Asian origin were lower than for Caucasian women. Given this lower baseline we tested whether the same relationship holds true among those with PCOS. Results: Although HRQoL scores for women with PCOS were lower than normative data for both groups, South Asian women with PCOS did not have poorer HRQoL than their Caucasian counterparts. For both the SF-36 and PCOSQ, mean scores were broadly the same for both Asian and Caucasian women. For both groups, the worst two HRQoL domains as measured on the PCOSQ were 'infertility' and 'weight', with respective scores of 35.3 and 42.3 for Asian women with PCOS compared to 38.6 and 35.4 for Caucasian women with PCOS. The highest scoring domain for South Asian women with PCOS was 'menstrual problems' (55.3), indicating best health, and was the only statistically significant difference from Caucasian women (p = 0.01). On the SF-36, the lowest scoring domain was 'Energy &amp; Vitality' for Caucasian women with PCOS, but this was significantly higher for Asian women with PCOS (p = 0.01). The best health status for both groups was 'physical functioning', although this was significantly lower for South Asian women with PCOS (p = 0.005). Interestingly, only two domains differed significantly from the normative data for the Asian women with PCOS, while seven domains were significantly different for the Caucasian women with PCOS compared to their normative counterparts. Conclusions: The HRQoL differences that exist between South Asian and Caucasian women in the general population do not appear to be replicated amongst women with PCOS. PCOS reduces HRQoL to broadly similar levels, regardless of ethnicity and differences in the normative baseline HRQoL of these groups. abstract_id: PUBMED:37817138 The effects of weight loss on health-related quality of life in obese women with PCOS and controls. Background: Polycystic ovary syndrome (PCOS) is a complicated endocrine disorder with widespread symptoms that reduce women's quality of life. The adverse effect of associated obesity on this reduction is unclear, therefore the impact of weight loss on health-related quality of life (HRQOL) in obese women with PCOS is unknown. This study aimed to compare the impact of weight loss following a 24-week dietary intervention on HRQOL in obese women with and without PCOS. Methods: In a 24-week hypocaloric LGI (low glycemic index) diet intervention study, 286 women were recruited (140 PCOS, 146 controls) and 216 participants (PCOS = 105, non-PCOS = 111) completed the study. HRQOL was assessed using the SF-36 questionnaire (Short Form Health Survey). Physical activity was measured using the short form of the International Physical Activity Questionnaire (IPAQ). Anthropometric assessments, dietary intake, physical activity levels, and HRQOL scores, according to the Iranian version of SF-36, were compared at baseline and 24 weeks with intervention. Results: At the end of the intervention, there was no significant difference in the percentages of weight loss between the two groups (PCOS: 6.29 ± 3.32% vs. non-PCOS: 7 ± 3.62%, p = 0.1). At baseline, women with PCOS had lower mean scores in aspects of physical function (PF), general health perception (GH), role limitation due to emotional problem (RE), vitality (VT), mental health (MH), physical (PCS) and mental component summary scales (MCS), compared to non-PCOS (P &lt; 0.01). At 24 weeks, the percentage of improvement in HRQOL in the non-PCOS group was higher compared to the PCOS group but this difference didn't reach statistical significance except for PCS. In both groups, greater weight loss was associated with further improvement in the physical aspects of HRQOL and vitality. In the non-PCOS group, with trivial exception, greater weight loss was associated with greater improvement in the mental aspects of HRQOL. Conclusions: Both obese PCOS and non-PCOS women achieved nearly similar levels of improvement in HRQOL due to weight loss induced by a hypocaloric LGI diet. Trial Registration: This study is registered in the Iranian Randomized Clinical Trials Registry (IRCT, code: IRCT2016092129909N1). abstract_id: PUBMED:15333601 Infertility caused by PCOS--health-related quality of life among Austrian and Moslem immigrant women in Austria. Background: The polycystic ovarian syndrome (PCOS) is the most common endocrine disorder affecting female fertility. In this study, we examined the symptomatology of PCOS and the health-related quality of life among infertility patients suffering from PCOS with different socio-cultural and ethnic background. Methods: Symptomatology of PCOS, body composition characteristics as well as socio-economic factors were examined in 49 PCOS infertility patients of the University Clinic for Gynecology and Obstetrics, in Vienna, who originated from two socio-cultural subgroups, Austrian women and Moslem immigrant women. Additionally, the Cronins health-related quality of life questionnaire of women with PCOS was used in order to examine the impact of PCOS symptoms on the individual quality of life of the affected women. Results: In terms of the appearance of the symptoms, the typical heterogeneity of PCOS could be found in both subgroups with no differences. However, differences in the health-related quality of life were impressive. Health-related quality of life of women from an Islamic background was affected to a greater degree than that of Austrian women, although no differences in symptomatology were found. This was true of all five domains investigated (infertility, overweight, hirsutism, menstrual irregularities and emotional problems). First of all, infertility was a dramatic problem for immigrant women. Islamic women had very high reproductive pressure. The Moslem immigrant PCOS women suffer more from infertility than do Austrian women. Conclusions: Health professionals should be sensitive to the ethnicity, religious and cultural background of their patients to provide the best possible medical support. abstract_id: PUBMED:31172306 Quality of life among infertile PCOS patients. Objective: To investigate whether quality of life differs between PCOS and non-PCOS infertile women. Study Design: Two questionnaire forms of quality of life (PCOSQ, SF-36) were given to 238 women. Patients were asked to answer all of the questions in both of the questionnaires. Of these 238 women, only 49 infertile PCOS patients, 47 infertile non-PCOS patients and 62 fertile PCOS patients fılled the forms completely. Results: Both PCOSQ and SF-36 scores are lowest in infertile PCOS group. Although total PCOSQ score of infertile non-PCOS group was better than non-infertile PCOS and fertile PCOS groups; there was trend for lower scores when infertility is added on PCOS status. Conclusion: The quality of life is lowest among infertile PCOS women. Both PCOS and infertility as individual factors may have negative impact in quality of life of reproductive age women. abstract_id: PUBMED:30123008 Measures of health-related quality of life in PCOS women: a systematic review. Introduction: Polycystic ovary syndrome (PCOS) is associated with biochemical and hormonal disturbance and adverse cosmetic, reproductive, metabolic, and psychological consequences, resulting in reduced health-related quality of life (HRQoL). Various generic and specific questionnaires have been used for assessing different dimensions of HRQoL in PCOS women. The purpose of this systematic review was to identify those general and specific instruments and to determine the factors that affect HRQoL in PCOS women. Materials And Methods: The research strategy involved general and specific terms in relation to PCOS women and their QoL. A review was performed on studies that were published between 1945 to 2017 and that were indexed in MEDLINE, ISI Web of Science, and Scopus. A narrative synthesis of the data was provided. Results: In total, 52 studies (9 qualitative and 43 quantitative) were included in the review. The analysis indicated that 3 specific and 5 general instruments were used to measure the QoL in PCOS women. Of these, the 36-Item Short Form Health Survey (SF-36) and the Polycystic Ovary Syndrome Health-Related Quality of Life Questionnaire (PCOSQ) were used most frequently. All studies assessed different aspects of QoL in PCOS women and found that PCOS had negative effects on QoL in this population. Conclusion: The PCOSQ and the SF-36 were used most frequently for the assessment of QoL in PCOS women. Perhaps using either a specific questionnaire solely or a specific questionnaire in conjunction with a generic measure would be more appropriate when measuring QoL in PCOS women. However, both questionnaires showed that they are able to capture different aspects of QoL in PCOS women and to identify areas that can help to improve QoL in these women. abstract_id: PUBMED:38146573 Knowledge, Attitude, and Practice of Lifestyle Modifications Among Saudi Women Diagnosed With Polycystic Ovary Syndrome (PCOS). Background Polycystic ovary syndrome (PCOS) is a common endocrine disorder in women, often associated with metabolic issues like obesity and insulin resistance. Lifestyle changes, including weight loss, healthy eating, and regular exercise, are recommended for PCOS management. Studies have explored women's perspectives on these changes, revealing misconceptions and adherence challenges. Recognizing the importance of individualized interventions, particularly addressing knowledge gaps, is vital for improving the quality of life for women with PCOS, especially in cultural contexts like Saudi Arabia. Methodology This was a cross-sectional study conducted in Saudi Arabia from May to August 2023 including PCOS patients. Data was collected through questionnaires and analyzed using IBM SPSS 29 (IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp). This study was conducted in accordance with the ethical guidelines and principles outlined by the Scientific Research Ethics Committee of Taif University (no.44-359). All participants provided informed consent, and the study protocols, including data collection, analysis, and publication, adhered to the relevant ethical standards. Results Our study included a majority aged 18-29 (27.4%), married individuals (55.6%) and those with a Bachelor's degree (72%). About 46.9% were medically diagnosed with PCOS. A notable proportion (70.2%) reported no family history of PCOS. Doctors were the primary information source (40.7%). Knowledge about PCOS was generally high, with correct recognition of various PCOS characteristics and treatment options. Attitudes were positive, especially among medically diagnosed individuals, and 91% believed weight reduction could improve PCOS symptoms. Lifestyle modification knowledge, attitude, and practices showed significant associations with demographic factors like age, place of residence, education, marital status, working in healthcare sectors, and PCOS diagnosis status. Conclusions Knowledge about PCOS is generally high among women with positive attitudes toward its management through lifestyle modifications. Women generally show positive practices of lifestyle modifications in PCOS, and they are associated with sociodemographic features. abstract_id: PUBMED:34684438 Effectiveness of a 6-Month Lifestyle Intervention on Diet, Physical Activity, Quality of Life, and Markers of Cardiometabolic Health in Women with PCOS and Obesity and Non-PCOS Obese Controls: One Size Fits All? Little is known about the difference in effectiveness of lifestyle intervention between women with PCOS and non-PCOS women. In a post hoc longitudinal analysis of a randomized, controlled trial, we aimed to investigate whether infertile women with PCOS and obesity (N = 87) responded differently to a 6-month lifestyle intervention program than infertile non-PCOS obese controls (N = 172). We evaluated several aspects of the intervention such as changes in diet, physical activity, and dropout rate, as well as the effect on weight, quality of life (QoL), and cardiometabolic outcomes. Multilevel analyses were used, and analyses were adjusted for baseline characteristics such as age, education, and smoking. Although BMI in both groups significantly decreased at 3 months and 6 months, there were no significant differences between the groups at 3 months (adjusted B: -0.3, 95% CI: -0.9 to 0.3, p = 0.35) and 6 months (adjusted B: 0.5, 95% CI: -0.4 to 1.4, p = 0.29). Women with PCOS and non-PCOS women had similar compliance with the lifestyle intervention in terms of actual change in diet and physical activity. Mental QoL scores were not different at either 3 or 6 months. Physical QoL scores were lower in women with PCOS compared with non-PCOS women at 3 months (adjusted B: -2.4, 95% CI: -4.8 to -0.06, p = 0.045) but not at 6 months. Cardiometabolic parameters did not differ between the groups. Our results showed that infertile women with PCOS and obesity and non-PCOS obese controls responded largely similarly to our lifestyle intervention and achieved the same level of improvement in markers of cardiometabolic health. abstract_id: PUBMED:38485896 Psychological symptoms and brain activity alterations in women with PCOS and their relation to the reduced quality of life: a narrative review. Background: Polycystic ovary syndrome (PCOS) is the most common feminine endocrine disorder, characterized by androgen excess, ovulatory dysfunction, and polycystic ovarian morphology. The negative impact of symptoms on the quality of life (QoL) of patients is still not clear. Purpose: The present review aimed at studying the impact of the symptoms, the psychological symptoms, and brain alterations in women with PCOS. Methods: A systematic search was undertaken for studies that assessed the impact of PCOS symptoms on QoL, psychological symptoms, and brain alterations in PCOS patients. Results: Most of the information about QoL came from psychometric studies, which used culture-based questionnaires. Alterations of sleep quality, body image, and mood disorders can negatively affect the QoL of the patients. Sexual satisfaction and desire were affected by PCOS. Brain imaging studies showed functional alterations that are associated with impairments of visuospatial working memory, episodic and verbal memory, attention, and executive function. Conclusions: Several factors can negatively influence the quality of life of the patients, and they are directly related to hyperandrogenism and the risk of infertility. In particular, obesity, hirsutism, acne, and the fear of infertility can have a direct impact on self-esteem and sexual function. Metabolic and psychiatric comorbidities, such as mood, anxiety, and eating disorders, can affect the well-being of the patients. Moreover, specific cognitive alterations, such as impairments in attention and memory, can limit PCOS patients in a series of aspects of daily life. abstract_id: PUBMED:38389707 PCOS stratification for precision diagnostics and treatment. Globally, polycystic ovarian syndrome (PCOS) affects approximately 10% of fertile women, leading to great health and economic burden. PCOS is a heterogenous illness that can cause infertility, irregular menstrual cycles, acne, and hirsutism, among other symptoms. The clinical diagnosis is primarily a diagnosis of exclusion if one or more of the three primary symptoms, namely, oligo- or anovulation, hyperandrogenism, and polycystic ovarian morphology, are present. Obesity and PCOS are often coexisting disorders that may be bidirectionally causally related. Phenotypic heterogeneity throughout the reproductive lifespan, such as the overlap of PCOS symptoms with regular fluctuations in a woman's menstrual cycle and metabolism during the menarche and menopausal transition, further complicates diagnosis. PCOS etiology is mostly unknown and complex, likely due to the fact that it is a group of disorders with overlapping metabolic and reproductive problems. Evidence-based, common, standardized guidelines for PCOS diagnosis and treatment are urgently needed. Genomics and clinical data from populations across diverse ages and ethnicities are urgently needed to build efficient machine learning models for the stratification of PCOS. PCOS subtype-specific strategies for early screening, an accurate diagnosis, and management throughout life will optimize healthcare resources and reduce unnecessary testing. This will pave the way for women to be able to take the best possible care of their own health using the latest clinical expertise combined with their unique needs and preferences. abstract_id: PUBMED:30599799 The effects of treatment with liraglutide on quality of life and depression in young obese women with PCOS and controls. Polycystic ovary syndrome (PCOS) is associated with reduced quality of life (QoL), though the role of associated obesity is unclear. In this study we examined the effects of six months treatment with liraglutide, 1.8 mg od, on obesity, depression and QoL in young women with PCOS and obesity compared to age- and weight-matched controls. In a cross-sectional study, 36 women were recruited (19 PCOS, 17 controls), age 33.9 ± 6.7 vs. 33.5 ± 7.1 yr, and weight 102.1 ± 17.1 vs. 100.4 ± 15.1 kg, respectively. PCOS was diagnosed according to the Rotterdam criteria. Depression was measured using the Centre for Epidemiologic Studies Depression Scale (CES-D). QoL was measured using the World Health Organization QoL questionnaire (WHOQOL-BREF). At baseline there was no difference in QoL or CES-D scores between the two groups. At six months, weight was reduced by 3.0 ± 4.2 kg, p = .01, in the PCOS group and 3.8 ± 3.4 kg, p = .001, in controls. Psychological health improved in the PCOS group (percentage change 11.3%, p &lt; .02). Combining the two groups revealed significant improvement (p &lt; .05) in physical (82.6 ± 11.2 vs. 78.9 ± 13.6), psychological (62.4 ± 16.5 vs. 57.5 ± 16.4) and social health (76.6 ± 15.3 vs. 71 ± 16.8) components of the WHOQOL-BREF at six months. Weight loss is associated with an improvement in QoL; and when matched for age and obesity, PCOS was not independently associated with reduced QoL or depression. Answer: No, South Asian women with PCOS do not have poorer health-related quality of life (HRQoL) than Caucasian women with PCOS. A comparative cross-sectional study using the Polycystic Ovary Syndrome Questionnaire (PCOSQ) and the Short Form-36 (SF-36) found that although HRQoL scores for women with PCOS were lower than normative data for both groups, South Asian women with PCOS did not have poorer HRQoL than their Caucasian counterparts. The mean scores for both the SF-36 and PCOSQ were broadly the same for both Asian and Caucasian women. The study concluded that the HRQoL differences that exist between South Asian and Caucasian women in the general population do not appear to be replicated amongst women with PCOS, and that PCOS reduces HRQoL to broadly similar levels, regardless of ethnicity and differences in the normative baseline HRQoL of these groups (PUBMED:21171983).
Instruction: Pathophysiology of Meniere's syndrome: are symptoms caused by endolymphatic hydrops? Abstracts: abstract_id: PUBMED:15699723 Pathophysiology of Meniere's syndrome: are symptoms caused by endolymphatic hydrops? Background: The association of Meniere's syndrome with endolymphatic hydrops has led to the formation of a central hypothesis: many possible etiologic factors lead to hydrops, and hydrops in turn generates the symptoms. However, this hypothesis of hydrops as being the final common pathway has not been proven conclusively. Specific Aim: To examine human temporal bones with respect to the role of hydrops in causing symptoms in Meniere's syndrome. If the central hypothesis were true, every case of Meniere's syndrome should have hydrops and every case of hydrops should show the typical symptoms. Methods: Review of archival temporal bone cases with a clinical diagnosis of Meniere's syndrome (28 cases) or a histopathologic diagnosis of hydrops (79 cases). Results: All 28 cases with classical symptoms of Meniere's syndrome showed hydrops in at least one ear. However, the reverse was not true. There were 9 cases with idiopathic hydrops and 10 cases with secondary hydrops, but the patients did not exhibit the classic symptoms of Meniere's syndrome. A review of the literature revealed cases with asymptomatic hydrops (similar to the current study), as well as cases where symptoms of Meniere's syndrome existed during life but no hydrops was observed on histology. We also review recent experimental data where obstruction of the endolymphatic duct in guinea pigs resulted in cytochemical abnormalities within fibrocytes of the spiral ligament before development of hydrops. This result is consistent with the hypothesis that hydrops resulted from disordered fluid homeostasis caused by disruption of regulatory elements within the spiral ligament. Conclusion: Endolymphatic hydrops should be considered as a histologic marker for Meniere's syndrome rather than being directly responsible for its symptoms. abstract_id: PUBMED:29389274 Electrocochleography in Retrosigmoid Vestibular Nerve Section for Intractable Vertigo Caused by Meniere's Disease. Interest in electrocochleography has increased in recent years because of the discovery of an elevated summating potential to action potential amplitude ratio (SP/AP ratio) in patients with endolymphatic hydrops caused by Meniere's disease or perilymph fistula. It was the purpose of this investigation to determine whether the intraoperative SP/AP ratio will decrease after vestibular nerve section in patients with intractable Meniere's disease. Fourteen patients with medically intractable classic Meniere's disease underwent retrosigmoid vestibular nerve section. Intraoperative transtympanic electrocochleography was performed with alternating click stimuli presented at 95 dB HL. In all patients the SP/AP ratio was recorded before the skin incision ("baseline" condition) and after the dura was closed ("closing" condition). Statistical analysis was applied to the recorded data. In 11 (79%) patients, the SP/AP ratio was found to be elevated above 0.30 in the baseline state. In 13 (93%) patients, the SP/AP ratio decreased more than 25% after the nerve was sectioned. These results were highly statistically significant (p &lt; 0.001). We conclude that the SP/AP ratio does decrease in patients with Meniere's disease after undergoing retrosigmoid vestibular nerve section and offer a possible explanation. (Otolaryngol Head Neck Surg 1997;116:593-6.). abstract_id: PUBMED:15108502 Recurrent vertigo caused by perilymphatic fistula N/A abstract_id: PUBMED:34136526 A Synchrotron and Micro-CT Study of the Human Endolymphatic Duct System: Is Meniere's Disease Caused by an Acute Endolymph Backflow? Background: The etiology of Meniere's disease (MD) and endolymphatic hydrops believed to underlie its symptoms remain unknown. One reason may be the exceptional complexity of the human inner ear, its vulnerability, and surrounding hard bone. The vestibular organ contains an endolymphatic duct system (EDS) bridging the different fluid reservoirs. It may be essential for monitoring hydraulic equilibrium, and a dysregulation may result in distension of the fluid spaces or endolymphatic hydrops. Material and Methods: We studied the EDS using high-resolution synchrotron phase contrast non-invasive imaging (SR-PCI), and micro-computed tomography (micro-CT). Ten fresh human temporal bones underwent SR-PCI. One bone underwent micro-CT after fixation and staining with Lugol's iodine solution (I2KI) to increase tissue resolution. Data were processed using volume-rendering software to create 3D reconstructions allowing orthogonal sectioning, cropping, and tissue segmentation. Results: Combined imaging techniques with segmentation and tissue modeling demonstrated the 3D anatomy of the human saccule, utricle, endolymphatic duct, and sac together with connecting pathways. The utricular duct (UD) and utriculo-endolymphatic valve (UEV or Bast's valve) were demonstrated three-dimensionally for the first time. The reunion duct was displayed with micro-CT. It may serve as a safety valve to maintain cochlear endolymph homeostasis under certain conditions. Discussion: The thin reunion duct seems to play a minor role in the exchange of endolymph between the cochlea and vestibule under normal conditions. The saccule wall appears highly flexible, which may explain occult hydrops occasionally preceding symptoms in MD on magnetic resonance imaging (MRI). The design of the UEV and connecting ducts suggests that there is a reciprocal exchange of fluid among the utricle, semicircular canals, and the EDS. Based on the anatomic framework and previous experimental data, we speculate that precipitous vestibular symptoms in MD arise from a sudden increase in endolymph pressure caused by an uncontrolled endolymphatic sac secretion. A rapid rise in UD pressure, mediated along the fairly wide UEV, may underlie the acute vertigo attack, refuting the rupture/K+-intoxication theory. abstract_id: PUBMED:13300564 Diuretics in therapy of Meniere's disease caused by labyrinthine hydrops N/A abstract_id: PUBMED:9745178 Endolymphatic subarachnoid shunt failure caused by Silastic allergy. During the period from 1964 through 1994, the endolymphatic subarachnoid shunt operation was initially successful in eliminating endolymphatic hydrops and the symptoms and findings it produces in 76% of 645 ears of patients with Meniere's disease. After initial success, lasting from five weeks to nine years, endolymphatic hydrops suddenly returned due to obstruction of the Silastic shunt tube in 11% of patients. In these cases, prompt revision can often restore an initial good result. Histologic and immunologic examination of the material surrounding and occluding the tubes showed an allergic response to the Silastic material in most instances. Efforts to eliminate this cause of failure using a tube of new design and different plastic material are described. abstract_id: PUBMED:9639481 Ménière disease caused by an anomalous vein of the vestibular aqueduct. The precise cause of Ménière disease remains unclear. Multiple causes have been proposed with most experimental evidence pointing to impaired fluid resorption by the endolymphatic duct and sac as the final common pathway in development of hydrops. We report a unique case of Ménière disease secondary to compression of the endolymphatic duct and sac by an anomalous vein of the vestibular aqueduct. The resulting mechanical obstruction led to excessive accumulation of endolymph with clinical signs of Ménière disease. We review the literature and discuss proposed pathogenesis of disease. The finding of this anomalous vein provides further evidence that anatomical obstruction of the endolymphatic duct and sac may lead to Ménière-like symptoms. This unique example of an anatomical variant offers additional insight into the pathophysiology of endolymphatic hydrops. abstract_id: PUBMED:12438861 Ossicular fixation caused by bone dust after saccus decompression surgery. Hypothesis: To highlight the clinical and histologic features of ossicular fixation produced by bone dust during mastoid surgery. Background: Bone dust deposition in the middle ear may occur during temporal bone surgery, but the ossicular fixation that may result from it remains underreported. Methods: A case is reported of delayed conductive hearing loss after saccus decompression surgery resulting from ossicular fixation by bone dust deposition. Results: Six months after the initial surgery, the patient experienced a 45 dB conductive hearing loss. The findings on second-look tympanotomy were stapes fixation and round window niche occlusion by newly formed tissue. Histologic examination of the latter showed fibrosis and new bone formation in reaction to particles of bone dust. New bone formation is a frequent concomitant of fibrosis in the middle ear and is probably not a growth from the bone dust. Conclusion: The use of high-speed drilling of bone in temporal bone surgery carries the risk of bone dust particle deposition in the middle ear. Measures are discussed that can be used to prevent this complication. abstract_id: PUBMED:1086664 Pathophysiology of endolymphatic hydrops. Endolymphatic hydrops of the nonprogressive type occurs in response to a single traumatic or toxic insult of limited duration and although it may result in permanent deficits in sensorineural function, there is total subsidence of vestibular symptoms. Endolymphatic hydrops of the progressive type, on the other hand, appears to be the result of permanent impairment of endolymph resorption and is caused principally by disorders of the endolymphatic sac. It occurs in Menière's disease, syphilitic labyrinthitis and the delayed hydrops syndrome. In addition to deafness of varying extent, it is characterized by episodic vertigo and sometimes by Hennebert's sign. Histological studies suggest that the acute vertiginous episodes are caused by potassium intoxication following ruptures of the membranous labyrinth and that Hennerbert's sign is caused by vestibular fibrosis. abstract_id: PUBMED:7857019 Degeneration of vestibular sensory cells caused by ablation of the vestibular aqueduct in the gerbil ear. The vestibular aqueduct of the gerbil has a unique anatomic feature that makes it possible to selectively obliterate the endolymphatic sac with or without interfering with its venous drainage. In animals in which only the endolymphatic sac was ablated, endolymphatic hydrops was slight in the cochlea and was absent in the vestibular labyrinth. The cochlear and vestibular sensory cells were normal. In animals in which both the endolymphatic duct and the vein were obliterated, hydrops was slight, with the exception of a few cochleas that showed moderate hydrops. The sensory cells of the posterior canal cristae had degenerated in all specimens, while varied pathologic changes in cochlear and vestibular sensory cells were present in some specimens. These results suggest that hydrops is primarily due to blockage of the endolymphatic duct and sac and that degeneration of sensory cells occurs when blood flow in the vestibular aqueduct is impeded. Pathologic changes in the endolymphatic sac, including the vascular plexus at the endolymphatic sac, may play an important role in the production of endolymphatic hydrops and vestibular symptoms in Meniere's disease. Answer: The pathophysiology of Meniere's syndrome and its association with endolymphatic hydrops has been a subject of investigation, but the relationship between the two is not conclusively established. The central hypothesis suggests that various etiologic factors lead to endolymphatic hydrops, which in turn generates the symptoms of Meniere's syndrome. However, this hypothesis has not been proven conclusively. A study examining human temporal bones found that while all cases with classical symptoms of Meniere's syndrome showed hydrops in at least one ear, not all cases of hydrops exhibited the classic symptoms of Meniere's syndrome. This indicates that endolymphatic hydrops should be considered a histologic marker for Meniere's syndrome rather than being directly responsible for its symptoms (PUBMED:15699723). Additionally, electrocochleography studies have shown that the summating potential to action potential amplitude ratio (SP/AP ratio), which is elevated in patients with endolymphatic hydrops caused by Meniere's disease, decreases after vestibular nerve section in patients with intractable Meniere's disease. This suggests a relationship between the SP/AP ratio and the symptoms of Meniere's disease, although the exact mechanism is not fully understood (PUBMED:29389274). Other studies have explored the anatomy of the human endolymphatic duct system (EDS) and speculated that precipitous vestibular symptoms in Meniere's disease may arise from a sudden increase in endolymph pressure caused by uncontrolled endolymphatic sac secretion, rather than from a rupture/K+-intoxication theory (PUBMED:34136526). In conclusion, while endolymphatic hydrops is closely associated with Meniere's syndrome, it may not be the direct cause of the symptoms. The symptoms could be a result of a complex interplay between hydrops and other factors such as fluid homeostasis disruption, endolymph pressure changes, and possibly immune responses (PUBMED:15699723; PUBMED:29389274; PUBMED:34136526).
Instruction: Should your GP be an osteopath? Abstracts: abstract_id: PUBMED:36473174 Humanities and GP training: the perception of GP trainers towards using medical humanities to teach GP trainees primary care. Purpose: The benefits of medical humanities in medical education have been extensively cited. The aim of this research was to explore the perspectives of UK GP trainers towards using medical humanities to teach GP trainees in primary care. Method: Phenomenological, qualitative research designed was used. Semi-structured interviews were conducted with eight GP trainers from across the UK, between April and May 2020. Poems were used as discussion prompts for those participants who lacked experience using medical humanities to teach GP trainees. Results: Emerging themes were that all participating GP trainers recognised the benefit of medical humanities to GP training, yet only 50% of them were actually using medical humanities in their teaching. An overburdened GP curriculum, a lack of funding and a lack of clinically relevant art and literature resources were identified as barriers for the inclusion of medical humanities into GP training. Furthermore, GP trainers identified the need for peer networks and medical humanities conferences to encourage the use of medical humanities in GP training. Conclusion: This study provides the perspective of UK GP trainers towards using medical humanities in teaching GP trainees in primary care. GP trainers identified sustainable funding to develop a resource bank of art and literature, peer network events and medical humanities conferences as key interventions to facilitate medical humanities teaching in GP training. Recommendations for future research include robust studies focusing on the effectiveness of various techniques used to integrate medical humanities into GP training, such as poetry. abstract_id: PUBMED:38109181 Cultural competency of GP trainees and GP trainers: a cross-sectional survey study. Objective: To assess the cultural competence (CC) of GP trainees and GP trainers.Design and setting: A cross-sectional survey study was conducted at the GP Training Institute of Amsterdam UMC. Subjects: We included 92 GP trainees and 186 GP trainers. Main Outcome Measures: We measured the three domains of cultural competency: 1) knowledge, 2) culturally competent attitudes and 3) culturally competent skills. Regression models were used to identify factors associated with levels of CC. Participants rated their self-perceived CC at the beginning and end of the survey, and the correlation between self-perceived and measured CC was assessed. Results: Approximately 94% of the GP trainees and 81% of the GP trainers scored low on knowledge; 45% and 42%, respectively, scored low on culturally competent attitudes. The level of culturally competent skills was moderate (54.3%) or low (48.4%) for most GP trainees and GP trainers. The year of residency and the GP training institute were significantly associated with one or more (sub-)domains of CC in GP trainees. Having &gt;10% migrant patients and experience as a GP trainer were positively associated with one or more (sub-) domains of cultural competence in GP trainers. The correlation between measured and self-perceived CC was positive overall but very weak (Spearman correlation coefficient ranging from -0.1-0.3). Conclusion: The level of cultural competence was low in both groups, especially in the knowledge scores. Cultural competence increased with experience and exposure to an ethnically diverse patient population. Our study highlights the need for cultural competence training in the GP training curricula. abstract_id: PUBMED:27878667 Eligibility for free GP care and the utilisation of GP services by children in Ireland. The majority of the Irish population pay the full out-of-pocket price of a GP visit, with only those on low incomes exempt. While there is an extensive literature analysing the impact of the Irish system of eligibility for free GP care on GP visiting rates among adults, there is a lack of evidence for children. Given the importance of socio-economic health inequalities in shaping the future outcomes of children, it is important to analyse the extent to which the current system of eligibility leads to inequities in access to GP services among Irish children. In addition, some private health insurance plans have started to offer cover for GP expenses, which adds an additional layer of complexity to the existing system of eligibility, and to date, this has not been studied. Using a large, nationally-representative data-set covering two cohorts of Irish children (9-month olds and 9-year olds), we examine the role of eligibility for free GP care in determining GP visiting rates among children. As with the adult population, the results show that, even with controls for child health, and parental and family characteristics, eligibility for free GP care is a significant determinant of GP utilisation among Irish children. abstract_id: PUBMED:33863583 Clinical empathy in GP-training: Experiences and needs among Dutch GP-trainees. "Empathy as an element of personal growth". Objective: Clinical empathy has been described as a key component of effective person-centeredness in patient-physician communication. Yet little is known about general practitioner (GP) trainees' experiences and opinions regarding clinical empathy, empathy-education and the development of empathic skills. This study aimed to explore trainees' experiences with clinical empathy during GP training. Methods: This study used focus group interviews. GP trainees at two Dutch universities were approached by e-mail. Focus groups were conducted between April and November 2018. Six focus groups were conducted: two with starting trainees, two with trainees at the end of their first year and two with trainees at the end of their 3 years' training. Two experienced qualitative researchers analyzed the focus groups. During the thematic analysis the differences and similarities between the various stages of education were taken into account and a framework for the identified themes and subthemes was developed. Results: Thirty-five GP trainees took part. Four main themes could be identified. Starting trainees experienced frictions regarding the influence of personal affective reactions on their medical competencies. Trainees at the end of their first year indicated that they reached a balance between empathic involvement and their responsibility to carry out relevant medical tasks, such as following GP guidelines. Trainees at the end of their three years' training recognized the mutual relationship between the development of the behavioral part of clinical empathy and personal growth. All trainees stated that their needs concerning education changed during their GP training and proposed changes to the curriculum. Conclusions: GP trainees face various obstacles in developing empathic skills and behavior. Particularly they mention handling personal affective reactions. Trainees express a clear wish for clinical empathy, in its theoretical as well as its skill and emotional aspects, to play a central role in the curriculum. Practice Implications: More explicit attention to be paid to empathy by embedding theoretical education, explicit attention to skill training and assessment of empathic behavior by patients and supervisors. abstract_id: PUBMED:31819804 A pilot study of the duration of GP consultations in Ireland. Background: General practitioner (GP)-led primary care is the linchpin of health care in Ireland. Reflecting international trends, there are increasing concerns about the sustainability of the current Irish GP service due to an increasing workload. Objective data on the duration of GP consultations are currently not available in Ireland. The objective of this pilot study is to demonstrate how the duration of consultations can be collected, using readily available administrative data. Methods: Software was developed to extract the duration of GP consultations using the opening and closing of electronic patient records associated with a GP consultation. GP practices (N = 3) comprising 15 GPs were recruited from a university-affiliated research network. A retrospective analysis of GP consultations with patients with diabetes for the 9 years between 2010 and 2018 was used to assess the feasibility of using this system to measure the duration of consultations. Results: The average duration of a consultation was 14.1 min for the 9 years spanning 2010 to 2018. Patients had an average time between consultations of 99 days. Conclusions: This pilot study confirms that an administrative data set can be utilised at negligible cost to monitor GP practice consultation workload over time. Our preliminary pilot data show that GP consultation durations among participating practices were longer than the 5-11.7 min reported in the UK and show an increase over the period. Clearly, a larger number of practices and patients are required to substantiate this finding. abstract_id: PUBMED:26854141 Mini-P-gp and P-gp Co-Expression in Brown Trout Erythrocytes: A Prospective Blood Biomarker of Aquatic Pollution. In aquatic organisms, such as fish, blood is continually exposed to aquatic contaminants. Multidrug Resistance (MDR) proteins are ubiquitous detoxification membrane pumps, which recognize various xenobiotics. Moreover, their expression is induced by a large class of drugs and pollutants. We have highlighted the co-expression of a mini P-gp of 75 kDa and a P-gp of 140 kDa in the primary culture of brown trout erythrocytes and in the erythrocytes of wild brown trout collected from three rivers in the Auvergne region of France. In vitro experiments showed that benzo[a]pyrene, a highly toxic pollutant model, induced the co-expression of mini-P-gp and P-gp in trout erythrocytes in a dose-dependent manner and relay type response. Similarly, in the erythrocytes of wild brown trout collected from rivers contaminated by a mixture of PAH and other multi-residues of pesticides, mini-P-gp and P-gp were able to modulate their expression, according to the nature of the pollutants. The differential and complementary responses of mini-P-gp and P-gp in trout erythrocytes suggest the existence in blood cells of a real protective network against xenobiotics/drugs. This property could be exploited to develop a blood biomarker of river pollution. abstract_id: PUBMED:32862790 GP appraisal: an evaluation of generational differences on the utility of GP appraisal. Background: A number of studies have previously been published on the benefits of GP appraisal. Ours is the first study that compares the views of doctors registered before and after the introduction of appraisals on various core elements of the appraisal process. Aim: This study aimed to explore potential differences, between GPs qualifying before and those after the introduction of NHS appraisals, on the utility of the appraisal process. Additionally, to discover the perceived impact of appraisals on interaction with colleagues and patients. Design And Setting: Suffolk appraisers were recruited to distribute a paper questionnaire, in two sections, for each appraisal undertaken over a 12-month period. The first part of the questionnaire related to that specific appraisal. The second part, seven questions using Likert scales and free-text comments, asked about the appraisal process in general. The feedback from the second part forms the data for this study. Results: Overall, doctors tended to agree that appraisals had a beneficial impact in the core areas investigated. However, there was a significant difference between generations: those registered before 1998 were less likely to find NHS appraisals beneficial. Both groups reported that preparation took up too much time. Opinions regarding the impact of appraisals on interaction were evenly divided amongst appraisees, with the older cohort more likely to report negatively. Conclusions: There is a generational difference amongst GP in the perceived utility of NHS appraisal. Current plans to redesign the process to make appraisals less onerous will be welcomed by both groups. abstract_id: PUBMED:38465617 GP trainees as teachers: a rapid review of the barriers, facilitators and outcomes. Background: There is a workforce crisis in General Practice (GP) within the United Kingdom (UK). High-quality clinical placement experiences in GP influence medical students' interest and likelihood to enter this speciality. GP trainees often express a desire to teach, yet teaching does not feature significantly within their current practice. This study aims to explore outcomes, barriers, and facilitators of GP trainees teaching medical students through a rapid review of published literature. Methods: MEDLINE, EMBASE, PsychINFO, Web of Science were searched for articles relating to GP trainees teaching. Studies conducted in the UK and Australia, between January 2000 and October 2022 were included. The Medical Education Research Study Quality Index and the Critical Appraisal Skills Programme checklist were used to assess quality. Results: Twenty-seven publications, 11 qualitative, six quantitative, three mixed-methods and seven opinion pieces, were identified. Many studies have methodological limitations. Empirical studies show that there are benefits of near-peer teaching, but these are limited by trainees' capacity and capability to teach, and GPs' confidence in trainees' teaching. The culture within the practice influenced whether trainees were seen solely as learners or also as teachers for students. Discussion: When GP trainees teach there are positive outcomes for medical students and trainees. Teaching was considered an activity that trainees did in 'addition to' rather than 'part' of' their training. Appropriate teaching opportunities, and relevant training, were key to trainees' competence and confidence in teaching. Resources and recognition and a strong teaching culture are needed to support GP trainees to teach. abstract_id: PUBMED:25000631 Osteopath - an independent health profession? N/A abstract_id: PUBMED:3016094 Immunity to varicella-zoster viral glycoproteins, gp I (gp 90/58) and gp III (gp 118), and to a nonglycosylated protein, p 170. Humoral and cellular immunity against two major glycoproteins (gp) of varicella-zoster virus (VZV), gp I (gp 90/58) and gp III (gp 118), and against a nonglycosylated phosphoprotein (p 170) was demonstrated in human subjects. Primary VZV infection was accompanied by the development of IgG to gp I (mean titer 1:200), gp III (mean titer 1:132), and p 170 (mean titer 1:331). Increased IgG antibody production to each of the VZV proteins occurred during recurrent VZV infection with mean titers to gp I of 1:29512, to gp III of 1:15848, and to p 170 of 1:15848. Persistent high titers to gp III (mean titer 1:891) and to p 170 (mean titer 1:2238) were observed in 75% and 88% of VZV-immune subjects, respectively. T lymphocytes which proliferated on stimulation with gp I, gp III, and p 170 developed with primary VZV infection. VZV-immune subjects had mean transformation indices of 4.2 +/- 0.70 SE to gp I, 4.7 +/- 1 SE to gp III, and 3 +/- 0.39 SE to p 170. Among individual subjects, humoral and cellular immunity was not always detected to all three of the VZV proteins. Resolution of primary VZV infection and maintenance of VZV latency did not require a host response to each of these major viral proteins. Answer: The provided abstracts do not directly address the question of whether a General Practitioner (GP) should be an osteopath. Osteopathy is a type of alternative medicine that emphasizes physical manipulation of muscle tissue and bones. In contrast, a GP, also known as a family physician, is a medical doctor who is trained to provide primary and continuing care for patients of all ages, dealing with a wide range of medical issues. None of the abstracts discuss the qualifications or the role of osteopaths in primary care or compare the practice of osteopathy with that of general medicine. Therefore, based on the provided abstracts, it is not possible to make a recommendation regarding whether a GP should be an osteopath. It is important to note that the decision of whether to see an osteopath or a GP for primary care should be based on individual health needs, preferences, and the specific medical services required. In some healthcare systems, osteopaths are recognized as primary care providers, while in others, they may not have the same scope of practice as medical doctors. Patients should consider the qualifications, regulatory framework, and the type of care they are seeking when choosing a healthcare provider.
Instruction: Does the reproductive cycle influence sleep patterns in women with sleep complaints? Abstracts: abstract_id: PUBMED:20001564 Does the reproductive cycle influence sleep patterns in women with sleep complaints? Objective: The aim of the present study was to investigate how prominent gynecological factors (regular/irregular menstrual cycle, premenstrual complaints, or menopause) can influence both subjective and objective sleep data. Methods: A total of 931 women who sought clinical assistance because of a sleep complaint were included in the investigation. All subjects filled out a sleep and gynecological questionnaire prior to undergoing a polysomnography (PSG) recording. Results: Premenopausal women with an irregular menstrual cycle were more likely to report sleep difficulties (related to falling sleep and insomnia symptoms) than those with a regular cycle (odds ratio 2.1; 95% confidence interval 1.2-3.5). These women showed increased light sleep stages and awakenings. Premenstrual complaints were reported by a higher percentage of women with periodic leg movement, lower time in saturation under 90%, and higher sleep efficiency. A lower percentage of women who took hormonal contraceptives reported snoring, had fewer arousals and longer REM latency compared to those not taking hormonal contraceptives. Menopausal women with hot flushes had more restless leg complaints. Awake time was shorter in hormone therapy users compared with non-users. Women with menopause are more likely to have an apnea-hypoapnea index greater than 5/h. Conclusion: The present findings suggest that gynecological status is associated with subjective sleep quality and objective sleep parameters in women with sleep complaints. abstract_id: PUBMED:38022155 The Menstrual Cycle's Influence on Sleep Duration and Cardiovascular Health: A Comprehensive Review. The menstrual cycle, a fundamental biological process in women, extends its influence beyond reproduction, impacting sleep duration and cardiovascular health. This comprehensive review delves into the intricate connections that bind these three vital aspects of women's health. Beginning with thoroughly exploring the menstrual cycle, we uncover its phases and the dynamic hormonal fluctuations that underlie each stage. We pay special attention to estrogen and progesterone, the primary sex hormones orchestrating the menstrual cycle. With their rhythmic rise and fall, these hormones orchestrate events, affecting sleep duration, sleep patterns, and various indicators of cardiovascular well-being. The review examines how the menstrual cycle influences sleep patterns, exploring the nuanced changes in sleep duration observed throughout menstrual phases. We elucidate the contributing factors, encompassing hormonal fluctuations, the impact of pain and discomfort, and the significance of emotional and psychological factors. All of these elements collectively contribute to variations in sleep quality. Shifting our focus to the cardiovascular system, we investigate the bidirectional relationships between sleep disturbances and cardiovascular conditions, emphasizing the need to address sleep-related issues in the context of cardiovascular risk. The menstrual cycle is analyzed as a pivotal mediator in these intricate connections, exploring how hormonal fluctuations across menstrual phases can influence sleep patterns and cardiovascular health. This analysis provides valuable insights into the complex causality web. As clinical implications emerge, we emphasize the importance of tailoring healthcare strategies for individuals with irregular menstrual cycles. We explore potential interventions, from personalized care and hormone management to lifestyle adjustments, to improve sleep and cardiovascular well-being. In conclusion, this comprehensive review sheds light on the interplay between the menstrual cycle, sleep duration, and cardiovascular health. It underscores the urgent necessity for personalized healthcare approaches and preventive strategies, empowering women to navigate these intricate relationships. Ultimately, through a nuanced understanding of these interactions, we can work towards enhancing women's overall well-being and reducing cardiovascular risk within the context of menstrual cycle-related influences. abstract_id: PUBMED:26821442 Unique Aspects of Sleep in Women. Sleep in women differs in many respects from that of men. In general, women appear to report a greater need for sleep and more subjective complaints of non-refreshing sleep than men. Sleep in women is affected at least partially by hormonal factors, with women typically suffering from sleep disturbance in connection with the menstrual cycle, pregnancy, and menopause Menstrual cycles are associated with prominent changes in reproductive hormones that may influence sleep. Sleep apnea and restless legs syndrome may be aggravated by pregnancy. Women may also develop insomnia during pregnancy, childbirth and menopause. abstract_id: PUBMED:31610482 Sleep, premenstrual mood disorder, and women's health. Women experience increased vulnerability for both mood and sleep disorders, and the female menstrual cycle represents one fundamental mechanism related to risk. This review evaluates recent literature integrating female reproduction, sleep, and mood. For as many as one third of women, sleep is disrupted premenstrually, and sleep disturbances are particularly prevalent in those with premenstrual mood disorders. Variation in sleep patterns, circadian rhythm alterations, and decreased melatonin secretions due to hormonal fluctuations during the premenstrual phase of the menstrual cycle could explain sleep complaints and have been linked to menstrual irregularity. Menstrual irregularity is also independently associated with increased risk of mood complaints and poor sleep. Therefore, there is growing evidence for the interactional relationships between poor sleep, circadian rhythm disruption, and mood in reproductive-age women, although further research relating to specific mechanisms of risk are needed. abstract_id: PUBMED:18774898 The Influence of reproductive status and age on women's sleep. This review paper summarizes presentations from a conference update on the state of knowledge about the extent to which reproductive status and age influence women's sleep. Key issues pertaining to sleep during menstrual cycles, pregnancy and postpartum, perimenopausal transition, and aging are presented. Clinical implications and areas for further research are identified. abstract_id: PUBMED:25688329 Sleep and Women's Health. Sex differences in sleep begin at a very early age and women report poorer sleep quality and have higher risk for insomnia than do men. Sleep may be affected by variation in reproductive hormones, stress, depression, aging, life/role transitions, and other factors. The menstrual cycle is associated with changes in circadian rhythms and sleep architecture. Menstruating women (even without significant menstrual-related complaints) often report poorer sleep quality and greater sleep disturbance during the premenstrual week compared to other times of her menstrual cycle. In addition to these sleep disturbances, women with severe premenstrual syndrome often report more disturbing dreams, sleepiness, fatigue, decreased alertness and concentration during the premenstrual phase. Sleep disturbances are also commonly reported during pregnancy and increase in frequency and duration as the pregnancy progresses. The precipitous decline in hormones and unpredictable sleep patterns of the newborn contribute to and/or exacerbate poor sleep and daytime sleepiness during the early postpartum period. Insomnia is also among the most common health complaints that are reported by perimenopausal women. Women are particularly vulnerable to developing insomnia disorder during these times of reproductive hormonal change. In this review, we present a discussion on the most relevant and recent publications on sleep across the woman's lifespan, including changes in sleep related to menstruation, pregnancy, postpartum, and the menopausal transition. Treatment for sleep disturbances and insomnia disorder and special considerations for treating women will also be discussed. abstract_id: PUBMED:28989323 The relationship of sleep complaints risk factors with sleep phase, quality, and quantity in Japanese workers. Numerous studies have determined that lifestyle factors (smoking, drinking, snacking, etc.) and the bedroom environment can influence sleep. We developed a new sleep scale-the 3-Dimensional Sleep Scale (3DSS)-which measures three elements of sleep: phase, quality, and quantity. The purpose of this study is to determine which risk factors of sleep complaints are associated with these sleep elements. Data were obtained from 366 Japanese day workers (302 men and 64 women). Sleep condition was assessed with the 3DSS, and we also assessed various habits within 2 h of going to bed, including smoking, drinking, snacking, caffeine intake, mobile phone use, and working. We also asked about bedroom environmental conditions (noise, lighting, and temperature and humidity). Multivariate logistic regression analysis using the backward selection method (likelihood ratio) was used, with 3DSS scores as the outcome (i.e., over or under the cutoff). The results showed that smoking was associated with significantly greater odds ratio [2.71 (1.65-4.44)] of disordered sleep phase, while lighting as well as temperature and humidity led to greater odds [3.67 (1.55-8.68), 1.93 (1.20-3.11)] of poor sleep quality. Finally, only noise was significantly related to greater odds [1.98 (1.13-3.46)] of low sleep quantity. These findings indicated the various risk factors of sleep complaints could be associated with different sleep elements. This might help in the effective treatment of sleep complaints. abstract_id: PUBMED:30098748 Menstrual Cycle Effects on Sleep. Subjective and objective sleep changes occur during the menstrual cycle. Poorer sleep quality in the premenstrual phase and menstruation is common in women with premenstrual symptoms or painful menstrual cramps. There is increased sleep spindle activity from follicular to luteal phase, potentially progesterone related. Luteal phase changes also include blunted temperature rhythm amplitude and reduced rapid eye movement sleep. Women with polycystic ovary syndrome should be screened for sleep disordered breathing. Short sleep duration is associated with irregular menstrual cycles, which may impact reproductive health. Menstrual cycle phase and menstrual-related disorders should be considered when assessing women's sleep complaints. abstract_id: PUBMED:32318244 Gender differences in sleep patterns and sleep complaints of elite athletes. Objective: The present study aimed to investigate the gender differences for sleep complaints, patterns and disorders of elite athletes during preparation for the Rio 2016 Olympic Games. Methods: The study included 146 athletes from the Brazilian Olympic Team (male: n=86; 59%; female: n=60; 41%). The assessment of the Olympic athletes' sleep took place in 2015, during the preparation period for the Rio Olympic Games. The athletes underwent a single polysomnography (PSG) evaluation. Sleep specialists evaluated the athletes and asked about their sleep complaints during a clinical consultation. In this evaluation week, the athletes did not take part in any training or competitions. Results: The prevalence of sleep complaints was 53% of the athletes during the medical consultation, the most prevalent being insufficient sleep/waking up tired (32%), followed by snoring (21%) and insomnia (19.2%). In relation to the sleep pattern findings, the men had significantly higher sleep latency and wake after sleep onset than the women (p=0.004 and p=0.002, respectively). The sleep efficiency and sleep stages revealed that men had a lower percentage of sleep efficiency and slow wave sleep than the women (p=0.001 and p=0.05, respectively). Conclusion: Most athletes reported some sleep complaints, with men reporting more sleep complaints than women in the clinical evaluation. The PSG showed that 36% of all athletes had a sleep disorder with a greater reduction in sleep quality in men than in women. abstract_id: PUBMED:25458772 Sleep, sleep disturbance, and fertility in women. Sleep and sleep disturbances are increasingly recognized as determinants of women's health and well-being, particularly in the context of the menstrual cycle, pregnancy, and menopause. At present, however, little is known about whether fertility is affected by sleep quantity and quality. That is, to what degree, and by what mechanisms, do sleep and/or its disturbances affect fertility? The purpose of this review is to synthesize what is known about sleep disturbances in relation to reproductive capacity. A model is provided, whereby stress, sleep dysregulation, and circadian misalignment are delineated for their potential relevance to infertility. Ultimately, if it is the case that sleep disturbance is associated with infertility, new avenues for clinical intervention may be possible. Answer: Yes, the reproductive cycle does influence sleep patterns in women with sleep complaints. Premenopausal women with an irregular menstrual cycle are more likely to report sleep difficulties, such as trouble falling asleep and insomnia symptoms, compared to those with a regular cycle. These women also show increased light sleep stages and awakenings (PUBMED:20001564). The menstrual cycle is known to cause changes in circadian rhythms and sleep architecture, with many women reporting poorer sleep quality and greater sleep disturbance during the premenstrual week compared to other times of their menstrual cycle (PUBMED:25688329). Additionally, subjective and objective sleep changes occur during the menstrual cycle, with poorer sleep quality commonly reported in the premenstrual phase and menstruation, especially in women with premenstrual symptoms or painful menstrual cramps (PUBMED:30098748). Menstrual cycles are associated with changes in reproductive hormones that may influence sleep, and conditions such as sleep apnea and restless legs syndrome may be aggravated by pregnancy. Insomnia is also a common complaint during pregnancy, childbirth, and menopause (PUBMED:26821442). Furthermore, short sleep duration is associated with irregular menstrual cycles, which may impact reproductive health (PUBMED:30098748). The review of sleep, premenstrual mood disorder, and women's health suggests that sleep disturbances are particularly prevalent in those with premenstrual mood disorders, and variation in sleep patterns, circadian rhythm alterations, and decreased melatonin secretions due to hormonal fluctuations during the premenstrual phase could explain sleep complaints (PUBMED:31610482). Overall, the evidence indicates that the reproductive cycle has a significant impact on sleep patterns in women with sleep complaints, and this influence is mediated by hormonal fluctuations and associated with various stages of the reproductive cycle, including the menstrual cycle, pregnancy, and menopause.
Instruction: Does computer survey technology improve reports on alcohol and illicit drug use in the general population? Abstracts: abstract_id: PUBMED:24465720 Does computer survey technology improve reports on alcohol and illicit drug use in the general population? A comparison between two surveys with different data collection modes in France. Background: Previous studies have shown that survey methodology can greatly influence prevalence estimates for alcohol and illicit drug use. The aim of this article is to assess the effect of data collection modes on alcohol misuse and drug use reports by comparing national estimates from computer-assisted telephone interviews (CATI) and audio-computer-assisted self interviews (A-CASI). Methods: Design: Two national representative surveys conducted in 2005 in France by CATI (n = 24,674) and A-CASI (n = 8,111). Participants: French-speaking individuals aged [18]-[64] years old. Measurements: Alcohol misuse according to the CAGE test, cannabis use (lifetime, last year, 10+ in last month) and experimentation with cocaine, LSD, heroin, amphetamines, ecstasy, were measured with the same questions and wordings in the two surveys. Multivariate logistic regressions controlling for sociodemographic characteristics (age, educational level, marital status and professional status) were performed. Analyses were conducted on the whole sample and stratified by age (18-29 and 30-44 years old) and gender. 45-64 years old data were not analysed due to limited numbers. Results: Overall national estimates were similar for 9 out of the 10 examined measures. However, after adjustment, A-CASI provided higher use for most types of illicit drugs among the youngest men (adjusted odds ratio, or OR, of 1.64 [1.08-2.49] for cocaine, 1.62 [1.10-2.38] for ecstasy, 1.99 [1.17-3.37] for LSD, 2.17 [1.07-4.43] for heroin, and 2.48 [1.41-4.35] for amphetamines), whereas use amongst women was similar in CATI and A-CASI, except for LSD in the 30-44 age group (OR = 3.60 [1.64-7.89]). Reported alcohol misuse was higher with A-CASI, for all ages and genders. Conclusions: Although differences in the results over the whole population were relatively small between the surveys, the effect of data collection mode seemed to vary according to age and gender. abstract_id: PUBMED:28710967 Illicit drug use is increasing among non-medical users of prescription drugs-Results from population-based surveys 2002-2014. Background: Non-medical use of prescription drugs (NMUPD) is known to be associated with illicit drug use, but less is known about how illicit drug use has changed in NMUPD. We examined (1) the changes in illicit drug use among Finnish non-medical users of prescription drugs during the 2000s and (2) whether the trends of illicit drug use differ by non-medical use of prescription drugs in the general population. Methods: Data were derived from population-based (aged 15-69) Drug Surveys conducted in Finland in 2002, 2006, 2010 and 2014. The response rates varied between 63% and 48%. NMUPD during the last year was measured (n=252). Past-year illicit drug use among non-medical users of prescription drugs and the reference population not reporting NMUPD (n=10,967) was compared. Logistic regression was used to estimate the p-values for trends. Results: Illicit drug use has increased notably among Finnish non-medical users of prescription drugs (from 21% to 70%, p for trend&lt;0.001). This was not explained by the respondents' gender, age, employment status or alcohol use. Among the reference population, illicit drug use also increased statistically significantly, but much more moderately (from 2.5% to 5.4%). The difference between the trends was confirmed by an interaction test (p=0.022). Conclusions: NMUPD seems to be increasingly merging with illicit drug use. This indicates an increasing prevalence of polydrug use among non-medical users of prescription drugs, which may bring about more severe harms and worse health outcomes for users and more challenges in regard to treatment. abstract_id: PUBMED:34001049 The role of tobacco smoking and illicit drug use in adolescent acute alcohol intoxication. Background: This study aims to determine the prevalence of tobacco smoking and illicit drug use among Dutch adolescents admitted to hospital for acute alcohol intoxication treatment. Furthermore, socio-demographic predictors for smoking and illicit drug use in the sample population will be studied. The relationship between illicit drug use and specific characteristics of intoxication, such as blood alcohol concentration (BAC) and duration of reduced consciousness is also investigated. Methods: The national Dutch Paediatric Surveillance Unit was used to prospectively register cases of acute alcohol intoxication from 2007 through 2017. Cases were included if they met the following inclusion criteria: BAC &gt; 0.0 g/L, aged between 10 to 18 years old and requiring hospital treatment due to reduced consciousness. Questionnaires were sent to paediatricians to obtain clinical information. Results: During the period 2007-2017, 5322 cases that met the inclusion criteria were reported. In this patient group, the prevalence of tobacco smoking was 22.2% (CI 21.0-23.5%), while the prevalence of illicit drug use was 11.8% (CI 10.9-12.7%). The predictors for smoking were the absence of alcohol-specific parental rule-setting, lower educational level, non-traditional family structure and positive drug screening. The predictors for illicit drug use were the absence of alcohol-specific parental rule-setting and smoking. Illicit drug use was also associated with a lower BAC at the time of admission. Conclusions: Assessing smoking and illicit drug use among adolescents admitted for acute alcohol intoxication is important in acute cases of intoxication, for outpatient follow-up and for the purposes of prevention. The relationship between simultaneous illicit drug use and a lower BAC is of relevance for paediatricians' attempts to diagnose acute intoxication. With respect to outpatient follow-up and preventive measures, it is important to be aware that adolescents' alcohol consumption, tobacco and illicit drug use are related and, ultimately, increase the odds of using other substances. abstract_id: PUBMED:29516786 Alcohol and drug use among staff at licensed premises in Norway. Aims: There is increased concern about the use of alcohol and illicit drugs in nightlife settings. Most studies of substance use in nightlife settings are from the patrons' perspective, which leaves an understudied population - the nightclub staff. The aim of this paper is to study self-reported alcohol and substance use among staff at licensed premises in Norway: types of illicit drugs used, attitudes towards drugs, and observed drug use among patrons. Methods: A survey was conducted at server-training courses in 20 different cities in Norway during 2015. The survey included: demographics, respondents' own alcohol and drug experience, attitudes towards drug use, and observed drug use among patrons at licensed premises. Results: Data were collected from 912 staff working at licensed premises. A majority reported alcohol use in the past year, and 61% reported alcohol use two or more times a month. Overall, 45% of the respondents reported ever-used of illicit drugs. The four most commonly used drugs among staff were cannabis, cocaine, ecstasy/MDMA, and amphetamine. The majority of respondents supported Norway's strict drug laws, and 63% reported observing drug-intoxicated patrons at licensed premises during the past six months. Conclusions: The proportion of frequent drinkers and heavy episodic drinking among staff at licensed premises was high, and the prevalence of illicit drug use was much higher compared with the general population. Thus, staff at licensed premises can be considered a risk-group for alcohol and illicit drug use and therefore represent an important target population in club drug-prevention programmes. abstract_id: PUBMED:10729742 Sharp increase in illicit drug use in Greece: trends from a general population survey on licit and illicit drug use. The study aims to follow trends in the prevalence of licit and illicit substance use between 1984 and 1998 in the Greek general population. Data are drawn from two nationwide cross-sectional household surveys. Probability samples consisted of 4,297 (1984) and 3,759 (1998) 12- to 64-year-old respondents. Regular tobacco smoking in the total sample increased in women and decreased in men while alcohol consumption remained at similar levels. The use of illicit drugs showed a 3-fold increase since 1984 reaching a lifetime prevalence of 19% in males and 6% in females. In contrast, a significant decrease was noted in the unprescribed use of psychoactive medicines, especially among women. Trends in licit and illicit substance use can be explained by sociocultural and policy changes in Greece in the last 15 years. abstract_id: PUBMED:16361206 The management of alcohol, tobacco and illicit drug use problems by general practitioners in Australia. The aim of this study was to document the frequency of the management of illicit drug, alcohol and tobacco problems in general practice in Australia. Data from the Bettering the Evaluation and Care of Health (BEACH) study of general practice, April 1998 to March 2003, were analysed. BEACH is an ongoing national study of general practice in Australia. Each year a random sample of approximately 1000 general practitioners (GPs) participate, each providing details of 100 patient encounters. Samples are drawn from the Medicare data held by the Health Insurance Commission. Patient demographic breakdowns, medication, other treatment, referrals and other medical procedures ordered were examined for all problems labelled by GPs as illicit, alcohol and tobacco problems. Annually in Australia, it was estimated that 615,000 GP encounters--or 0.6% of all encounters--involved the management of illicit drug use problems presumably most commonly for problematic heroin use. Despite a much higher population prevalence of use and use disorders, the management of alcohol or tobacco use problems was less common, with 0.4% and 0.3% of encounters, respectively, comprising treatment of these problems. Clear demographic differences existed across the groups. The management of problems also differed, with illicit drug use problems more likely to involve provision of medication, and alcohol and tobacco treatment more likely to involve counselling and/or health advice. Despite higher rates of alcohol and tobacco use problems among patients seeing GPs in Australia, the rate of treatment for such problems was relatively lower than it was for illicit drug use problems. More efforts need to be directed towards assisting GPs to identify and target problematic alcohol and tobacco use among their patients. abstract_id: PUBMED:33103757 Socioeconomic inequalities in tobacco, alcohol and illicit drug use: evidence from Iranian Kurds. Background: The associations between socioeconomic status (SES) and tobacco use, alcohol consumption and drug use are poorly understood in the Islamic Republic of Iran. Aims: To measure education- and wealth-related inequalities in cigarette smoking, hookah smoking, illicit drug use and alcohol consumption in Kermanshah Province, Islamic Republic of Iran. Methods: We used baseline data from the Ravansar Noncommunicable Disease (RaNCD) study. The study collected information on socioeconomic and demographic characteristics, cigarette and hookah smoking, alcohol consumption and illicit drug use of 10 015 adults aged ≥ 35 years between 2014 and 2016. The relative concentration index and absolute concentration index were used to measure education- and wealth-related inequalities in cigarette smoking, hookah smoking, illicit drug use and alcohol consumption. Results: Cigarette smoking was concentrated among less-educated and less-wealthy men and women. Similarly, illicit drug use was concentrated among lower-SES men. In contrast, hookah smoking and alcohol consumption were more prevalent among higher-SES men. Conclusions: There were education- and wealth-related inequalities in tobacco, alcohol and illicit drug use in the west of the Republic of Iran. Future studies should aim to identify the main socioeconomic determinants of these inequalities in Kermanshah Province and generally in the Islamic Republic of Iran. abstract_id: PUBMED:30395766 More than just Alcohol: Marijuana and Illicit Drug Use at Parties Attended by 15-20 Year Olds. Background: Parties are a common setting for marijuana and illicit drug use among adolescents. Objectives: This study examined the context of parties with alcohol, marijuana and illicit drug use attended by adolescents and young adults. Methods: In 2016, an address-based sample of 1,764 15-20-year-olds in 24 U.S. communities participated in an online survey. Parties were categorized as alcohol-only (Alc-only), marijuana + alcohol (Mj + Alc), and illicit + marijuana + alcohol (ID + Mj + Alc) based on survey participants' observations and self-reported drug use at the last party attended. Multivariable logistic regression was used to identify correlates of substance use at parties. Results: 1,089 participants (61.7%) reported ever attending a party where alcohol was consumed. Of those, 60.1% reported that the last party they attended had Alc-only, 24.9% had Mj + Alc, and 10.0% had ID + Mj + Alc. Older participants were more likely to attend a party with Mj + Alc or ID + Mj + Alc. Participants whose mother had a college degree (compared to less than a college degree) were less likely to attend a party with ID + Mj + Alc. Parties with Mj + Alc and ID + Mj + Alc were larger and the majority of attendees were under 21. Parties with ID + Mj + Alc were more likely to be majority female compared to mixed gender. Parties with Mj + Alc were more likely to occur at someone else's home and be in states where medical and recreational marijuana use was legal. Conclusion: One-third of parties attended by 15-20-year-olds had marijuana and/or illicit drug use in addition to alcohol consumption. The identified risk factors of parties with marijuana and illicit drug use can be used to develop targeted prevention strategies. abstract_id: PUBMED:31133937 Psychiatric Diagnoses Differ Considerably in Their Associations With Alcohol/Drug-Related Problems Among Adolescents. A Norwegian Population-Based Survey Linked With National Patient Registry Data. The aim of this study was to examine alcohol/drug use and problems across psychiatric diagnoses and to what extent associations between each psychiatric diagnosis and alcohol/drug use and problems were independent from the potential confounding effects of psychiatric comorbidity, socioeconomic status, sex and age. We used a dataset comprising a linkage between a large population-based and cross-sectional study among Norwegian adolescents (the youth@hordaland conducted in 2012) and national registry-based data on specialist mental health care use during the 4 years prior to the survey (2008 to 2011). The study sample included 16 to 19 year olds who participated in the youth@hordaland survey and consented to the linkage with patient registry data (n = 9,408). Among these, 853 (9%) had received specialist mental health care and comprised the clinical group, while the rest (n = 8,555) comprised the comparison group. The main outcome variables were several self-reported indicators for alcohol/drug use, including any alcohol use, frequent alcohol intoxication, high-level alcohol consumption, and lifetime illicit drug use, as well as one indicator for potential alcohol/drug-related problems: a positive CRAFFT-score. Adolescents receiving specialist mental health care (n = 853) reported more frequently alcohol/drug use and problems compared to adolescents not receiving these services (Cohens d's ranging from 0.09 to 0.29, all p ≤ 0.01). Anxiety, depression, conduct disorders, eating disorders, ADHD, and trauma-related disorders were all associated with single measures of alcohol/drug use and problems, with odds ratios (ORs) ranging from 1.58 to 4.63, all p &lt; 0.05) in unadjusted models. Trauma-related disorders, depression and conduct disorders were also positively associated with higher scores on a combined indicator of alcohol/drug use and problems (ORs ranging from 1.89 to 3.15, all p &lt; 0.01), even after the full adjustment from psychiatric comorbidity and sociodemographic variables (adjusted odds ratios ranging from 1.61 to 2.79, p &lt; 0.05). These results suggest that alcohol/drug use and problems were slightly more common among adolescents who received specialist mental health care during the past 4 years compared with the general adolescent population, and adolescents with trauma-related disorders, depression and conduct disorders were high-risk groups for alcohol/drug use and problems. abstract_id: PUBMED:16019976 Alcohol use disorder and illicit drug use in admissions to general hospitals in the United States. This study estimated the prevalence and explored the management of illicit drug use, illicit drug use associated with alcohol use disorder (AUD), and AUD without reported illicit drug use in a national sample of 2040 admissions to general hospitals in the United States. Surveyed in 1994, admissions were diagnosed with past 12-month DSM-IV AUD according to the Alcohol Use Disorders and Associated Disabilities Interview Schedule. Information about drug use was also included in the interview. Entries in hospital records were used to operationalize management. Prevalence of chronic drug use in hospital admissions was 5%, 14% in 18-44-year-old admissions, and 31% in admissions with an AUD. In admissions with an AUD, 45% reported no drug use. Detection rates were 82% for admissions with comorbid AUD and chronic drug use (where detection of either problem was assessed); detection rates hovered around 50% in admissions with one or the other condition. Low rates of treatment and referral (33% and 42%, respectively) were observed in the comorbid group; rates were 13-17% in admissions with AUD alone or illicit drug use alone. Findings indicate the need for increased attention to drug use and to AUD with and without other drug use among general hospital admissions. Answer: Yes, computer survey technology can improve reports on alcohol and illicit drug use in the general population. A study comparing national estimates from computer-assisted telephone interviews (CATI) and audio-computer-assisted self interviews (A-CASI) in France found that A-CASI provided higher use reports for most types of illicit drugs among the youngest men, and reported alcohol misuse was higher with A-CASI for all ages and genders (PUBMED:24465720). This suggests that the anonymity and privacy afforded by computer-assisted self-interviews may encourage more honest reporting of sensitive behaviors such as alcohol misuse and drug use.
Instruction: Is age-related macular degeneration associated with serum lipoprotein and lipoparticle levels? Abstracts: abstract_id: PUBMED:12417100 Is age-related macular degeneration associated with serum lipoprotein and lipoparticle levels? Objective: The etiology of age-related macular degeneration (ARMD) is poorly understood. Risk factors for cardiovascular disease have been thought to be associated with ARMD. Our purpose was to measure the concentration of atherogenic apolipoproteins (apo) and lipoparticles in serum from ARMD patients. Methods: We analyzed lipids, lipoparticles and apolipoproteins concentrations in 84 unrelated patients with ARMD and compared the results with those of age- and sex-matched control subjects (n=62). Serum lipid concentrations were determined enzymatically; apolipoproteins levels by kinetic nephelometry and lipoparticles by electroimmunodiffusion. Results: No difference in total cholesterol, triglycerides, phospholipids, high- and low-density lipoprotein-cholesterol (lHDL-C and LDL-C) concentrations were observed between ARMD patients and controls. Apo E and LpE non-B concentrations were found to be higher in serum from patients than in serum from controls. In contrast, Apo C-III and LpC-III non-B concentrations were lower in serum from patients than in serum from controls. Conclusions: The main differences observed between ARMD patients and controls are in Apo E, Apo C-III, LpC-III non-B and LpE non-B concentrations. These lipoparticles belong to the HDL family, which is considered to consist of anti-atherogenic lipoproteins. These results raise the possibility that cardiovascular risk factors are not associated with ARMD. Furthermore, we can hypothesize that ARMD development is linked to perturbations of HDL metabolism. abstract_id: PUBMED:15963792 Lycopene but not lutein nor zeaxanthin decreases in serum and lipoproteins in age-related macular degeneration patients. Background: Epidemiological studies have established that a low serum concentration of carotenoids was associated with risk of Age-Related Macular Degeneration (ARMD). The aim of this study was to determine carotenoid levels in serum and in different lipoprotein fractions in patients diagnosed for ARMD and in matched control group. Method: Thirty-four ARMD patients and 21 control subjects from Brest area (France) have been included to this study. Lipoproteins have been separated from serum by gradient density ultracentrifugation. We measured concentration of carotenoids and tocopherols in serum and in different lipoprotein fractions by HPLC. Results: No difference was observed between ARMD patients and control subjects in total serum carotenoids. Individual carotenoid levels showed that only lycopene was decreased significantly in serum, LDL and HDL fractions in patients (P&lt;0.05). Concentrations in serum and lipoparticle fractions of lutein and zeaxanthin, the major pigments present in macula were not modified between both groups. Conclusions: Lycopene, as liposoluble antioxidant nutrient, is the only carotenoid altered in ARMD patients. It cannot be excluded that this effect is related to different dietary habits, but we hypothesise that lower lycopene status could result also from specific antioxidant protection of lutein and zeaxanthin by lycopene. abstract_id: PUBMED:26641553 Genetic Variants and Systemic Complement Activation Levels Are Associated With Serum Lipoprotein Levels in Age-Related Macular Degeneration. Purpose: Genetic variants in genes encoding components of lipid metabolism have been associated with AMD. The aims of this study were to evaluate the relation of these genetic variants with serum lipid levels in AMD in a large case-control cohort (n = 3070) and to test for correlations between lipids and complement activation. Methods: Single nucleotide polymorphisms (SNPs) in eight lipid metabolism genes, previously described to be associated with AMD, were genotyped and tested for their association in our case-control cohort. Serum apolipoprotein B (ApoB), apolipoprotein AI (Apo-AI), cholesterol, triglycerides (TG), high-density lipoprotein-cholesterol (HDLC), and complement activation levels (C3d/C3) were measured and tested for association with AMD. Non-HDL cholesterol and LDL were inferred based on the measurements of the other lipids and lipoproteins. General linear models and χ2 tests were used to evaluate the relation of SNPs and lipids/lipoproteins to the disease as well as their interrelations. Results: Significant genotypic associations with AMD were observed for SNPs in CETP, APOE, and FADS1. The serum levels of Apo-AI and HDLC were significantly higher in patients compared with controls. Triglycerides (TG) levels were lower in AMD compared with controls. A cumulative effect was observed for APOE and CETP genotypes on HDLC and Apo-AI levels. Complement activation levels correlated positively with HDLC and Apo-AI, and negatively with TG. Both the lipids/lipoproteins and the complement activation levels associate independently to AMD. Conclusions: This study bridges the gap between genetic associations and physiological lipid levels in AMD. Additionally, the observed correlations between complement activation and lipid levels link two major systems that previously were always assessed independently. abstract_id: PUBMED:35629418 SIRT1: Genetic Variants and Serum Levels in Age-Related Macular Degeneration. Background: The aim of this paper was to determine the frequency of SIRT1 rs3818292, rs3758391, rs7895833 single nucleotide polymorphism genotypes and SIRT1 serum levels associated with age-related macular degeneration (AMD) in the Lithuanian population. Methods: Genotyping of SIRT1 rs3818292, rs3758391 and rs7895833 was performed using RT-PCR. SIRT1 serum level was determined using the ELISA method. Results: We found that rs3818292 and rs7895833 were associated with an increased risk of developing exudative AMD. Additional sex-differentiated analysis revealed only rs7895833 was associated with an increased risk of developing exudative AMD in women after strict Bonferroni correction. The analysis also revealed that individuals carrying rs3818292, rs3758391 and rs7895833 haplotype G-T-G are associated with increased odds of exudative AMD. Still, the rare haplotypes were associated with the decreased odds of exudative AMD. After performing an analysis of serum SIRT1 levels and SIRT1 genetic variant, we found that carriers of the SIRT1 rs3818292 minor allele G had higher serum SIRT1 levels than the AA genotype. In addition, individuals carrying at least one SIRT1 rs3758391 T allele also had elevated serum SIRT1 levels compared with individuals with the wild-type CC genotype. Conclusions: Our study showed that the SIRT1 polymorphisms rs3818292 and rs7895833 and rs3818292-rs3758391-rs7895833 haplotype G-T-G could be associated with the development of exudative AMD. Also, two SNPs (rs3818292 and rs3758391) are associated with elevated SIRT1 levels. abstract_id: PUBMED:21613373 Evaluation of serum lipid concentrations and genetic variants at high-density lipoprotein metabolism loci and TIMP3 in age-related macular degeneration. Purpose: To analyze the association between polymorphisms in the TIMP3 gene and genes of the high-density lipoprotein (HDL) metabolism and age-related macular degeneration (AMD), and evaluate serum lipid and lipoprotein levels in AMD patients compared with control individuals. Methods: Single nucleotide polymorphisms in or near the TIMP3, ABCA1, FADS1-3, CETP, LIPC, and LPL genes were genotyped. Serum levels of apolipoprotein B (ApoB), apolipoprotein A1, lipoprotein a, cholesterol, triglycerides, and HDL-cholesterol were determined. Results: Significant associations were found between AMD and variants in ABCA1 and FADS1-3, and a nearly significant association in TIMP3. No significant associations were observed for variants in LPL, LIPC, and CETP. We also observed a significant elevation of ApoB levels in serum of AMD patients. Other lipids and lipoproteins were not significantly altered. Conclusions: These results confirm associations of AMD with variants near the TIMP3 gene and at loci involved in HDL metabolism. They further highlight a role of the extracellular matrix and the HDL metabolism in the pathogenesis of AMD. This study identified increased ApoB levels as a possible new serum biomarker for AMD. abstract_id: PUBMED:22030834 Serum paraoxonase phenotype distribution in exudative age-related macular degeneration and its relationship to homocysteine and oxidized low-density lipoprotein. Purpose: Disequilibrium between oxidative stress and antioxidant levels has been proposed as an important case of exudative age-related macular degeneration (AMD). The aim of the present study was to investigate homocysteine (Hcy) level and antioxidant paraoxonase 1 (PON1) activity within its phenotypes together with oxidized low-density lipoprotein (OX-LDL) levels in the patients with exudative AMD. Methods: Serum PON1 activity and plasma Hcy and OX-LDL levels were analyzed in 45 exudative AMD patients and compared with 45 healthy controls. Paraoxonase 1 activity was measured in serum using paraoxon and phenylacetate as substrates. The PON1 phenotype was determined using double-substrate method. Homocysteine and OX-LDL levels were determined by enzyme-linked immunosorbent assay method. Results: The distribution of PON1 phenotypes was significantly different between the patients with exudative AMD and control subjects (chi-square = 6.17, P = 0.01). AA phenotype with low activity was significantly more frequent in exudative AMD patients compared with healthy subjects (62.2% vs. 35.6%, respectively). Other phenotype frequencies in the patients compared with controls were as AB phenotype (intermediate activity) 28.9% versus 46.7% and BB phenotype (high activity) 8.9% versus 17.8%, respectively. Except in BB phenotype (P = 0.2), patients with AA and AB phenotypes had higher plasma Hcy levels in comparison to those of controls (P = 0.02 and P = 0.03, respectively). The mean OX-LDL levels, in all 3 phenotypes (P &lt; 0.05), and OX-LDL/high-density lipoprotein ratio, in AA and AB phenotypes (P = 0.001, P = 0.1, respectively) but not in BB (P = 0.1), were significantly higher in the patients than controls. No significant differences in comparison of Hcy and OX-LDL levels between 3 PON1 phenotypes in both control (P = 0.6 for Hcy, P = 0.7 for OX-LDL) and patients (P = 0.8 for Hcy, P = 0.6 for OX-LDL) were found Conclusion: Increased plasma OX-LDL levels and ratios of OX-LDL/high-density lipoprotein, as biomarkers of lipoprotein oxidative stress, higher levels of Hcy, as oxidant agent, and more common low or intermediate PON1 activity in patients with exudative AMD, compared with controls, indicate that PON1 activity is insufficient to explain the increased oxidative stress observed in exudative AMD. abstract_id: PUBMED:30010042 Evaluation of serum SLCO1B1 levels and genetic variants of SLCO1B1 rs4149056 and rs2306283 in patients with early and exudative age-related macular degeneration. Purpose: To determine SLCO1B1 rs4149056 and rs2306283 gene polymorphisms and SLCO1B1 serum levels in patients with early and exudative age-related macular degeneration. Materials And Methods: The study enrolled 206 patients with exudative AMD, 253 patients with early AMD and 301 control subjects. DNA was extracted from peripheral venous blood leukocytes using commercial kits. Genotyping of SLCO1B1 rs4149056 and rs2306283 was carried out using a real-time polymerase chain reaction (RT-PCR) method. Serum SLCO1B1 levels were measured using SLCO1B1 ELISA kit. Results: We found statistically significant differences in genotype (T/T, T/C and C/C) distribution of SLCO1B1 rs4149056 variant between the patients with exudative AMD and control group (52.4%, 47.6% and 0% vs. 64.8%, 31.6% and 13.7%, respectively, p &lt; 0.001). Univariate binary logistic regression analysis showed that age was a risk factor for exudative AMD development. Also, T/C variant was associated with 1.9-fold increased Odds ratio of exudative AMD development under a codominant model (OR = 1.863; 95% CI: 1.290;2.689; p &lt; 0.001). The results remained of the same statistical significance after multivariate analysis. On the other hand, C allele was associated with 1.6-fold increased odds ratio of exudative AMD development (OR = 1.563; 95% CI: 1.035;2.359; p = 0.034) only after adjustment for age. No significant associations were found in analysis of genotypes and alleles at rs2306283. Serum SLCO1B1 concentration was significantly higher in early AMD patients than in healthy controls (median, IQR: 2.92 ng/ml, 5.01 ng/ml versus 1.26 ng/ml, 2.63 ng/ml, respectively, p = 0.025), as well as in exudative AMD patients than in controls (median, IQR: 2.72 ng/ml, 5.71 ng/ml versus 1.26 ng/ml, 2.63 ng/ml, respectively, p = 0.002). Furthermore, subjects with rs4149056 T/C genotype had higher SLCO1B1 serum levels than those with T/T genotype (median, IQR: 3.73 ng/ml, 3.14 ng/ml versus 1.23 ng/ml, 1.47 ng/ml, respectively, p = 0.037). Conclusion: Our study determined that SLCO1B1 (c.521 T &gt; C) rs4149056 T/C genotype and C allele may be associated with exudative age-related macular degeneration, as well as with elevated serum SLCO1B1 levels. Also, higher serum SLCO1B1 levels were found to be associated with early and exudative age-related macular degeneration. abstract_id: PUBMED:35463114 To Study the Role of Serum Homocysteine Levels and Ocular Involvement in Retinal Vasculitis. Idiopathic retinal vasculitis is a chronic disease of unknown aetiology which results in ocular morbidity in patients of productive age group. Homocysteine has been implicated in various ocular conditions like age-related macular degeneration, retinal vein occlusions, diabetic retinopathy, and optic nerve diseases. We conducted a study to investigate the relation between serum homocysteine levels and retinal vasculitis at a tertiary care centre in North India. In this case-control study, 32 cases and 64 controls were included and the duration was from June 2017 to March 2019. Serum homocysteine of cases and controls was detected by reflectance photometry using VITROS Chemistry Products HCY 2 (Homocysteine) and the normal range of serum homocysteine as per this method was 6.60 to 14.80 micro mol per litre. Our study found that of the total 32 retinal vasculitis patients, serum homocysteine was raised in 65.62% (21 cases out of 32) and in 70.31% of control group (45 out of 64). Chi square test results showed that there was no significant association found between S. homocysteine levels and Reticular vasculitis (P: 0.64). The two groups were comparable in terms of the age with mean ± SD in cases being 33.47 ± 8.336 years and controls being 35.16 ± 8.568 years with a P value of 0.37 being statistically insignificant. The data collected was analysed using SSPS-16 (Statistical Package for Social Sciences Version 16) software and test of association was Odd's ratio. Power of study was 80% and P &lt; 0.05 is considered statistically significant. We found that there is no significant association between raised serum homocysteine and retinal vasculitis (with P &lt; 0.64). Odds ratio was 0.80(0.33-1.99) implying that the cases and controls were statistically significantly different with respect to homocysteine levels. abstract_id: PUBMED:38111140 Association of TAS2R16 gene (rs860170, rs978739, rs1357949) polymorphisms and TAS2R16 serum levels in patients with age-related macular degeneration. Background: The aim of this study is to determine the association of TAS2R16 (rs860170, rs978739, rs1357949) gene polymorphisms and TAS2R16 serum levels in patients with the occurrence of age-related macular degeneration (AMD). Methods: Subjects with early AMD, subjects with exudative AMD, and healthy controls participated in the study. DNA was isolated by salting out leukocytes from peripheral venous blood. Single nucleotide polymorphisms (SNPs) were analysed by RT-PCR. TAS2R16 levels were determined by enzyme-linked immunosorbent assay (ELISA) using the Abbexa Human Taste Receptor Type 2 Member 16 (TAS2R16) ELISA kit. Statistical data analysis was performed using "IBM SPSS Statistics 27.0" and SNPstats statistical data analysis programmes. Results: The TAS2R16 rs860170 TT genotype is statistically significantly less frequent in the exudative AMD group than in the control group, whereas the TAS2R16 rs860170 C allele gene is statistically significantly more frequent in the exudative AMD group. Each C allele of TAS2R16 rs860170 is associated with a 2.8-fold increased probability of occurrence of exudative AMD. The C allele of TAS2R16 rs860170 is statistically significantly more frequent in men and women with exudative AMD than in the control group. The C allele of TAS2R16 rs860170 is associated with a 2.8-fold increased odds of occurrence of exudative AMD in women and a 2.9-fold increased odds of occurrence of exudative AMD in men. In TAS2R16 (rs860170, rs978739, and rs1357949), the T-T-A haplotype is associated with a 2.6-fold decreased likelihood of developing early AMD and the T-T-A haplotype is associated with a 3.2-fold decreased likelihood of developing early AMD in women. For TAS2R16 (rs860170, rs978739, and rs1357949), carriers of the T-T-G and T-T-A haplotypes are associated with a 2.2- and 3.2-fold decreased probability of exudative AMD, respectively. Individuals with the C-C-A haplotype are 9.2-fold more likely to develop exudative AMD. Specifically, the C-C-A haplotype is associated with a 9.3-fold increased likelihood of exudative AMD in men. In contrast, women with the T-T-A haplotype are 5.6-fold less likely to develop exudative AMD. Conclusion: TAS2R16 plays an important role in the development of AMD. abstract_id: PUBMED:28698208 Plasma lipoprotein subfraction concentrations are associated with lipid metabolism and age-related macular degeneration. Disturbance in lipid metabolism has been suggested as a major pathogenic factor for age-related macular degeneration (AMD). Conventional lipid measures have been inconsistently associated with AMD. Other factors that can alter lipid metabolism include lipoprotein phenotype and genetic mutations. We performed a case-control study to examine the association between lipoprotein profile and neovascular AMD (nAMD) and whether the cholesterylester transfer protein (CETP) D442G mutation modulates these associations. Patients with nAMD had significantly higher concentrations of HDL and IDL compared with controls. The increase in HDL particles in nAMD patients was driven by an excess of medium-sized particles. Concurrently, patients with nAMD also had lower Apo A-1, lower VLDL and chylomicron lipoprotein. Many of these associations showed a dose-dependent association between controls, early AMD cases, and nAMD cases. Adjustment for the presence of the D442G mutation at the CETP locus did not significantly alter the increased AMD risk associated with HDL particle concentration. AMD is associated with variation in many lipoprotein subclasses, including increased HDL and IDL particles and decreased Apo A-1, VLDL, and chylomicron particles. These data suggest widespread systemic disturbance in lipid metabolism in the pathogenesis of AMD, including possible alterations in lipoprotein carrier capacity. Answer: Yes, age-related macular degeneration (AMD) is associated with serum lipoprotein and lipoparticle levels. Several studies have investigated the relationship between AMD and various lipids, lipoproteins, and genetic factors that influence lipid metabolism. One study found no difference in total cholesterol, triglycerides, phospholipids, high-density lipoprotein-cholesterol (HDL-C), and low-density lipoprotein-cholesterol (LDL-C) concentrations between AMD patients and controls. However, it did observe differences in apolipoprotein E (Apo E), Apo C-III, LpC-III non-B, and LpE non-B concentrations, suggesting that AMD development might be linked to perturbations of HDL metabolism (PUBMED:12417100). Another study reported that lycopene levels were significantly decreased in serum, LDL, and HDL fractions in AMD patients, while concentrations of lutein and zeaxanthin, which are major pigments in the macula, were not modified between both groups (PUBMED:15963792). Genetic variants in genes encoding components of lipid metabolism have been associated with AMD. Significant associations with AMD were observed for single nucleotide polymorphisms (SNPs) in CETP, APOE, and FADS1 genes. Serum levels of Apo-AI and HDLC were significantly higher in patients compared with controls, and triglyceride levels were lower in AMD compared with controls. Complement activation levels correlated positively with HDLC and Apo-AI, and negatively with TG, linking two major systems that were previously assessed independently (PUBMED:26641553). Increased ApoB levels in serum of AMD patients were identified, suggesting a possible new serum biomarker for AMD (PUBMED:21613373). Additionally, the distribution of paraoxonase 1 (PON1) phenotypes was significantly different between exudative AMD patients and control subjects, with the AA phenotype with low activity being more frequent in exudative AMD patients (PUBMED:22030834). Furthermore, SLCO1B1 rs4149056 and rs2306283 gene polymorphisms and elevated serum SLCO1B1 levels were found to be associated with early and exudative AMD (PUBMED:30010042).
Instruction: Plasma endothelin in congestive heart failure: a predictor of cardiac death? Abstracts: abstract_id: PUBMED:8798108 Plasma endothelin in congestive heart failure: a predictor of cardiac death? Background: Endothelin is a potent vasoconstrictor, with growth-promoting and possible cardiotoxic and arrhythmogenic properties. This study investigates the association between increased plasma endothelin in congestive heart failure and prognosis. Methods And Results: Forty-four patients with congestive heart failure showed increased plasma endothelin concentrations (mean +/- SD, 3.7 +/- 1.7 pg/mL; n = 44) compared with healthy volunteers (2.0 +/- 0.4 pg/mL, n = 21, P &lt; .00003). Plasma endothelin increased with the severity of the disease. All patients were followed for an average of 17 months (range, 2-25 months). Cardiac mortality was 27% (8 of 30 patients) in the group with plasma endothelin concentrations above 3 pg/mL and 0% (0 of 14 patients) in the group with plasma endothelin concentrations below 3 pg/mL (P &lt; .05). In contrast, separation of patients based on different New York Heart Association groups and ejection fractions revealed no significant difference regarding this endpoint. Conclusions: Increased plasma endothelin concentrations in patients with congestive heart failure appears to serve as a predictor of cardiac death. abstract_id: PUBMED:9043842 Prognostic value of plasma endothelin-1 in patients with chronic heart failure. Aims: Endothelin-1 is a potent vasoconstrictive and multifunctional peptide. Elevated concentrations have been reported in congestive heart failure. We hypothesized that the level of endothelin-1 in plasma is a prognostic marker in congestive heart failure. Methods And Results: Plasma levels of endothelin-1 were measured by radioimmunoassay in 120 congestive heart failure patients with ischaemic or non-ischaemic cardiomyopathy (mean ejection fraction 28 +/- 11%, in New York Heart Association (NYHA) functional class I:21, class II 35, class III: 61, class IV: 3). During a median follow-up of 361 +/- 338 days, 14 cardiac deaths occurred. In the univariate Cox model, endothelin-1 was the most powerful prognostic marker among the variables tested (P = 0.0001). A multivariate model, including plasma atrial natriuretic peptide and noradrenaline, NYHA class, age, and echocardiographic left ventricular end-diastolic diameter index was highly predictive of mortality (P = 0.00008), but only endothelin-1 remained significantly associated with outcome (P = 0.02). Patients with plasma endothelin-1 &gt; or = 5 pg. ml-1 had a higher mortality rate than those with endothelin-1 &lt; 5 pg. ml-1 (21% vs 4%, P = 0.001). Conclusion: Our results suggest that elevated endothelin-1 plasma levels are associated with a poor prognosis and routine plasma endothelin-1 determination provides important prognostic information in mild to moderate heart failure. abstract_id: PUBMED:11897436 Plasma oxidized low-density lipoprotein as a prognostic predictor in patients with chronic congestive heart failure. Objectives: The aim of this study was to evaluate the relationship between plasma oxidized low-density lipoprotein (oxLDL), a marker of oxidative stress, and the prognosis of patients with chronic congestive heart failure (CHF). Background: Oxidative stress appears to play a role in the pathophysiology of CHF. We have recently reported the usefulness of plasma oxLDL as a marker of oxidative stress in CHF patients with dilated cardiomyopathy. Methods: We measured the plasma level of oxLDL by sandwich enzyme-linked immunosorbent assay using a specific monoclonal antibody against oxLDL in 18 age-matched normal subjects and in 84 patients with chronic CHF (New York Heart Association functional class II to IV) and monitored them prospectively for a mean follow-up period of 780 days. Results: Plasma oxLDL level was significantly higher in severe CHF patients than in control subjects and mild CHF patients. A significant negative correlation existed between the plasma level of oxLDL and left ventricular ejection fraction (LVEF) and a significant positive correlation between the plasma level of oxLDL and plasma norepinephrine level. Twenty-six patients had cardiac events; 14 had cardiac death and 12 were hospitalized for heart failure or other cardiovascular events. Among 10 variables including LVEF and neurohumoral factors, only high plasma levels of brain natriuretic peptide and oxLDL were shown to be independent predictors of mortality. Conclusions: These results indicate that the plasma level of oxLDL is a useful predictor of mortality in patients with CHF, suggesting that oxidative stress plays an important role in the pathophysiology of CHF. abstract_id: PUBMED:15357108 The value of plasma levels of tumor necrosis factor-alpha and interleukin-6 in predicting the severity and prognosis in patients with congestive heart failure. Background: High plasma levels of pro-inflammatory cytokines play an important role in the pathophysiology of congestive heart failure (CHF). Therefore, we conducted a case-control study to determine the correlations between plasma levels of cytokines, i.e., tumor necrosis factor-alpha (TNF-alpha) and interleukin (IL)-6, and the severity and mortality in patients with CHF. Methods: One-hundred and 18 cases (62+/-15 years old) were classified into 3 groups: group 1 comprised 44 control cases with normal coronary arteriogram and left ventriculography and without valvular disorders or cardiomyopathy; group 2 comprised of 37 cases with mild CHF in New York Heart Association (NYHA) functional class (FC) II; group 3 had 37 cases with moderate/severe CHF in NYHA FC III or IV. Pre-catheterization plasma levels of TNF-alpha and IL-6 along with clinical and hemodynamic variables and follow-up data of cardiac death were assessed. Results: Patients of group 3 had smaller body mass index, lower systolic and diastolic blood pressures, faster heart rates, higher left ventricular end-diastolic pressure and lowered triglyceride levels than the patients of groups 1 and 2. The plasma levels of TNF-alpha and IL-6 increased significantly in patients of group 3 in comparison with patients of groups 1 and 2 (both p &lt; 0.001). Over the following 1.5 years, 13 patients died. Univariate analysis identified the following variables to be associated with poor prognosis: NYHA FC (p &lt; 0.001), plasma TNF-alpha (p = 0.013), plasma IL-6 (p &lt; 0.001), systolic blood pressure (p = 0.001), heart rate (p = 0.045) and left ventricular end-diastolic pressure (p = 0.021). Multivariate Cox regression analysis identified the independent predictors of cardiac death as FC (p = 0.007) and plasma IL-6 (p = 0.021). Conclusions: Our findings indicate that the plasma levels of IL-6 and TNF-alpha and especially the former, is a useful marker to correlate the progression of severity and late cardiac death in patients with CHF. abstract_id: PUBMED:8207366 Relationships between plasma levels of catecholamines and neuropeptides and the survival time in patients with congestive heart failure. Objectives: To evaluate the importance of various variables reflecting neuroendocrine activation in relation to prognosis in patients with congestive heart failure (CHF). Design: Plasma levels of noradrenaline, adrenaline, neuropeptide Y, substance P, calcitonin gene-related peptide, vasopressin and atrial natriuretic peptide were measured in patients presenting with CHF and related to survival time. Survival time was calculated from the time-point of this investigation until the month when the patient died or until December 1992. The follow-up period ranged up to 60 months. Seven of the 31 patients were still alive at the end of this period. Setting: Lund University Hospital, Sweden. Subjects: Thirty-one patients with CHF due to ischaemic heart disease. Six patients had a degree of heart failure corresponding to NYHA I-II and 25 corresponding to NYHA III-IV. Ten of these 25 patients were on angiotensin converting enzyme (ACE) inhibition in addition to therapy with digoxin and diuretics. The catecholamine and neuropeptide levels were compared to those of a control group of 31 healthy subjects aged 20-80 years. Intervention: There were not any specific intervention in addition to the treatment for heart failure as outlined above. Main Outcome Measures: Survival time until death from heart disease or until the end of the follow up period in December 1992. Results: There were inverse relationships between survival time on one hand and plasma values of noradrenaline (r = -0.49; P &lt; 0.01) and atrial natriuretic peptide (r = -0.49; P &lt; 0.01) on the other hand. There was a significant correlation between the plasma values of noradrenaline and atrial natriuretic peptide (r = 0.50; P &lt; 0.01). Conclusion: Increased plasma levels of noradrenaline and atrial natriuretic peptide are of major importance as prognostic markers in patients with CHF. abstract_id: PUBMED:15997451 Preoperative plasma N-terminal pro-brain natriuretic peptide as a marker of cardiac risk in patients undergoing elective non-cardiac surgery. Background: Plasma N-terminal pro-brain natriuretic peptide (NTproBNP) is a sensitive marker for heart failure. This study tested whether the preoperative plasma level of NTproBNP could predict cardiac complications in patients undergoing non-cardiac surgery. Methods: A total of 190 consecutive patients who underwent elective non-cardiac surgery that required general anaesthesia were studied. In addition to routine preoperative evaluation, a blood sample was taken for estimation of plasma NTproBNP concentration. Postoperative cardiac complications were defined as cardiac death, acute coronary syndrome, heart failure and haemodynamic compromise from cardiac arrhythmias. Results: Fifteen of the 190 patients had a cardiac complication: four had acute coronary syndrome and 13 had congestive heart failure. NTproBNP concentration was significantly higher in patients with a cardiac complication; a level greater than 450 ng/l was predictive of cardiac complications with a sensitivity of 100 per cent and a specificity of 82.9 per cent. Other factors associated with cardiac complications were a higher American Society of Anesthesiologists grade, age and clinical cardiac impairment, but in a multivariate analysis NTproBNP level was the only independent factor. Conclusion: Preoperative plasma NTproBNP concentration may be an independent predictor of cardiac complications in patients undergoing non-cardiac surgery. abstract_id: PUBMED:11079662 High levels of plasma brain natriuretic peptide and interleukin-6 after optimized treatment for heart failure are independent risk factors for morbidity and mortality in patients with congestive heart failure. Objectives: The aim of this study was to evaluate whether repetitive measurements of plasma levels of neurohumoral factors and cytokines before and after additional treatment are useful for predicting mortality in patients with congestive heart failure (CHF). Background: Neurohumoral and immune activation play an important role in the pathophysiology of CHF. However, the effects of serial changes in these factors on the prognostic value remain unknown. Methods: We measured plasma levels of neurohumoral factors and cytokines and left ventricular ejection fraction (LVEF) before and three months after optimized treatment for CHF in 102 consecutive patients with severe CHF (New York Heart Association class III to IV) on admission to our hospital. Physicians who were blind to the plasma neurohumoral factors until study completion treated patients using standard drugs. Patients were monitored for a mean follow-up period of 807 days. Results: Plasma levels of neurohumoral factors, cytokines and LVEF were significantly improved three months after optimized treatment. Cardiac death occurred in 26 patients. Among 19 variables including LVEF, only a high level of brain natriuretic peptide (BNP) and interleukin-6 (IL-6) at three months after optimized treatment showed significant independent relationships by Cox proportional hazard analysis with a high mortality for patients with CHF. Conclusions: These findings indicate that high plasma BNP and IL-6 levels three months after optimized treatment are independent risk factors for mortality in patients with CHF, suggesting that sustained high plasma levels of BNP and IL-6 after additional standard treatment were independent risk factors for mortality in patients with CHF despite improvements in LVEF and symptoms. abstract_id: PUBMED:10875035 Treatment of hypertension with perindopril reduces plasma atrial natriuretic peptide levels, left ventricular mass, and improves echocardiographic parameters of diastolic function. Background: Hypertension is a major independent risk factor for cardiac deaths, and diastolic dysfunction is a usual finding during the course of this disease. Hypothesis: This study was designed to investigate the effects of chronic therapy with perindopril on left ventricular (LV) mass, left atrial size, diastolic function, and plasma level of atrial natriuretic peptide (ANP) in patients with hypertension. Methods: Twenty four patients who had not been previously taking any antihypertensive medication and without prior history of angina pectoris, myocardial infarction, congestive heart failure, dysrhythmias, valvular heart disease, or systemic illnesses received 4-8 mg/day of perindopril orally. Echocardiographic studies were acquired at baseline and 6 months after the initiation of therapy. Results: Systolic and diastolic blood pressure decreased from 174 +/- 19.7 and 107.5 +/- 7.8 mmHg to 134 +/- 10.6 and 82 +/- 6.7 mmHg, respectively (p &lt; 0.001). Left ventricular mass decreased from 252.4 +/- 8.3 to 205.7 +/- 7.08 g and left atrial volume from 20.4 +/- 5.1 to 17.6 +/- 5.2 ml, respectively (p &lt; 0.001). Transmitral Doppler early and atrial filling velocity ratio (E/A) increased from 0.69 +/- 0.06 to 0.92 +/- 0.05 m/s and plasma ANP level decreased from 71.9 +/- 11.7 to 35.3 +/- 7.8 pg/ml (p &lt; 0.001). Reduction of LV mass correlated positively with a reduction in ANP levels (r = 0.66, p &lt; 0.0005). Conclusions: Perindopril caused a significant reduction of LV mass, left atrial volume, and plasma ANP levels, as well as improvement in Doppler parameters of LV filling in this group of patients with hypertension. abstract_id: PUBMED:23019395 Plasma N-terminal pro-B-type natriuretic peptide is predictive of perioperative cardiac events in patients undergoing vascular surgery. Background/aims: Identification of patients at high risk for perioperative cardiac events (POCE) is clinically important. This study aimed to determine whether preoperative measurement of plasma N-terminal pro-B-type natriuretic peptide (NT-proBNP) could predict POCE, and compared its predictive value with that of conventional cardiac risk factors and stress thallium scans in patients undergoing vascular surgery. Methods: Patients scheduled for non-cardiac vascular surgery were prospectively enrolled. Clinical risk factors were identified, and NT-proBNP levels and stress thallium scans were obtained. POCE was the composite of acute myocardial infarction, congestive heart failure including acute pulmonary edema, and primary cardiac death within 5 days after surgery. A modified Revised Cardiac Risk Index (RCRI) was proposed and compared with NT-proBNP; a positive result for ischemia and a significant perfusion defect (≥ 3 walls, moderate to severely decreased, reversible perfusion defect) on the thallium scan were added to the RCRI. Results: A total of 365 patients (91% males) with a mean age of 67 years had a median NT-proBNP level of 105.1 pg/mL (range of quartile, 50.9 to 301.9). POCE occurred in 49 (13.4%) patients. After adjustment for confounders, an NT-proBNP level of &gt; 302 pg/mL (odds ratio [OR], 5.7; 95% confidence interval [CI], 3.1 to 10.3; p &lt; 0.001) and a high risk by the modified RCRI (OR, 3.9; 95% CI, 1.6 to 9.3; p = 0.002) were independent predictors for POCE. Comparison of the area under the curves for predicting POCE showed no statistical differences between NT-proBNP and RCRI. Conclusions: Preoperative measurement of NT-proBNP provides information useful for prediction of POCE as a single parameter in high-risk patients undergoing noncardiac vascular surgery. abstract_id: PUBMED:15653109 Prognostic value of cardiac troponin T in patients with both acute and chronic stable congestive heart failure: comparison with atrial natriuretic peptide, brain natriuretic peptide and plasma norepinephrine. Background: The prognostic value of cardiac troponin T (cTn-T) in a mixture of patients with both acute and chronic congestive heart failure (CHF), simultaneously assessed and compared with neurohormonal factors, has not yet been thoroughly evaluated. Thus, we focused on the prognostic value of cTn-T in comparison with atrial natriuretic peptide (ANP), brain natriuretic peptide (BNP) and plasma norepinephrine (PNE) in this population. Methods: Prognostic correlates of elevation of cTn-T, ANP, BNP, PNE were analyzed in 63 acute and chronic CHF patients followed up to record worsening CHF and cardiac death. Results: cTn-T (&gt; or =0.03 microg/L) was found in 17.4% (11 of 63) of patients. cTn-T correlated with ANP, BNP, PNE. Acute CHF patients were more positive for cTn-T and BNP. In our cohort, neither cTn-T (&gt; or =0.03 microg/L) nor PNE were associated with increased mortality and worsening HF in CHF patients. After adjustment, BNP was the only independent predictor of cardiac events (RR, 3.23; p=0.01). Conclusions: BNP emerged as the only independent predictor of cardiac events in a mixture of patients with both acute and chronic CHF, suggesting that it is the analyte that best reflects long-term prognosis in a diverse population enrolled to mirror the "real world" situation. Answer: Yes, plasma endothelin in congestive heart failure appears to be a predictor of cardiac death. Studies have shown that patients with congestive heart failure (CHF) who have increased plasma endothelin concentrations have a higher risk of cardiac mortality. For instance, one study found that cardiac mortality was 27% in patients with plasma endothelin concentrations above 3 pg/mL, compared to 0% in patients with concentrations below this threshold (PUBMED:8798108). Another study reported that patients with plasma endothelin-1 levels greater than or equal to 5 pg/mL had a higher mortality rate than those with levels below 5 pg/mL (21% vs 4%, P = 0.001), suggesting that elevated endothelin-1 plasma levels are associated with a poor prognosis in CHF (PUBMED:9043842). These findings indicate that plasma endothelin can serve as an important prognostic marker in patients with CHF, aiding in the assessment of the risk of cardiac death.
Instruction: Does brain volume loss predict cognitive and narrative discourse performance following traumatic brain injury? Abstracts: abstract_id: PUBMED:24686463 Does brain volume loss predict cognitive and narrative discourse performance following traumatic brain injury? Purpose: In this study, the authors investigated the relationship between brain volume loss and performance on cognitive measures, including working memory, immediate memory, executive functions, and intelligence, and a narrative discourse production task. An underlying goal was to examine the prognostic potential of a brain lesion metric for discourse outcomes. It was hypothesized that brain volume loss would correlate with and predict cognitive and narrative discourse measures and have prognostic value for discourse outcomes. Method: One hundred sixty-seven individuals with penetrating head injury participated. Correlational and regression analyses were performed for the percentages of total brain and hemispheric volume loss and scores on 4 cognitive measures (WMS-III Working Memory and Immediate Memory primary indexes, D-KEFS Sorting Test, and WAIS-III Full Scale IQ) and 7 narrative discourse measures (T-units, grammatical complexity, cohesion, local and global coherence, story completeness, and story grammar). Results: The volumetric measures had significant small-to-moderate correlations with all cognitive measures but only one significant correlation with the discourse measures. Findings from regression analyses were analogous but revealed several models that approached significance. Conclusion: Findings suggest that an overall measure of brain damage may be more predictive of general cognitive status than of narrative discourse ability. Atrophy measures in specific brain regions may be more informative. abstract_id: PUBMED:31661629 Patterns of narrative discourse in early recovery following severe Traumatic Brain Injury. Primary Objective: To investigate the nature and patterns of narrative discourse impairment in people with severe Traumatic Brain Injury (TBI) during early recovery.Methods and Procedures: A single image picture description task was administered to 42 participants with severe TBI at 3 and 6-months post-injury. The same task was administered to 37 control participants. Discourse samples were analyzed with measures of productivity, informativeness and story organization. The performance of people with TBI was compared with the control group at both 3 and 6 months, and the performance of the participants with TBI was also compared across the two time points. Individual patterns of performance were also examined.Results: Inferential analyses revealed significant differences between the control group and the group with TBI on informativeness at both time points and number of complete episodes at 3 months, but no significant differences for productivity measures. There was no significant change for the group with TBI between 3 and 6 months. However, individual improvement over time was observed.Conclusions: People with TBI have discourse difficulties early post TBI that are also present at 6-months post-injury. In order to understand longer-term discourse recovery, it is necessary to examine participant patterns over further time points on this narrative task. abstract_id: PUBMED:30176301 Changes in discourse structure over time following traumatic brain injury. The objective of the present study was to investigate structural changes in the narrative discourse of individuals with penetrating traumatic brain injury (pTBI) following immediate and delayed story retellings. Additionally, the potential influence of immediate memory, working memory, and executive functions on narrative discourse performance were examined. The narrative discourse of two groups, 123 with pTBI and 44 non-brain injured (NBI), was sampled. Participants were asked to retell a wordless picture story immediately after viewing it and again 30-min later. Story narratives were analyzed using a variety of microlinguistic and macrostructural measures. Results revealed significant group differences of both microlinguistic and macrostructural measures following the immediate retell, but not following the delayed retell. Regression analyses revealed that immediate memory accounted for a modest degree of the explained variance for the production of critical content during narrative discourse. The observed deficits were relatively stable over a short delay (30 min) and appeared attributable, in part, to difficulty with the encoding and consolidation of story content. abstract_id: PUBMED:36377239 Telehealth administration of narrative and procedural discourse: A UK and US comparison of traumatic brain injury and matched controls. Background: Impaired discourse production is commonly reported for individuals with traumatic brain injury (TBI). Discourse deficits can negatively impact community integration, return to employment and quality of life. COVID-19 restrictions have reduced in-person assessment services for people with communication impairments. Advances in telehealth may help speech and language therapists (SLTs) to assess monologic discourse more systematically and improve access to services for patients who may find it difficult to attend in-person. Aims: To examine the feasibility of telehealth administration of narrative and procedural discourse tasks with individuals with TBI and matched controls. Methods & Procedures: A total of 20 individuals with TBI and 20 healthy controls, aged 18-55 years, were directly recruited from the UK and indirectly recruited from the US. For participants with TBI, time post-injury was at least 3 months with no diagnosis of aphasia. Control participants were matched for sex and as closely as possible for age. Feasibility of measures was based upon the time to administer both narrative tasks, the report of any technological problems, and participant feed. Discourse samples were transcribed verbatim and analysed using story grammar analysis (for narrative discourse) and identification of propositions (for procedural discourse). Interrater reliability was calculated using percentage agreement for 50% of the data. Non-parametric analyses were used to analyse the performance of the two groups. Outcomes & Results: Narrative and procedural discourse samples were collected via telehealth in approximately 10 min with no reported technical difficulties or complaints from any participants. For narrative discourse performance, there were significant differences for the TBI and control groups for measures of complete episodes (p &lt; 0.001) and missing episodes (p = 0.005). No significant group differences were noted for any of the procedural discourse measures. Conclusions & Implications: Results support the feasibility of collecting discourse samples via telehealth. Although the participants' discourse performance distinguished the TBI and control groups on the narrative task, no differences between the groups were noted for the procedural task. The narrative discourse task may have been more difficult than the procedural task, or video cue support reduced the cognitive load of the procedural task. This finding suggests the use of more complex procedural tasks without video cue support may be needed. What This Paper Adds: What is already known on this subject Although little research has explored the feasibility of administering discourse assessments for individuals with TBI via telehealth, some studies have found that discourse interventions can be feasibly administered via telehealth. It is also well established that individuals with TBI struggle with the supra-structural and macro-linguistic elements of discourse production. Both procedural and narrative discourse tasks have been found to differentiate individuals with TBI from healthy controls. What this paper adds to existing knowledge Few studies have investigated the feasibility of, and procedures for, administering discourse tasks via telehealth. Additionally, the inclusion of multiple types of discourse tasks to parse cognitive-communication abilities is lacking in the current literature. Findings from this study support that narrative and procedural discourse can be feasibly sampled via telehealth and that international collaboration for research on this topic can facilitate such studies. Individuals with TBI performed more poorly on three measures of narrative discourse. No differences between groups were identified for the procedural task. What are the potential or actual clinical implications of this work? Telehealth assessment for discourse provides flexibility for both the individual with TBI and the speech-language therapist and does not compromise the quality of data collected. The administration of discourse tasks and collection of data was not time-consuming and was well accepted by the study participants. Additionally, international research collaboration not only expands potential participation in research but increases the opportunity to recruit and study more diverse groups. abstract_id: PUBMED:29781173 Discourse and cognition in speakers with acquired brain injury (ABI): a systematic review. Background: Cognitive impairment, particularly of executive functioning, has been implicated in deficits in spoken discourse production following acquired brain injury (ABI). However, due to variation in the methodologies and heterogeneity of findings across studies, the nature and extent of this association is not well understood. Aims: This review aims to synthesize the literature investigating associations between cognitive deficits and discourse impairment after ABI. It is reported in accordance with guidelines of The Preferred Reporting Items of Systematic Reviews and Meta-Analyses. Data Sources: Searches were conducted of a variety of databases including Medline, PsycINFO, EMBASE, CINAHL, ProQuest, Cochrane and ERIC. Additional studies were identified via reference harvesting. Eligibility Criteria: Studies were included if they reported data on participants with ABI, assessed spoken discourse production and cognitive function, and performed statistical analyses to determine the association between discourse and cognitive variables. This review excluded non-English sources and those not published in peer-reviewed journals. Meta-analyses were not conducted due to variability across tools and terminology used to describe participant injury- and non-injury-related characteristics and outcomes. Results: Twenty-five observational studies were included in the review. Findings revealed significant associations between multiple cognitive functions and discourse across micro-linguistic to super-structural measures. Methodological and terminological inconsistencies were identified across studies, which limited systematic comparison of the results. Conclusions: This review revealed present, yet heterogeneous, relationships between cognitive and discourse deficits in speakers with ABI. Associations were interpreted in light of a well-established model of discourse processing. Greater comparison across studies would have been facilitated by a standard nomenclature in relation to cognitive constructs and standardized discourse assessment. Future research should explore the influence of injury- and participant-related factors on discourse-cognitive relationships. The lack of information on conversational discourse and paediatric ABI limits the generalizability of this review to daily interaction following ABI. While applicable across the lifespan, in-depth investigation of discourse following ABI in childhood and adolescence is a priority due to complex changes in language and cognition, and the potential for impairments profoundly impact social, emotional and academic progression into adulthood. Given the centrality of remediating cognitive-communication difficulties in ABI, the interrelationships between discourse and cognition should retain a critical focus of research. This will inform clinical management and future research with this population. Findings have implications for our theoretical understanding of discourse and the nature of its breakdown in ABI. abstract_id: PUBMED:37257416 Contribution of Working Memory and Inferencing to Narrative Discourse Comprehension and Production in Traumatic Brain Injury. Purpose: The goal of this study was to identify some potential key cognitive and communicative processes underlying narrative discourse ability following traumatic brain injury (TBI). Specifically, this study (a) investigated the contribution of working memory (WM) and inferencing to narrative discourse comprehension and production; (b) tested key assumptions posited by the Structure Building Framework (SBF), a discourse model; and (c) evaluated the potential for inferencing to contribute to discourse ability beyond a shared variance with WM. Method: Twenty-one individuals with TBI completed six tasks yielding seven measures: verbal and nonverbal WM updating (WMU-V and WMU-NV, respectively), predictive inferencing, the Discourse Comprehension Test (DCT), a picture story comprehension (PSC) task, and story retelling (story grammar and story completeness). Regression analyses were performed using WM and inferencing as predictors for narrative performance. Results: WM measures were significant predictors of DCT performance and approached significance as predictors of PSC. Inferencing approached significance as a unique predictor for the DCT and story completeness. WMU-V and WMU-NV were highly collinear, and neither WM measure predicted discourse outcomes over and above the other's contribution. Conclusions: WM was more strongly associated with comprehension processes, whereas inferencing may be associated with both comprehension and production outcomes. Findings were interpreted as supporting SBF assumptions of domain generality of cognitive processes and mechanisms involved in discourse while also challenging assumptions that the same cognitive substrates are marshaled for comprehension and production processes. Supplemental Material: https://doi.org/10.23641/asha.23148647. abstract_id: PUBMED:25281884 Cognitive correlates of narrative impairment in moderate traumatic brain injury. Traumatic brain injuries (TBIs) are often associated with communicative deficits. The incoherent and impoverished language observed in non-aphasic individuals with severe TBI has been linked to a problem in the global organization of information at the text level. The present study aimed to analyze the features of narrative discourse impairment in a group of adults with moderate TBI (modTBI). 10 non-aphasic speakers with modTBI and 20 neurologically intact participants were recruited for the experiment. Their cognitive, linguistic and narrative skills were thoroughly assessed. The persons with modTBI exhibited normal phonological, lexical and grammatical skills. However, their narratives were characterized by lower levels of Lexical Informativeness and more errors of both Local and Global Coherence that, at times, made their narratives vague and ambiguous. Significant correlations were found between these narrative difficulties and the production of both perseverative and non-perseverative errors on the WCST. These disturbances confirm previous findings which suggest a deficit at the interface between cognitive and linguistic processing rather than a specific linguistic disturbance in these patients. abstract_id: PUBMED:7929876 Developmental changes in narrative and non-narrative discourse in children with and without brain injury. This study presents a set of narrative and non-narrative tasks and analytic procedures for examining the discourse development of children with perinatal brain injury and typically developing children. Three oral discourse genres were collected at ages 5, 6, and 7: script, picture description, and replica play narration. Genre performances were assessed for the presence of hypothesized genre features. Results suggest these tasks and procedures are able to characterize development in discourse abilities for both a normative group and for children with perinatal brain injury. The group of children with brain injury produced shorter discourse performance with more off-task talk. This group also showed difficulty in fully differentiating the various genre types and in creating integrated discourse performances. However, most of these children demonstrated considerable growth in control of genre features over this time period. The possible utility of these tasks and procedures for clinical assessment is discussed. abstract_id: PUBMED:30978116 Long term effects of early childhood traumatic brain injury on narrative discourse gist and psychosocial functioning. Purpose: To examine the long-term consequences of early childhood traumatic brain injury (TBI), approximately 6.8 years post-injury, on cognitive communication competency within a complex interplay of individual, contextual, and psychosocial factors. Additionally, we were interested in looking at the association of communication competence with social competence and overall functioning.Methods: 64 children with orthopedic injury, 40 children with moderate traumatic brain injury, and 14 children with severe traumatic brain injury who were between 3 years and 6 years 11 months at injury completed a narrative discourse task and clinical measures in a single visit at a longer term follow up in early adolescence, an average of 6.8 years post injury. Analyses of covariance were conducted to compare groups on the discourse task, and hierarchical linear regressions were conducted to examine the association of discourse measures with clinical measures of cognitive and psychosocial functioning.Results: Children with severe traumatic brain injury performed worse than children with moderate traumatic brain injury and orthopedic injury on all discourse indices and clinical measures. Injury severity, pragmatic skills, and socioeconomic status were associated with discourse gist. Discourse gist was the most sensitive measure of communication competence, and it was significantly associated with psychosocial outcomes independent of group.Conclusion: Children who sustain a severe traumatic brain injury in early childhood are at risk for long-term cognitive communication impairments that may be related to a complex interplay of injury, individual, and social factors.Implications for rehabilitationThe assessment and treatment of cognitive communication impairments after traumatic brain injury are complex and require consideration of individual, contextual, and psychosocial factors.Discourse, in general, and gist reasoning specifically, is a sensitive measure for detecting cognitive communication competence several years after pediatric traumatic brain injury.Treatments to address gist reasoning deficits may support improvements in global and social functioning. abstract_id: PUBMED:35125058 The MAKRO Screening - an assessment tool for discourse deficits in adults with dysexecutive symptoms following TBI. Introduction: Discourse impairments are common sequelae following TBI. Even though handling discourse is thought to be a basic requirement for social participation and quality of life, few test procedures to assess discourse disorders have been developed so far. Objective: The main aim of this prospective cohort study was to evaluate the use of the MAKRO Screening for detecting deficits in discourse production and reception in a group of participants with TBI in the post-acute and chronic phase and their relation to executive functions (EF) and severity of brain injury. Method: Twenty individuals with TBI and a control group of healthy speakers performed on the MAKRO and on tests of EF (Regensburger Verbal Fluency Test; Tower of London; WAIS-IV digit span index). Group performance was evaluated on the basis of a scoring system and qualitative discourse analysis with focus on main concepts and coherence. Further, MAKRO scores were correlated with measures of EF. Outcomes: Individuals with TBI demonstrated significantly poorer performance within all MAKRO subtests. Discourse analysis revealed fewer main concepts and more frequent use of thematically inappropriate utterances. Performance can be partly explained by severity of initial injury and executive disorders. MAKRO presents a reliable and functional measure for discourse impairments. Answer: Brain volume loss has been investigated as a potential predictor of cognitive and narrative discourse performance following traumatic brain injury (TBI). The study by PUBMED:24686463 found that brain volume loss had significant small-to-moderate correlations with all cognitive measures but only one significant correlation with the discourse measures. The findings suggest that an overall measure of brain damage may be more predictive of general cognitive status than of narrative discourse ability, indicating that atrophy measures in specific brain regions may be more informative for predicting discourse outcomes. This implies that while brain volume loss does have some predictive value for cognitive performance, its prognostic value for narrative discourse ability is limited. Other studies have explored the relationship between cognitive functions and discourse impairment after TBI. For example, PUBMED:29781173 found significant associations between multiple cognitive functions and discourse across micro-linguistic to super-structural measures, suggesting a complex relationship between cognitive deficits and discourse impairment. PUBMED:37257416 specifically investigated the contribution of working memory (WM) and inferencing to narrative discourse comprehension and production, finding that WM was more strongly associated with comprehension processes, whereas inferencing may be associated with both comprehension and production outcomes. In summary, while brain volume loss does correlate with cognitive performance following TBI, its predictive value for narrative discourse performance is less clear and may be overshadowed by the influence of specific cognitive functions such as working memory and inferencing. Therefore, brain volume loss alone may not be a sufficient predictor of narrative discourse performance following TBI.
Instruction: Does the Clinical Context Improve the Reliability of Rheumatologists Grading Digital Ulcers in Systemic Sclerosis? Abstracts: abstract_id: PUBMED:26748825 Does the Clinical Context Improve the Reliability of Rheumatologists Grading Digital Ulcers in Systemic Sclerosis? Objective: Digital ulcers (DUs) are often a primary end point in systemic sclerosis (SSc; scleroderma) clinical trials, although the reliability of rheumatologists grading DUs is poor to moderate at best. DU assessment in recent trials has been based upon visual inspection alone, which potentially misses "real-world" clinical contextual information. Our aim was to investigate whether this clinical information improves the reliability of rheumatologists grading DUs. A secondary aim was to assess agreement between patients and rheumatologists. Methods: Eighty images of a range of digital lesions were collected from patients with SSc with the clinical context: pain (severity and temporal relationship), lesion duration, and discharge (patient reported and clinician observed). Raters received all images either with or without the clinical context, and graded these images (using a custom-built interface) on an ordinal scale of severity: no ulcer, inactive ulcer, or active ulcer. Patients also graded their lesion(s) on the same scale. Results: Fifty-one rheumatologists from 15 countries completed the study (26 without and 25 with context): 4,590 (including 510 repeated) image gradings were obtained. Context did not significantly increase (without and with context) either intra- (0.64, 0.71) or interrater (0.32, 0.36) reliability. Pain (visual analog scale and temporal relationship) and discharge (patient reported and clinician observed) were associated with increased lesion severity, and duration with reduced severity. Agreement between individual patients and rheumatologists was poor without and with context (0.19, 0.28). Conclusion: The overall intra- and interrater reliability of DU grading did not significantly improve with the clinical context. Agreement between patients and rheumatologists was poor. abstract_id: PUBMED:29876526 Reliability of digital ulcer definitions as proposed by the UK Scleroderma Study Group: A challenge for clinical trial design. Introduction: The reliability of clinician grading of systemic sclerosis-related digital ulcers has been reported to be poor to moderate at best, which has important implications for clinical trial design. The aim of this study was to examine the reliability of new proposed UK Scleroderma Study Group digital ulcer definitions among UK clinicians with an interest in systemic sclerosis. Methods: Raters graded (through a custom-built interface) 90 images (80 unique and 10 repeat) of a range of digital lesions collected from patients with systemic sclerosis. Lesions were graded on an ordinal scale of severity: 'no ulcer', 'healed ulcer' or 'digital ulcer'. Results: A total of 23 clinicians - 18 rheumatologists, 3 dermatologists, 1 hand surgeon and 1 specialist rheumatology nurse - completed the study. A total of 2070 (1840 unique + 230 repeat) image gradings were obtained. For intra-rater reliability, across all images, the overall weighted kappa coefficient was high (0.71) and was moderate (0.55) when averaged across individual raters. Overall inter-rater reliability was poor (0.15). Conclusion: Although our proposed digital ulcer definitions had high intra-rater reliability, the overall inter-rater reliability was poor. Our study highlights the challenges of digital ulcer assessment by clinicians with an interest in systemic sclerosis and provides a number of useful insights for future clinical trial design. Further research is warranted to improve the reliability of digital ulcer definition/rating as an outcome measure in clinical trials, including examining the role for objective measurement techniques, and the development of digital ulcer patient-reported outcome measures. abstract_id: PUBMED:32959188 Anti-phospholipid syndrome leading to digital ischaemia and rare organ complications in systemic sclerosis and related disorders. Antiphospholipid syndrome (APS) is an acquired, autoimmune thrombophilia that can occur as a primary disorder (with no associated disease) or secondary to infection, medication usage and autoimmune rheumatic diseases (ARDs). The association between APS and systemic lupus erythematosus (SLE) is well established, and practicing rheumatologists check for APS antibodies in the routine assessment of SLE, particularly if clinical features such as thrombotic events or pregnancy loss are present. APS secondary to systemic sclerosis (SSc)-related disorders is less widely recognised and easily overlooked. We describe 5 cases that highlight the varied breadth of clinical manifestations of APS in the context of SSc and related disorders. These cases range from uncomplicated Raynaud's phenomenon, digital ulceration/necrosis, critical digital ischaemia/gangrene and rare internal organ complications of APS in SSc-spectrum disorders. To our knowledge, our cases include the first reported case of secondary APS contributing to digital necrosis in the context of RACAND syndrome (Raynaud's phenomenon, anti-centromere antibodies and necrosis of the digits) and the first reported case of secondary APS in SSc causing posterior reversible encephalopathy syndrome (PRES). The case series is accompanied by a comprehensive review of the literature relevant to each case. Rheumatologists should be alert to the possibility of APS in SSc-spectrum disorders and should routinely check APS antibodies in all patients at diagnosis, and again later in the disease course if new features emerge that could indicate the presence of thrombotic events or other recognised APS manifestations. Key points • APS should be considered in all patients with digital ischaemic symptoms. • APS may be an important driver of SSc-related digital ulceration/necrosis. • Identification of SSc-associated APS opens up new therapeutic options for acute management and secondary prevention. abstract_id: PUBMED:19248100 Lack of agreement between rheumatologists in defining digital ulceration in systemic sclerosis. Objective: To test the intra- and interobserver variability, among clinicians with an interest in systemic sclerosis (SSc), in defining digital ulcers. Methods: Thirty-five images of finger lesions, incorporating a wide range of abnormalities at different sites, were duplicated, yielding a data set of 70 images. Physicians with an interest in SSc were invited to take part in the Web-based study, which involved looking through the images in a random sequence. The sequence differed for individual participants and prevented cross-checking with previous images. Participants were asked to grade each image as depicting "ulcer" or "no ulcer," and if "ulcer," then either "inactive" or "active." Images of a range of exemplar lesions were available for reference purposes while participants viewed the test images. Intrarater reliability was assessed using a weighted kappa coefficient with quadratic weights. Interrater reliability was estimated using a multirater weighted kappa coefficient. Results: Fifty individuals (most of them rheumatologists) from 15 countries participated in the study. There was a high level of intrarater reliability, with a mean weighted kappa value of 0.81 (95% confidence interval [95% CI] 0.77, 0.84). Interrater reliability was poorer (weighted kappa = 0.46 [95% CI 0.35, 0.57]). Conclusion: The poor interrater reliability suggests that if digital ulceration is to be used as an end point in multicenter clinical trials of SSc, then strict definitions must be developed. The present investigation also demonstrates the feasibility of Web-based studies, for which large numbers of participants can be recruited over a short time frame. abstract_id: PUBMED:27094599 Digital ulcers in systemic sclerosis. Digital ulcers (DUs) are a common visible manifestation of the progressive vascular disease that characterizes the SSc disease process. DUs not only impact significantly on patients' quality of life and hand function, but are also a biomarker of internal organ involvement and of disease severity. The aetiology of (digital) vascular disease in SSc is multifactorial, and many of these factors are potentially amenable to therapeutic intervention. The management of DU disease in SSc is multifaceted. Patient education and non-pharmacological interventions (e.g. smoking cessation) should not be neglected. There are a number of drug therapies available to prevent (e.g. phosphodiesterase type-5 inhibitors and ET receptor-1 antagonists) and treat (e.g. i.v. iloprost) DUs. DUs are also important for two other reasons: firstly, as a primary end point in SSc-related clinical trials; and secondly, DUs are included in the ACR/EULAR SSc classification criteria. However, the reliability of rheumatologists to grade DUs is poor to moderate at best, and this poses challenges in both clinical practice and research. The purpose of this review is to provide the reader with a description of the spectrum of DU disease in SSc including pathophysiology, epidemiology and clinical burden, all of which inform the multifaceted approach to management. abstract_id: PUBMED:31955323 Digital ulcer debridement in systemic sclerosis: a systematic literature review. Optimal wound care is an essential component in the management of systemic sclerosis (SSc) digital ulcers (DUs). DU debridement has been suggested to reduce ulcer-related pain and improve tissue healing. However, only a minority of rheumatologists perform DU debridement, and there is no standard of care/protocol. Our objectives were to (i) evaluate the current evidence for the use of debridement in DU management and (ii) assess whether there are any specific protocols. A systematic literature review was performed searching the PubMed database (between 01/01/1950-01/03/2019) in accordance with PRISMA guidelines. Two independent reviewers screened and extracted the abstracts/full manuscripts. Articles in English, which focussed on SSc-DU debridement/curettage, were included. Exclusion criteria included studies of juvenile/paediatric patients and basic/non-clinical research. Our search identified 1497 studies of which 4 studies were included in our final analysis. Three studies used scalpel debridement, and one study used this in combination with autolytic debridement. No studies specifically reported the effect on DU healing from debridement. Autolytic debridement with hyaluronate-based products was associated with significant ulcer pain and inflammation. Local anaesthetic significantly reduces pain both during and after debridement. Combined local and oral analgesia is often required for more severe or infected DUs. DU (scalpel and autolytic) debridement is being used by some clinicians in rheumatology; however, there are no standardised protocols. To improve wound care for SSc-DUs, future research should focus on developing a standardised protocol for SSc-DU debridement, with a view to facilitate randomised controlled trials to demonstrate safety and treatment efficacy.Key Points• Optimal wound care is an essential component in the management of systemic sclerosis-digital ulcers.• 'Sharp' debridement uses a scalpel, whereas 'autolytic' debridement uses dressings to optimize endogenous tissue lysis.• There is significant variation in the use of digital ulcer debridement in systemic sclerosis.• A standardized protocol and randomized controlled trials are needed to demonstrate debridement the safety and efficacy of digital ulcer debridement in systemic sclerosis. abstract_id: PUBMED:28575546 Quantifying Digital Ulcers in Systemic Sclerosis: Reliability of Computer-Assisted Planimetry in Measuring Lesion Size. Objective: Digital ulcers are a major problem in patients with systemic sclerosis (SSc), causing severe pain and impairment of hand function. In addition, digital ulcers heal slowly and sometimes become infected, which can lead to gangrene and necessitate amputation if appropriate intervention is not taken. A reliable, objective method for assessing digital ulcer healing or progression is needed in both the clinical and research arenas. This study was undertaken to compare 2 computer-assisted planimetry methods of measurement of digital ulcer area on photographs (ellipse and freehand regions of interest [ROIs]), and to assess the reliability of photographic calibration and the 2 methods of area measurement. Methods: Photographs were taken of 107 digital ulcers in 36 patients with SSc spectrum disease. Three raters assessed the photographs. Custom software allowed raters to calibrate photograph dimensions and draw ellipse or freehand ROIs. The shapes and dimensions of the ROIs were saved for further analysis. Results: Calibration (by a single rater performing 5 repeats per image) produced an intraclass correlation coefficient (intrarater reliability) of 0.99. The mean ± SD areas of digital ulcers assessed using ellipse and freehand ROIs were 18.7 ± 20.2 mm2 and 17.6 ± 19.3 mm2 , respectively. Intrarater and interrater reliability of the ellipse ROI were 0.97 and 0.77, respectively. For the freehand ROI, the intrarater and interrater reliability were 0.98 and 0.76, respectively. Conclusion: Our findings indicate that computer-assisted planimetry methods applied to SSc-related digital ulcers can be extremely reliable. Further work is needed to move toward applying these methods as outcome measures for clinical trials and in clinical settings. abstract_id: PUBMED:32508995 Dynamics of digital ulcers in systemic sclerosis. Systemic sclerosis (SSc) is a collagenosis with insufficiently known etiopathogenesis, characterized by microvasculopathy and excessive fibrosis in the context of an autoimmune disorder. The incompletely elucidated pathogenesis and limited therapeutic options, disabling aspects, skin lesions and pain determine important functional and psychological deficiencies which affect the quality of life. It is imperative to observe and correlate individual clinical and paraclinical data to optimize disease management. A group of 22 patients diagnosed with SSc, hospitalized in a university clinic in Bucharest was included in an observational study. The evolution of digital ulcers was evaluated as an indicator of vasculopathy and their status and dynamics were correlated with clinical elements reflecting the fibrotic aspect of the disease. The present study shows that the Raynaud phenomenon is almost always present during the course of the disease, but its presence is not always associated with digital ulcers. The existing data in the literature show that fibrosis is subsequent to vasculopathy, but this study did not reveal causality between these two aspects of pathogenesis. The presence of microstomia and digital contracture was identified in the presence of digital ulcers, but also in their absence. The etiopathogenic mechanisms with multiple unknown involved factors open the opportunity to investigate many aspects of SSc for optimal aiming of therapeutic interventions. abstract_id: PUBMED:22121375 PDE-5 Inhibitors in Scleroderma Raynaud Phenomenon and Digital Ulcers: Current Status of Clinical Trials. Systemic sclerosis- (SSc-) related vasculopathy, as manifested by Raynaud's Phenomenon (RP) and digital ulcers (DUs), is associated with significant impairment of the quality of life and morbidity. The current vasoactive approach for SSc-RP, although employing vasodilators, is entirely off-label. PDE-5 inhibitors improve peripheral circulation, are well tolerated, and are widely used for various forms of constrictive vasculopathies. This class of medications has become one of the first lines of treatment of SSc-RP and SSc-DUs among rheumatologists that routinely treat SSc patients. Due to the lack of robust randomized clinical trials of PDE-5 inhibitors in SSc-RP/DUs, the PDE-5 inhibitors have not been FDA approved for these particular indications, which constitutes a significant barrier to prescribing this category of drugs. This paper reviews the current state of evidence-based knowledge in SSc-related vasculopathy and the use of PDE-5 inhibitors. abstract_id: PUBMED:29857890 Should all digital ulcers be included in future clinical trials of systemic sclerosis-related digital vasculopathy? Digital ulcers (DU) are a common manifestation of systemic sclerosis (SSc) and occur at a variety of locations including the fingertips and over the extensor aspects of the hands. However, most recent clinical trials have included only fingertip DUs as these are believed to be ischaemic in aetiology, and therefore likely to benefit from treatment with vasoactive drug therapies. There is an emerging evidence base to suggest that all DUs in SSc could share an ischaemic component which is potentially responsive to vascular therapy. Our hypothesis is that is that DUs occurring at sites other than the fingertips, in particular, those overlying the extensor aspect of the hands, may also have a potentially reversible ischaemic component. We review the evidence under the headings: 'microvascular imaging', 'structural microvascular' and, 'functional vascular disease', 'macrovascular involvement' and 'vascular associates'. Based upon the current evidence, we would encourage the expert SSc community to reconsider the rationale for including only fingertip DUs in future SSc clinical trials, and suggest an agenda for future research. Answer: No, the clinical context does not significantly improve the reliability of rheumatologists grading digital ulcers (DUs) in systemic sclerosis (SSc). A study aimed to investigate whether the inclusion of clinical information such as pain severity, lesion duration, and discharge would improve the reliability of rheumatologists in grading DUs. However, the results showed that the overall intra- and interrater reliability of DU grading did not significantly improve with the clinical context. Additionally, the agreement between individual patients and rheumatologists was poor, both without and with the inclusion of clinical context (PUBMED:26748825).
Instruction: Management of non-parasitic splenic cysts: does size really matter? Abstracts: abstract_id: PUBMED:24871081 Management of non-parasitic splenic cysts: does size really matter? Background: Splenic cysts are relatively rare clinical entities and are often diagnosed incidentally upon imaging conducted for a variety of clinical complaints. They can be categorized as primary or secondary based on the presence or absence of an epithelial lining. Primary cysts are further subdivided into those that are and are not secondary to parasitic infection. The treatment of non-parasitic splenic cysts (NPSC) has historically been dictated by two primary factors: the presence of symptoms attributable to the cyst and cyst size greater or less than 5 cm. While it is appropriate to resect a symptomatic lesion, the premise of recommending operative intervention based on size is not firmly supported by the literature. Methods: In the current study, we identified 115 patients with splenic cysts and retrospectively reviewed their management that included aspiration, resection, or observation. Results: Our data reveal a negative overall growth rate of asymptomatic cysts, a high recurrence rate after percutaneous drainage, as well as demonstrate the safety of observing asymptomatic lesions over time. Conclusion: We conclude that observation of asymptomatic splenic cysts is safe regardless of size and that aspiration should be reserved for those who are not surgical candidates or in cases of diagnostic uncertainty. abstract_id: PUBMED:38106766 Watchful Waiting Phase As Window of Opportunities for Non-pharmaceutical and Non-surgical Management of a Small Splenic Cyst Complicated With Thrombocytopenia. An asymptomatic splenic cyst smaller than 50 mm was detected incidentally at a routine health checkup. Management of the cyst, affected and determined by multiple factors, including force majeure, became difficult and thrombocytopenia developed during watchful waiting. Spontaneous recovery of the spleen did not occur with continued watchful waiting, and thrombocytopenia worsened. However, when a three-month dietary intervention was subsequently implemented, the initiation of recovery was observed. The diet modification was adding to regular meals a daily serving of vegetables prepared following traditional Chinese culinary style. A second course of dietary intervention was undertaken, and accelerated recovery was detected thereafter, with eventual complete resolution of the splenic cyst and thrombocytopenia. This case demonstrates the feasibility and potential benefits of lifestyle intervention for the management of small splenic cysts, including those complicated with thrombocytopenia. Lifestyle intervention, such as dietary intervention, is particularly suitable for the watchful waiting phase since disease management during this time is non-pharmaceutical and non-surgical by nature. abstract_id: PUBMED:27167310 Spleen preserving management of a non-parasitic splenic cyst in pregnancy. Non-parasitic splenic cysts are rare entities. In pregnancy, they are rarer still, with as few as seven cases reported in the literature. There is little consensus regarding the optimal management of this condition in pregnancy. Although small, the theoretical risk of intrapartum splenic rupture is associated with a fetal mortality rate as high as 70%. The authors of at least three case reports advocate total splenectomy as first-line management of splenic cyst in pregnancy. Paradoxically, spleen conserving surgery is the recognised gold standard treatment for symptomatic splenic cysts in non-pregnant patients. We present a case of a large maternal splenic cyst that was treated successfully with a laparoscopic cystectomy. abstract_id: PUBMED:33661055 Two cases of non-parasitic splenic cysts and spleen-preserving therapy. We review two different presentations of non-parasitic splenic cysts, both of which are post-traumatic in aetiology. The first case was of slower onset and was managed electively. The second case was of acute onset and was managed as an emergency. Non-parasitic splenic cysts are uncommon and the optimal management strategy for them is not well defined. Historically, treatment has been with open splenectomy; however, infection rates following this surgery have been high, making it an unattractive management option. Both cases were managed successfully with laparoscopic fenestration with no recurrence at subsequent follow-up. abstract_id: PUBMED:12633039 Laparoscopic management of non-parasitic splenic cysts. Non-parasitic cysts are an infrequent disease of the spleen. Most likely congenital in their origin, epithelial splenic cysts require surgical treatment if enlarged or symptomatic. The aim of this study was to assess the impact of the laparoscopic technique in the management of splenomegaly due to cystic disease. From July 2001 to September 2002 4 patients underwent laparoscopic splenectomy in our department for symptomatic enlarged non-parasitic splenic cysts. Cystic lesions were evaluated by ultrasonography, CT scan and magnetic resonance of the abdomen. The choice of total laparoscopic splenectomy according to the "hanged spleen" procedure was conditioned by the technical difficulty of performing conservative treatment due to the site of the upper pole of the enlarged lesions, adherent to the diaphragm in 3 cases, and by the presence of multiple trabeculated splenic cysts in 1 case. Laparoscopy was always completed and all patients obtained relief of their symptoms. When conservative treatment such as partial splenectomy or decapsulation may jeopardize the good outcome of the operation, the management of nonparasitic splenic cysts can be successfully achieved by total laparoscopic splenectomy. abstract_id: PUBMED:23980875 Laparoscopic stapled excision of non-parasitic splenic cysts. Background: A laparoscopic spleen preserving surgical approach is preferred for the management of symptomatic non-parasitic splenic cysts. The aim of this study was to review our experience with managing this rare presentation. Methods: A retrospective review of all cases of splenic cysts was performed over a 10-year period (2001-2011). Demographic data, clinical history, investigations, operative details and the outcome of each case were reviewed with an emphasis on patients who underwent laparoscopic stapled cyst excision. Results: Eleven cases were identified. Seven patients were managed surgically; six by laparoscopic stapled cyst excision and one by open excision of remnant splenic tissue. Laparoscopic management was successful in all six cases and radiological and clinical follow-up (median: 28 months) revealed no evidence of cyst recurrence in five of six cases. One patient developed an asymptomatic, non-progressing and small recurrent anterior cyst and she continues to be observed. Conclusion: Laparoscopic stapled splenic cyst excision can be performed safely and is particularly effective for large superficial non-parasitic cysts. This technique allows spleen preservation with a low cyst recurrence rate. However, it may not be suitable for deeper intraparenchymal splenic cysts. Further studies are required to refine the management of specific subtypes of non-parasitic splenic cysts. abstract_id: PUBMED:9303637 Non-parasitic splenic cysts Nonparasitic cysts of the spleen are uncommon and often result from blunt abdominal trauma. Nonsurgical management of blunt splenic injuries increases the number of observations of the post-traumatic cysts. Complications (infection, rupture and hemorrhage) are lifethreatening, difficult to diagnose and require urgent surgical management. Until recently, splenectomy has been the primary choice of treatment of these cysts. Small (&lt; 4 cm) asymptomatic post-traumatic pseudocysts stand a reasonable chance of involution with time (3-36 months) and so may be initially observed. Splenic preservation by partial splenectomy, enucleation or by marsupialization is actually recommended in children when technically feasible. Splenectomy is required for voluminous, central, multifocal cysts, in the presence of complications and in the adults with low immunologic risk. The Authors report 5 cases of large cysts successfully treated by splenectomy with one 12-year-old girl treated in emergency for infection by Salmonella. abstract_id: PUBMED:37672828 Challenges in the management of a splenic pseudocyst by laparoscopic splenectomy in an adult patient: A case report. Introduction And Importance: Splenic cysts are classified as true cysts, or pseudocysts, and larger cysts tend to be symptomatic, requiring management which has evolved to include spleen-sparing procedures to minimize the risk of overwhelming post-splenectomy sepsis (OPSS) Pitiakoudis et al. (2011), Hansen and Moller (2004), Knook et al. (2019) [1-3]. Total splenectomy remains the gold standard management, and the importance of this case is the uncommon spontaneous occurrence of a pseudocyst, and the importance to pre-operatively consent and prepare the patient for total splenectomy would intra-operative conditions not allow for spleen-preserving techniques. Case Presentation: CS, a 21-year-old lady, had two presentations to the emergency department with left upper quadrant abdominal pain. The only abnormality on assessment was a large splenic cyst on CT scan, which increased in size on re-presentation. She was consented for a splenic cyst fenestration, and for total splenectomy and optimized with vaccines would intra-operative conditions not allow for spleen-sparing. During the operation, the planes between the cyst and spleen parenchyma were ill-defined, and decision was made to proceed with total splenectomy to avoid bleeding complications. She recovered well, and was discharged 5 days post-operatively, and histology confirmed a pseudocyst (Figs. 1 and 2). Clinical Discussion: The management of splenic cysts remains difficult and with no clear guidelines to uniform treatment. There are multiple spleen-preserving techniques developed to avoid OPSS (Agha RA, Franchi T, Sohrabi C, Mathew G, for the SCARE Group, 2020 [4]), however management remains individualized and case-specific. Conclusion: Pseudocysts can occur without splenic trauma or infarct. Management is case-based, and patients with large symptomatic cysts should be consented and prepared for total splenectomy would conditions not be safe for spleen-preserving interventions. abstract_id: PUBMED:29433866 Management of nonparasitic splenic cysts in children. Background: The management of nonparasitic splenic cysts in children is unclear. Options include observation, cystectomy, partial or total splenectomy and percutaneous aspiration with and without sclerotherapy. The aim of this study is to assess the outcomes of these interventions at a children's hospital. Materials And Methods: A retrospective review of patients aged &lt;18 y with splenic cysts over 7 y was performed. Demographics, mode of intervention, and outcome data were collected. Results: Forty-two patients were identified and their initial management was as follows: 32 patients were observed and 10 underwent intervention (four aspiration and sclerotherapy and six resection). Age (y) was higher for intervention patients than observation patients (P = 0.004), as was the cyst size (P &lt; 0.001). Incidental finding was the most common presentation in observation patients (n = 30; 94%) and abdominal pain for intervention groups: aspiration and sclerotherapy (n = 3; 75%) and resection (n = 5; 83%). Two patients failed observation and required aspiration and sclerotherapy due to persistence of symptoms or size increase. Median number of aspiration with and without sclerotherapy interventions was three (range 1-5). All six patients had persistence, with two requiring surgical resection due to symptomatic persistence. Surgical procedures included laparoscopic cystectomy (n = 3), laparoscopic partial (n = 2) or complete splenectomy (n = 1), and/or open splenectomy (n = 2). One laparoscopic cystectomy patient had persistence but the other two had no follow-up imaging. Partial and total splenectomy patients had no recurrence and/or persistence. Conclusions: Observation is an appropriate management strategy for small asymptomatic splenic cysts. Aspiration with and without sclerotherapy and laparoscopic cystectomy are associated with higher rates of recurrence; thus, partial splenectomy may provide the best balance of recurrence and spleen preservation. abstract_id: PUBMED:8815056 True benign non-parasitic cysts of the spleen: from diagnosis to treatment. Apropos of 5 new cases True non parasitary splenic cysts are uncommon. They are characterized histological by an epithelium on the inner surface of the cyst wall. This study presents, about 5 new cases of splenic cysts, the clinical presentation, the difficulties of the diagnosis based on the ultrasound exploration and computed tomography scan investigation, and the different possibilities of management of these cysts. The principal of the treatment is the resection of the totality of a cyst with a diameter higher than 5 cm. A partial splenectomy seems to be the best treatment, whenever possible, especially in children. Answer: The management of non-parasitic splenic cysts (NPSCs) has traditionally been influenced by the size of the cyst, with a common threshold for intervention being a size greater than 5 cm. However, the literature does not firmly support the recommendation of operative intervention based solely on cyst size (PUBMED:24871081). The study by PUBMED:24871081 found that asymptomatic cysts had a negative overall growth rate and a high recurrence rate after percutaneous drainage, suggesting that observation of asymptomatic lesions is safe regardless of size. Aspiration should be reserved for those who are not surgical candidates or in cases of diagnostic uncertainty. In the case of a small splenic cyst complicated with thrombocytopenia, dietary intervention during the watchful waiting phase was shown to initiate recovery, demonstrating the feasibility of non-pharmaceutical and non-surgical management during this period (PUBMED:38106766). For pregnant patients with NPSCs, spleen-preserving surgery such as laparoscopic cystectomy is a successful treatment option, despite the theoretical risk of intrapartum splenic rupture (PUBMED:27167310). Similarly, laparoscopic fenestration has been used successfully in both elective and emergency presentations of NPSCs, with no recurrence at follow-up (PUBMED:33661055). Laparoscopic management, including total laparoscopic splenectomy and laparoscopic stapled cyst excision, has been effective for symptomatic enlarged NPSCs, with a low cyst recurrence rate (PUBMED:12633039, PUBMED:23980875). However, it may not be suitable for deeper intraparenchymal cysts. In children, observation is appropriate for small asymptomatic cysts, while partial splenectomy may offer the best balance of recurrence and spleen preservation (PUBMED:29433866). For large cysts, partial splenectomy is recommended, especially in children, whenever possible (PUBMED:8815056). In conclusion, while size has been a traditional factor in the management of NPSCs, current evidence suggests that it should not be the sole determinant of intervention. Observation is safe for asymptomatic cysts, and spleen-preserving techniques are preferred when intervention is necessary, with the choice of management being case-specific and influenced by factors such as symptoms, cyst characteristics, and patient factors, including pregnancy and pediatric status.
Instruction: Trends in racial/ethnic disparities in medical and oral health, access to care, and use of services in US children: has anything changed over the years? Abstracts: abstract_id: PUBMED:23339566 Trends in racial/ethnic disparities in medical and oral health, access to care, and use of services in US children: has anything changed over the years? Introduction: The 2010 Census revealed the population of Latino and Asian children grew by 5.5 million, while the population of white children declined by 4.3 million from 2000-2010, and minority children will outnumber white children by 2020. No prior analyses, however, have examined time trends in racial/ethnic disparities in children's health and healthcare. The study objectives were to identify racial/ethnic disparities in medical and oral health, access to care, and use of services in US children, and determine whether these disparities have changed over time. Methods: The 2003 and 2007 National Surveys of Children's Health were nationally representative telephone surveys of parents of 193,995 children 0-17 years old (N = 102,353 in 2003 and N = 91,642 in 2007). Thirty-four disparities indicators were examined for white, African-American, Latino, Asian/Pacific-Islander, American Indian/Alaskan Native, and multiracial children. Multivariable analyses were performed to adjust for nine relevant covariates, and Z-scores to examine time trends. Results: Eighteen disparities occurred in 2007 for ≥1 minority group. The number of indicators for which at least one racial/ethnic group experienced disparities did not significantly change between 2003-2007, nor did the total number of specific disparities (46 in 2007). The disparities for one subcategory (use of services), however, did decrease (by 82%). Although 15 disparities decreased over time, two worsened, and 10 new disparities arose. Conclusions: Minority children continue to experience multiple disparities in medical and oral health and healthcare. Most disparities persisted over time. Although disparities in use of services decreased, 10 new disparities arose in 2007. Study findings suggest that urgent policy solutions are needed to eliminate these disparities, including collecting racial/ethnic and language data on all patients, monitoring and publicly disclosing disparities data annually, providing health-insurance coverage and medical and dental homes for all children, making disparities part of the national healthcare quality discussion, ensuring all children receive needed pediatric specialty care, and more research and innovative solutions. abstract_id: PUBMED:18195000 Racial and ethnic disparities in medical and dental health, access to care, and use of services in US children. Background: Not enough is known about the national prevalence of racial/ethnic disparities in children's medical and dental care. Objective: The purpose of this work was to examine racial/ethnic disparities in medical and oral health, access to care, and use of services in a national sample. Methods: The National Survey of Children's Health was a telephone survey in 2003-2004 of a national random sample of parents and guardians of 102,353 children 0 to 17 years old. Disparities in selected medical and oral health and health care measures were examined for white, African American, Latino, Asian/Pacific Islander, Native American, and multiracial children. Multivariate analyses were performed to adjust for primary language at home, age, insurance coverage, income, parental education and employment, and number of children and adults in the household. Forty measures of medical and oral health status, access to care, and use of services were analyzed. Results: Many significant disparities were noted; for example, uninsurance rates were 6% for whites, 21% for Latinos, 15% for Native Americans, 7% for African Americans, and 4% for Asians or Pacific Islanders, and the proportions with a usual source of care were as follows: whites, 90%; Native Americans, 61%; Latinos, 68%; African Americans, 77%; and Asians or Pacific Islanders, 87%. Many disparities persisted for &gt; or = 1 minority group in multivariate analyses, including increased odds of suboptimal health status, overweight, asthma, activity limitations, behavioral and speech problems, emotional difficulties, uninsurance, suboptimal dental health, no usual source of care, unmet medical and dental needs, transportation barriers to care, problems getting specialty care, no medical or dental visit in the past year, emergency department visits, not receiving mental health care, and not receiving prescription medications. Certain disparities were particularly marked for specific racial/ethnic groups: for Latinos, suboptimal health status and teeth condition, uninsurance, and problems getting specialty care; for African Americans, asthma, behavior problems, skin allergies, speech problems, and unmet prescription needs; for Native Americans, hearing or vision problems, no usual source of care, emergency department visits, and unmet medical and dental needs; and for Asians or Pacific Islanders, problems getting specialty care and not seeing a doctor in the past year. Multiracial children also experienced many disparities. CONCLUSIONS; Minority children experience multiple disparities in medical and oral health, access to care, and use of services. Certain disparities are particularly marked for specific racial/ethnic groups, and multiracial children experience many disparities. abstract_id: PUBMED:22417169 Racial/ethnic disparities in health and health care among U.S. adolescents. Objective: To examine racial/ethnic disparities in medical and oral health status, access to care, and use of services in U.S. adolescents. Data Source: Secondary data analysis of the 2003 National Survey of Children's Health. The survey focus was children 0-17 years old. Study Design: Bivariate and multivariable analyses were conducted for white, African American, Latino, Asian/Pacific Islander, American Indian/Alaskan Native, and multiracial adolescents 10-17 years old (n = 48,742) to identify disparities in 40 measures of health and health care. Principal Findings: Certain disparities were especially marked for specific racial/ethnic groups and multiracial youth. These disparities included suboptimal health status and lack of a personal doctor or nurse for Latinos; suboptimal oral health and not receiving all needed medications in the past year for African Americans; no physician visit or mental health care in the past year for Asian/Pacific Islanders; overweight/obesity, uninsurance, problems getting specialty care, and no routine preventive visit in the past year for American Indian/Alaska Natives; and not receiving all needed dental care in multiracial youth. Conclusions: U.S. adolescents experience many racial/ethnic disparities in health and health care. These findings indicate a need for ongoing identification and monitoring of and interventions for disparities for all five major racial/ethnic groups and multiracial adolescents. abstract_id: PUBMED:33565343 Medicaid Expansion and Racial and Ethnic Disparities in Access to Health Care: Applying the National Academy of Medicine Definition of Health Care Disparities. Since 2014, 32 states implemented Medicaid expansion by removing the categorical criteria for childless adults and by expanding income eligibility to 138% of the federal poverty level (FPL) for all non-elderly adults. Previous studies found that the Affordable Care Act (ACA) Medicaid expansion improved rates of being insured, unmet needs for care due to cost, number of physician visits, and health status among low-income adults. However, a few recent studies focused on the expansion's effect on racial/ethnic disparities and used the National Academy of Medicine (NAM) disparity approach with a limited set of access measures. This quasi-experimental study examined the effect of Medicaid expansion on racial/ethnic disparities in access to health care for U.S. citizens aged 19 to 64 with income below 138% of the federal poverty line. The difference-in-differences model compared changes over time in 2 measures of insurance coverage and 8 measures of access to health care, using National Health Interview Survey (NHIS) data from 2010 to 2016. Analyses used the NAM definition of disparities. Medicaid expansion was associated with significant decreases in uninsured rates and increases in Medicaid coverage among all racial/ethnic groups. There were differences across racial/ethnic groups regarding which specific access measures improved. For delayed care and unmet need for care, decreases in racial/ethnic disparities were observed. After the ACA Medicaid expansion, most access outcomes improved for disadvantaged groups, but also for others, with the result that disparities were not significantly reduced. abstract_id: PUBMED:37029903 Changes in Racial and Ethnic Disparities in Use of Mental Health Services under the Affordable Care Act: Evidence from California. Background: The Affordable Care Act (ACA) aimed to expand mental health service use in the US, by expanding access to health insurance. However, the gap in mental health utilization by race and ethnicity is pronounced: members of racial and ethnic minoritized groups remain less likely to use mental health services than non-Hispanic White individuals even after the ACA. Aims Of The Study: This study assessed the effect of the Affordable Care Act (ACA) on mental health services use in one large state (California), and whether that effect differed among racial and ethnic groups. Also, it tested for change in racial and ethnic disparities after the implementation of the ACA, using four measures of mental health care. Methods: Using pooled California Health Interview Survey (CHIS) data from 2011-2018, logistic regression and Generalized Linear Models (GLM) were estimated. Disparities were defined using the Institute of Medicine (IOM) definition. Primary outcomes were any mental health care in primary settings; in specialty settings, any prescription medication for mental health problems, and number of annual visits to mental health services. Results: Findings suggested that the change in Hispanic-non-Hispanic White disparities in prescription medication use under the ACA was statistically significant, narrowing the gap by 7.23 percentage points (p&lt;.05). However, the disparity in other measures was not significantly reduced. Discussion: These findings suggest that the magnitude of the increase in primary and specialty mental health services among racial and ethnic minorities was not large enough to significantly reduce racial and ethnic disparities. One possible explanation is that non-financial factors played a role, such as language barriers, attitudinal barriers from home culture norms, and systemic barriers due to mental health professional shortages and a limited number of mental health care providers of color. Implications For Health Care Provision And Use: Integrated approaches that coordinate specialty and primary care mental health services may be needed to promote mental healthcare access for members of racial and ethnic minoritized groups. Implications For Health Policies: Federal and state policies aiming to improve mental health services use have historically given more weight to financial determinants, but this has not been enough to significantly reduce racial/ethnic disparities. Thus, policies should pay more attention to non-financial determinants. Implications For Further Research: Assessing underlying mechanisms of non-financial factors that moderate the effectiveness of the ACA is a worthwhile goal for future research. Future studies should examine the extent to which non-financial factors intervene in the relationship between the implementation of the ACA and mental health services use. abstract_id: PUBMED:27809570 Racial/ethnic differences in perceived need for mental health care and disparities in use of care among those with perceived need in 1990-1992 and 2001-2003. Objectives: This study examines whether there are racial/ethnic differences in perceived need for mental health care among those with a mood and/or anxiety disorder in 1990-1992 and 2001-2003 in the US. Then among those with perceived need, we examine whether racial/ethnic disparities in use of mental health care existed in both time periods. Design: Using data from the 1990-1992 National Comorbidity Survey (NCS) and 2001-2003 National Comorbidity Survey - Replication (NCS-R), the study analyzes whether whites differed from blacks and Latinos in rates of perceived need among those with a mood and/or anxiety disorder in 1990-1992 and 2001-2003. Then among those with a disorder and perceived need, rates of mental health care use for whites are compared to black rates and Latino rates in within the 1990-1992 cohort and then within the 2001-2003 cohort. Results: There were no statistical racial/ethnic differences in perceived need in both time periods. Among those with perceived need in 1990-1992, there were no statistical racial/ethnic disparities in the use of mental health care. However, in 2001-2003, disparities in mental health care use existed among those with perceived need. Conclusions: The emergence of racial/ethnic disparities in use of mental health care among those with a perceived need for care in 2001-2003 suggests that personal/cultural belief along with issues concerning access and quality of mental health care may create barriers to receiving perceived needed care. More research is needed to understand why these disparities emerged among those with perceived need in the latter time period and whether these disparities continue to exist in more recent years. abstract_id: PUBMED:27271072 Racial/Ethnic Differences in the Use of Primary Care Providers and Preventive Health Services at a Midwestern University. Many universities seek to improve the health and wellbeing of their faculty and staff through employer wellness programs but racial/ethnic disparities in health care use may still persist. The purpose of this research was to identify racial/ethnic disparities in the use of preventive health services at a Midwestern university. A record review was conducted of self-reported health data from University employees, examining the use of primary care and common screening procedures collected in a Personal Health Assessment conducted by the University's wellness program. Results show that there were significant racial/ethnic differences in the use of primary care and participation in screening. Notably, Asian employees in this sample were less likely to have a primary care provider and participate in routine cancer screenings. The observed racial/ethnic differences in screening behavior were mediated by the use of primary care. Together, these data show that despite equal access to care, racial and ethnic disparities in screening persist and that having a primary care provider is an important predictor of screening behavior. Results suggest that health communications designed to increase screening among specific racial/ethnic minority groups should target primary care use. abstract_id: PUBMED:36894493 Assessing the difference in racial and ethnic disparities in access to and use of care between Traditional Medicare and Medicare Advantage. Objective: Test whether racial-ethnic disparities in the access and use of care differ between Traditional Medicare (TM) and Medicare Advantage (MA). Data Source: Secondary data from the 2015-2018 Medicare Current Beneficiary Survey (MCBS). Study Design: Measure Black-White and Hispanic-White disparities in access to care and use of preventive services within TM, within MA, and assess the difference-in-disparities between the two programs with and without controls for factors that could influence enrollment, access, and use. Data Collection/extraction: Pool 2015-2018 MCBS data and restrict to non-Hispanic Black, non-Hispanic White, or Hispanic respondents. Principal Findings: Black enrollees have worse access to care relative to White enrollees in TM and MA, particularly for cost-related measures such as not having problems paying medical bills (11-13 pp. lower for Black enrollees; p &lt; 0.05) and satisfaction with out-of-pocket costs (5-6 pp. lower; p &lt; 0.05). We find no difference in Black-White disparities between TM and MA. Hispanic enrollees have worse access to care relative to White enrollees in TM but similar access relative to White enrollees in MA. Hispanic-White disparities in not delaying care due to cost and not reporting problems paying medical bills are narrower in MA relative to TM by about 4 pp (significant at the p &lt; 0.05 level) each. We find no consistent evidence that Black-White or Hispanic-White differences in the use of preventive services differ between TM and MA. Conclusions: Across the measures of access and use studied here, racial and ethnic disparities in MA are not substantially narrower than in TM for Black and Hispanic enrollees relative to White enrollees. For Black enrollees, this study suggests that system-wide reforms are required to reduce existing disparities. For Hispanic enrollees, MA does narrow some disparities in access to care relative to White enrollees but, in part, because White enrollees do not do as well in MA as they do in TM. abstract_id: PUBMED:34333722 Behavioral Health Services use Among Racial and Ethnic Groups: Results from the California Health Interview Survey (CHIS). Access and utilization of behavioral health services is a public health issue, yet disparities among racial/ethnic groups persist, resulting in fewer access points and lower utilization. Using pooled 2015 and 2016 California Health Interview Survey (N = 42,089) data of diverse adults, this study examines provider access points for behavioral health services use. Latinx (OR = 0.55, 95% CI, 0.38-0.80), Asian (OR = 0.32, 95% CI, 0.17-0.59), and first generation (OR = 0.56, 95% CI, .38-.83) individuals, reported lower odds of accessing specialty care behavioral health services, compared to no services. First generation adults reported lower odds accessing a primary care physician (OR = 0.66, 95% CI, 0.44-0.98), compared to none. Results advance knowledge of behavioral health services access points among racial, ethnic and immigrant groups, following passage of the California Mental Health Services Act. Findings suggest primary care may be an important entry point for behavioral health service use engagement among underserved populations. abstract_id: PUBMED:28480588 Racial/Ethnic and Gender Disparities in Health Care Use and Access. Objective: To document racial/ethnic and gender differences in health service use and access after the Affordable Care Act went into effect. Data Source: Secondary data from the 2006-2014 National Health Interview Survey. Study Design: Linear probability models were used to estimate changes in health service use and access (i.e., unmet medical need) in two separate analyses using data from 2006 to 2014 and 2012 to 2014. Data Extraction: Adult respondents aged 18 years and older (N = 257,560). Principal Findings: Results from the 2006-2014 and 2012-2014 analyses show differential patterns in health service use and access by race/ethnicity and gender. Non-Hispanic whites had the greatest gains in health service use and access across both analyses. While there was significant progress among Hispanic respondents from 2012 to 2014, no significant changes were found pre-post-health care reform, suggesting access may have worsened before improving for this group. Asian men had the largest increase in office visits between 2006 and 2014, and although not statistically significant, the increase continued 2012-2014. Black women and men fared the worst with respect to changes in health care access. Conclusions: Ongoing research is needed to track patterns of health service use and access, especially among vulnerable racial/ethnic and gender groups, to determine whether existing efforts under health care reform reduce long-standing disparities. Answer: The examination of trends in racial/ethnic disparities in medical and oral health, access to care, and use of services in US children reveals that disparities persist over time, with some changes noted in specific areas. According to the analysis of the 2003 and 2007 National Surveys of Children's Health, 18 disparities were identified in 2007 for at least one minority group, and the number of indicators for which at least one racial/ethnic group experienced disparities did not significantly change between 2003-2007. However, disparities in the use of services decreased by 82%, with 15 disparities decreasing over time, two worsening, and 10 new disparities arising (PUBMED:23339566). The 2003 National Survey of Children's Health also highlighted significant disparities across various racial/ethnic groups, with issues such as higher uninsurance rates for Latinos and Native Americans, and increased odds of suboptimal health status, overweight, asthma, and unmet medical and dental needs among minority children. Specific disparities were particularly marked for Latinos, African Americans, Native Americans, Asians/Pacific Islanders, and multiracial children (PUBMED:18195000). A study focusing on U.S. adolescents using data from the 2003 National Survey of Children's Health found that racial/ethnic disparities in health and health care were especially marked for specific groups and multiracial youth, with disparities including suboptimal health status, lack of a personal doctor or nurse, suboptimal oral health, and not receiving all needed medications in the past year (PUBMED:22417169). Overall, these findings suggest that while there have been some improvements in disparities in the use of services, multiple disparities in medical and oral health and healthcare for minority children continue to exist and have persisted over time. Urgent policy solutions are needed to eliminate these disparities, including collecting racial/ethnic and language data on all patients, monitoring and publicly disclosing disparities data annually, providing health insurance coverage and medical and dental homes for all children, ensuring all children receive needed pediatric specialty care, and more research and innovative solutions (PUBMED:23339566).
Instruction: Liver Stiffness Measurement in Psoriasis: Do Metabolic or Disease Factors Play the Important Role? Abstracts: abstract_id: PUBMED:27006950 Liver Stiffness Measurement in Psoriasis: Do Metabolic or Disease Factors Play the Important Role? Background: An increased prevalence of metabolic syndrome including nonalcoholic fatty liver disease (NAFLD) was reported in psoriasis. NAFLD can progress to nonalcoholic steatohepatitis and fibrosis. Transient elastography (TE) is a noninvasive liver fibrosis assessment. We evaluated the prevalence of significant liver fibrosis or high liver stiffness measurement (LSM) using the LSM cutoff over 7 kPa and its associated factors in psoriatic patients. Methods: Subjects underwent TE and ultrasonography. Univariate and multivariate analysis were performed for the associated factors. Results: One hundred and sixty-eight patients were recruited. Three patients were excluded due to TE failure. Mean BMI was 24.8 ± 4.7 kg/m(2). NAFLD, metabolic syndrome, and diabetes were seen in 105 (63.6%), 83 (50.3%), and 31 (18.8%) patients. The total cumulative dose of methotrexate over 1.5 g was seen in 39 (23.6%) patients. Mean LSM was 5.3 ± 2.9 kPa. High LSM was found in 18 (11.0%) patients. Waist circumference (OR: 1.24; 95% CI: 1.11-1.38; P = 0.0002), diabetes (OR: 12.70; 95% CI: 1.84-87.70; P = 0.010), and AST (OR: 1.08; 95% CI: 1.02-1.16; P = 0.017) were associated with high LSM. Conclusion: 11% of psoriatic patients had significant liver fibrosis by high LSM. Waist circumference, diabetes, and AST level were the independent predictors. abstract_id: PUBMED:34229047 Arterial stiffness, the hidden face of cardiovascular risk in autoimmune and chronic inflammatory rheumatic diseases. Background And Objective: Cardiovascular diseases (CVD) are the leading causes of death in chronic inflammatory rheumatic diseases and are not solely explained by the increased prevalence of cardiovascular (CV) risk factors in this population. Arterial stiffness, assessed primarily by pulse wave velocity (PWV) and more indirectly by augmentation index (AIx), is a surrogate marker of CVD that should be considered. The objective of this review was to investigate the relationship between arterial stiffness and chronic inflammatory and/or autoimmune diseases. Methods: We performed a systemic literature review of articles published in Medline from January 2012 to April 2020 restricted to English languages and to human adults. We selected relevant articles about the relationship between arterial stiffness and rheumatoid arthritis, systemic lupus erythematosus, psoriasis, Sjogren's syndrome and ankylosing spondylitis. For each selected article, data on PWV and AIx were extracted and factors that may have an impact on arterial stiffness were identified. Results: A total of 214 references were identified through database searching and 82 of them were retained for analysis. Arterial stiffness is increased in chronic inflammatory and autoimmune diseases. Traditional CV risk factors such as hypertension and dyslipidemia accentuate this relationship. Current data are insufficient to determine whether disease activity significantly influences arterial stiffness, whereas disease duration seems rather critical. TNF-alpha inhibitors and cardiorespiratory fitness tend to decrease arterial stiffness. Finally, increased arterial stiffness leads to diastolic dysfunction, which is the main mechanism of heart failure in chronic inflammatory rheumatic diseases. Conclusion: CV risk assessment in chronic inflammatory and autoimmune diseases should also rely on PWV and AIx. abstract_id: PUBMED:36702175 Risk of liver fibrosis associated with long-term methotrexate therapy may be overestimated. Background & Aims: The risk of significant liver fibrosis from prolonged methotrexate (MTX) exposure has been estimated at around 5%, prompting intensive monitoring strategies. However, the evidence is derived from retrospective studies that under-reported risk factors for liver disease. We evaluated the risk of long-term MTX therapy on liver fibrosis in a longitudinal cohort study using two non-invasive markers. Method: Between 2014-2021, adult patients diagnosed with rheumatoid arthritis (RA) or psoriasis for ≥2 years were recruited prospectively from six UK sites. The MTX group included patients who received MTX for ≥6 months, whereas the unexposed group included those who never received MTX. All patients underwent full liver profiling, with transient elastography (TE) and enhanced liver fibrosis (ELF) marker measurements. Results: A total of 999 patients (mean age 60.8 ± 12 years, 62.3% females) were included. Of 976 with valid TE values, 149 (15.3%) had liver stiffness ≥7.9 kPa. Of 892 with a valid ELF, 262 (29.4%) had ELF ≥9.8. Age and BMI were independently associated with elevated liver stiffness and ELF. Neither MTX cumulative dose nor duration was associated with elevated liver stiffness. Diabetes was the most significant risk factor associated with liver stiffness ≥7.9 kPa (adjusted odds ratio = 3.19; 95% CI 1.95-5.20; p &lt;0.001). Regular use of non-steroidal anti-inflammatory drugs showed the strongest association with ELF ≥9.8 (odds ratio = 1.76; 95% CI 1.20-2.56; p = 0.003), suggesting the degree of joint inflammation in RA may confound ELF as a non-invasive marker of liver fibrosis. Conclusion: The risk of liver fibrosis attributed to MTX itself might have been previously overestimated; there is a need to consider modifying current monitoring guidelines for MTX. Impact And Implications: Current guidelines recommend intensive (2-3 monthly) monitoring strategies for patients on long-term methotrexate therapy due to the potential risk of liver fibrosis. Evaluation of the association using two validated non-invasive markers of liver fibrosis, liver stiffness and enhanced liver fibrosis score, in a large cohort of patients with rheumatoid arthritis or psoriasis shows that the reported risk has previously been overestimated. The clinical focus should be to improve patients' metabolic risk factors, diabetes and BMI, that are independently associated with liver stiffness. There is a need to consider modifying current treatment monitoring guidelines for methotrexate. abstract_id: PUBMED:23636854 Aortic arterial stiffness is a moderate predictor of cardiovascular disease in patients with psoriasis vulgaris. Psoriasis is associated with an increased risk of atherosclerosis. Endothelial dysfunction is the critical early step in the process of atherogenesis, and it is commonly investigated by measuring arterial stiffness. We aimed to investigate the relationship between arterial stiffness and high-sensitivity C-reactive protein (hsCRP) in patients with psoriasis. A total of 32 patients with psoriasis and 35 patients with other skin diseases were included in the study. The hsCRP levels and arterial stiffness measurements were compared. Arterial stiffness was significantly different between the 2 groups (P = .01). Arterial stiffness was not associated with the duration of the disease or the disease activity (P = .34 and .64, respectively). In patients with psoriasis, arterial stiffness correlated positively with age, sex, body mass index, diastolic blood pressure, and hsCRP level (P &lt; .05). These findings provide further evidence of a link between inflammation, premature atherosclerosis, and psoriasis. abstract_id: PUBMED:27273540 Assessment of Arterial Stiffness in Korean Patients With Psoriasis by Cardio-Ankle Vascular Index. Many recent studies have suggested that psoriasis, a chronic, inflammatory, immune-mediated skin disease, is associated with increased risk of cardiovascular disease (CVD). We aimed to compare arterial stiffness in psoriasis and control patient groups using the cardio-ankle vascular index (CAVI) and to determine whether psoriasis is an independent risk factor for CVD. A total 103 patients with psoriasis and 103 age- and sex-matched controls were enrolled. Compared with controls, the patients showed a higher CAVI ( P = .03), particularly patients older than 40 years. The duration of psoriasis exhibited a positive correlative tendency with CAVI ( P = .066). Moreover, psoriasis is an independent predictor of arterial stiffness after adjusting for other factors ( P = .011). We suggest that psoriasis can be a risk factor for CVD, and older patients with psoriasis of longer disease duration should be monitored carefully for cardiovascular risk. abstract_id: PUBMED:31254236 Metabolic syndrome, non-alcoholic fatty liver disease and liver stiffness in psoriatic arthritis and psoriasis patients. Objectives: Non-alcoholic fatty liver disease (NAFLD), potentially evolving into liver fibrosis (LF), is frequent in psoriasis (PsO), but data in psoriatic arthritis (PsA) are lacking. Our study aimed to investigate the prevalence of NALFD and LF in PsA/PsO and the contribution of arthritis in their onset. Method: PsA and PsO patients were consecutively enrolled. Exclusion criteria were liver diseases causing fibrosis (except NAFLD), alcohol ≥ 20 g/day, daily use of non-steroidal anti-inflammatory drugs and current/previous methotrexate use. Clinical history, biochemical and clinimetrical data and insulin-resistance index HOMA (homeostatic model assessment) were assessed. Patients underwent a liver ultrasound to identify steatosis (therefore NAFLD) and transient elastography, to evaluate LF (stiffness≥ 7 kPa = fibrosis). Statistical analysis included basic statistics, logistic and linear regression analyses (to assess the contribution of arthritis to NAFLD and LF grading, respectively) and Spearman's correlations; p ≤ 0.05 was considered significant. Results: Seventy-six patients were enrolled (PsA/PsO 43/33). MetS and LF prevalence were similar between PsA and PsO (35% vs 33%, p = 0.88; 31% vs 28%, p = 0.77, respectively). NAFLD was more frequent in PsO (65% vs 35%, p = 0.044). In multivariable models with NAFLD and LF grading as outcomes, arthritis was not a significant predictor, while HOMA was independently associated with both (OR 1.34; 95%CI 1.06, 1.69; beta 0.88; 95%CI 0.54, 1.21, respectively). Female sex was independently associated with LF grading (beta 1.81; 95%CI 0.05, 3.57). Conclusions: NAFLD was more frequent in PsO, but MetS and LF prevalence were similar in PsA and PsO. Insulin resistance is the main determinant of NAFLD and LF, while additional contribution of arthritis seems small. Key Points • The prevalence of metabolic comorbidities, including liver fibrosis, is overall quite similar between psoriatic arthritis and psoriasis. • NAFLD is more frequently found in psoriasis than psoriatic arthritis. • The contribution of arthritis in the onset of metabolic comorbidities seems small. abstract_id: PUBMED:20178698 Psoriasis: can statins play a dual role? Recently, a strong association between "psoriasis" and "atherosclerosis" has emerged. Psoriasis patients have an increased prevalence of atherosclerotic disease including coronary artery, cerebrovascular, and peripheral vascular diseases. The exact connection between psoriasis and atherosclerosis remains unclear, but it is thought that inflammation, which plays an important role in both diseases, may be the causative link. Nevertheless, psoriasis patients suffer from an increased burden of atherosclerotic disease and most commonly die from "coronary artery disease" (CAD). Psoriatic patients have an increased prevalence of CAD risk factors and an increased risk of myocardial infarction (MI). One CAD risk factor in psoriasis patients that can easily be managed is "hyperlipidemia." "Statins" are safe, cost-effective, and have been proven to be highly effective in preventing CAD, including MI, in patients with hyperlipidemia. Furthermore, in addition to their lipid lowering properties, statins have anti-inflammatory immunomodulator activities that may be beneficial in several autoimmune diseases including psoriasis. Considering the safety and cost-effectiveness of statins, we feel that it is worth investigating if statins can play a dual role in psoriasis by treating the increased atherosclerotic disease burden in these patients through their lipid lowering effects and by decreasing psoriatic disease activity through their anti-inflammatory immunomodulatory properties. abstract_id: PUBMED:25102800 The linkage between psoriasis and non-alcoholic fatty liver disease: a literature review. Psoriasis is a chronic systemic inflammatory disease mainly affecting the skin. Population-based surveys have shown a higher prevalence of non-alcoholic fat liver disease (NAFLD) in patients with psoriasis compared with the general population, especially in those with a greater psoriasis area and severity index (PASI). It is speculated that similar pathogenic bases may play a role in this association, highlighting insulin resistance and the release of inflammatory cytokines as the most likely causes. In the present work, we review basic aspects of the relationship between psoriasis and NAFLD. abstract_id: PUBMED:36751545 Does HSP90 play an important role in psoriasis? Introduction: Psoriasis is a chronic, inflammatory skin disease. Environmental, genetic, autoinflammatory and autoimmune factors play a role in the pathogenesis of disease. It is believed that heat shock protein 90 (HSP90) is an interleukin-17 (IL-17) receptor, which plays an essential role in the psoriasis pathogenesis. Aim: To evaluate the expression of the gene encoding HSP90 protein in keratinocytes of patients with psoriasis depending on its duration, recurrences, exacerbating factors, therapy form, and the coexistence of metabolic disorders and cardiovascular diseases. Material And Methods: Skin samples from 40 psoriatic patients were investigated in this study. Control skin biopsies were collected from 20 healthy volunteers. HSP90 expression level was measured by qRT-PCR reaction. Results: This study has shown an increased mRNA expression of HSP90 in psoriatic patients as compared to healthy volunteers. A positive correlation of HSP90 expression and the frequency of exacerbations was found. A negative correlation between the HSP90 activity and the age of patients was demonstrated in the coexistence of psoriasis with hyperlipidaemia or diabetes. Among the factors exacerbating psoriasis, acute infections induced HSP90 expression most significantly. Conclusions: HSP90 plays a role in the pathogenesis of psoriasis. The expression of HSP90 increases with the frequency of exacerbations of psoriasis throughout the year. Hyperlipidaemia or diabetes associated with psoriasis in young adults, and acute infections and emotional stress increase the expression of HSP90. The expression of HSP90 in psoriatic patients is not dependent on the type of psoriasis, comorbidity of cardiovascular diseases, smoking and alcohol addiction. abstract_id: PUBMED:25635116 Evaluation of the Beta Stiffness Index and Carotid Intima-Media Thickness in Asian Patients With Psoriasis. The risk of cardiovascular (CV) events is reportedly elevated in patients with psoriasis. Evaluation of the beta stiffness index (BSI) and carotid intima-media thickness (cIMT) are noninvasive methods of assessing arterial stiffness and subclinical atherosclerosis. We aimed to determine carotid arterial stiffness and cIMT, using high-resolution ultrasonography, in patients with psoriasis compared with controls, and to analyze whether psoriasis is an independent risk factor for atherosclerosis. A total of 54 consecutive patients with psoriasis and 60 age and gender-matched healthy controls were enrolled. Compared with control participants, patients with psoriasis exhibited a significantly higher BSI (P &lt; .001). The cIMT tended to be higher in patients with psoriasis but statistically not significant (P = .076). BSI was positively correlated with age, systolic blood pressure, disease severity defined according to history of systemic treatment, and traditional CV risk factors. Also, psoriasis was independently associated with BSI. These findings suggest that psoriasis is an independent risk factor for arterial stiffness, and the BSI is an earlier indicator of atherosclerosis than cIMT in these patients. Answer: Liver stiffness measurement (LSM) in psoriasis patients has been evaluated to determine the prevalence of significant liver fibrosis and the factors associated with high LSM. The study by PUBMED:27006950 found that 11% of psoriatic patients had significant liver fibrosis as indicated by high LSM, with waist circumference, diabetes, and AST level being independent predictors. This suggests that metabolic factors play an important role in liver fibrosis among psoriatic patients. Another study, PUBMED:36702175, evaluated the risk of liver fibrosis associated with long-term methotrexate (MTX) therapy, which had been previously estimated at around 5%. The study concluded that the risk attributed to MTX itself might have been overestimated, and that metabolic risk factors such as diabetes and BMI are independently associated with liver stiffness. This further supports the notion that metabolic factors are significant in the context of liver stiffness in psoriatic patients. Additionally, PUBMED:31254236 found that the prevalence of metabolic comorbidities, including liver fibrosis, is quite similar between psoriatic arthritis and psoriasis. Insulin resistance was identified as the main determinant of non-alcoholic fatty liver disease (NAFLD) and liver fibrosis, while the additional contribution of arthritis seems small. This indicates that metabolic syndrome and related factors like insulin resistance are more influential than disease-specific factors in the development of liver stiffness in these patients. In summary, the evidence suggests that metabolic factors, particularly waist circumference, diabetes, BMI, and insulin resistance, play a more significant role in liver stiffness measurement in psoriasis than disease-specific factors such as the duration or activity of psoriasis itself.
Instruction: Is it justified to classify patients to Stage IIIC epithelial ovarian cancer based on nodal involvement only? Abstracts: abstract_id: PUBMED:19471577 Stage IIIC epithelial ovarian cancer classified solely by lymph node metastasis has a more favorable prognosis than other types of stage IIIC epithelial ovarian cancer. Objective: To verify whether it can be justified to classify patients to stage IIIC epithelial ovarian cancer based on nodal involvement only. Methods: This study included all consecutive patients with stage IIIC epithelial ovarian cancer who underwent upfront cytoreductive surgery according to the FIGO guideline followed by platinum based chemotherapy from September 1989 to September 2006 at Asan Medical Center. Results: During the study period, a total of 272 patients met the inclusion criteria. Optimal cytoreduction was achieved in 213 patients, and complete cytoreduction was achieved in 85 patients. Median follow-up time was 37 months (range, 6-181 months). The 5-year disease free survival (DFS) and overall survival (OS) rate of all patients were 23% and 57%, respectively. Forty-one patients were allocated to stage IIIC by positive nodes only. Patients with stage IIIC disease due to positive nodes only had significantly longer DFS and OS compared to other stage IIIC patients (p&lt;0.001 and p&lt;0.001). The DFS and OS of these patients was significantly better than those of other stage IIIC patients who achieved complete or optimal cytoreduction (p&lt;0.001 and p&lt;0.001). The outcome was even better than that of stage IIIA and IIIB patients (p&lt;0.05 and p&lt;0.05). Conclusion: Patients with stage IIIC epithelial ovarian cancer due to positive nodes only had a more favorable prognosis compared to other stage IIIC patients. Therefore, reevaluation of the current FIGO staging system for stage IIIC epithelial ovarian cancer is required. abstract_id: PUBMED:17052746 Is it justified to classify patients to Stage IIIC epithelial ovarian cancer based on nodal involvement only? Background: Stage IIIC epithelial ovarian cancer is generally associated with upper abdominal tumor implants of greater than 2 cm and carries a grave prognosis. A subset of patients is upstaged to Stage IIIC because of lymph node metastases, in which prognosis is not well defined. We undertook this study to describe the clinical behavior of occult Stage IIIC. Methods: All consecutive patients found to have Stage IIIC epithelial ovarian cancer during a 9-year period (1994-2002) were analyzed for surgical procedures, pathology, and disease-free (DFS) and overall survival (OS). Results: Thirty-six patients were upstaged to Stage IIIC by virtue of positive nodes. Nine had small volume upper abdominal disease (IIIA/B before upstaging), 15 had disease limited to the pelvis and 12 had disease confined to the ovaries. 32/36 patients had no gross residual disease at the conclusion of surgery. The 5-year DFS and OS survivals were 52% and 76% respectively, for all patients. We observed no significant difference in outcomes between patients upstaged from IIIA/B versus I-II stage disease. The outcomes were superior to a control group of patients cytoreduced to either no gross RD or RD&lt;1 cm, who had large volume upper abdominal disease at beginning of surgery (p&lt;0.001). Conclusions: Patients upstaged to Stage IIIC epithelial ovarian cancer for node involvement have an excellent 5-year OS relative to all patients with Stage IIIC disease. These data demonstrate the necessity for stratifying patients classified as having Stage IIIC disease based solely on nodal disease when comparing outcomes. This information is particularly valuable when counseling patients regarding prognosis. abstract_id: PUBMED:28779965 Assessing the risk of pelvic and para-aortic nodal involvement in apparent early-stage ovarian cancer: A predictors- and nomogram-based analyses. Objective: To estimate the prevalence of lymph node involvement in early-stage epithelial ovarian cancer in order to assess the prognostic value of lymph node dissection. Methods: Data of consecutive patients undergoing staging for early-stage epithelial ovarian cancer were retrospectively evaluated. Logistic regression and a nomogram-based analysis were used to assess the risk of lymph node involvement. Results: Overall, 290 patients were included. All patients had lymph node dissection including pelvic and para-aortic lymphadenectomy. Forty-two (14.5%) patients were upstaged due to lymph node metastatic disease. Pelvic and para-aortic nodal metastases were observed in 22 (7.6%) and 42 (14.5%) patients. Lymph node involvement was observed in 18/95 (18.9%), 1/37 (2.7%), 4/29 (13.8%), 11/63 (17.4%), 3/41 (7.3%) and 5/24 (20.8%) patients with high-grade serous, low-grade-serous, endometrioid G1, endometrioid G2&amp;3, clear cell and undifferentiated, histology, respectively (p=0.12, Chi-square test). We observed that high-grade serous histology was associated with an increased risk of pelvic node involvement; while, histology rather than low-grade serous and bilateral tumors were independently associated with para-aortic lymph node involvement (p&lt;0.05). Nomograms displaying the risk of nodal involvement in the pelvic and para-aortic areas were built. High-grade serous histology and bilateral tumors are the main characteristics suggesting lymph node positivity. Conclusions: Our data suggested that high-grade serous and bilateral early-stage epithelial ovarian cancer are at high risk of having disease harboring in the lymphatic tissues of both pelvic and para-aortic area. After receiving external validation, our data will help to identify patients deserving comprehensive retroperitoneal staging. abstract_id: PUBMED:20920397 Perioperative care of patients with stage IIIC ovarian cancer Introduction: Besides intraoperative efforts, the perioperative care is essential to improve the postoperative outcome following ovarian cancer surgery. Multimodal evidence-based care within the fast-track methodology significantly enhances postoperative recovery and reduces morbidity. The purpose of this study was to assess and discuss perioperative care principles in patients undergoing ovarian cancer surgery with stage IIIC at the six centres in Denmark performing this surgical procedure. Material And Methods: The study was a retrospective audit. The study included 90 consecutive patients, 15 from each of the centres at which operations were performed in 2007. Data was obtained from medical and nursing files. The extent of the surgical procedure was recorded. Results: For patients without extensive surgery effort, the median hospital stay was six days; for patients with extensive surgery, the median hospital stay was 13 days. A total of 48 percent of the patients without maximal surgery effort fulfilled the goals for mobilization on the day of surgery, and 19 percent on the first postoperative day. A total of 35 percent of the patients with maximal surgery effort fulfilled the goals for mobilization on the day of the surgery, and 24 percent on the first postoperative day. Less than half of the patients had regular oral diet and protein enriched drinks on or before the third postoperative day. Conclusion: Mobilization, nutrition, nausea and pain are severe problems for patients undergoing ovarian cancer surgery. Optimized perioperative care is needed including a procedure-specific, evidence-based clinical guideline for patients receiving extensive surgery. abstract_id: PUBMED:27703680 Prevalence and distribution pattern of nodal metastases in advanced ovarian cancer. The objective of this study was to examine the relevance of pelvic and para-aortic lymph node involvement and the tumour characteristics affecting nodal metastases and survival in primary advanced ovarian cancer. A total of 130 consecutive patients were retrospectively investigated. All the patients received stage-related surgery with pelvic and para-aortic lymphadenectomy. The median follow-up was 53.5 months. The clinicopathological parameters and distribution pattern of nodal metastases were evaluated. Lymph node metastases were detectable in 74.62% of the cases. Overall, both pelvic and para-aortic nodes were affected in 35.9% of the patients, whereas 13.3% had metastases only in the pelvic and 13.3% only in the para-aortic lymph nodes. Histological grade 1/2 and 3, serous and endometrioid histology were independent predictors of nodal metastasis. Serous and endometrioid cancers have shown a predilection for metastasis to the pelvic lymph nodes alone, both to the pelvic and the para-aortic nodes, or the para-aortic nodes alone. Overall survival was significantly positively affected by serous histology with positive nodes (P=0.043). It is crucial to investigate the risk factors and metastatic patterns of such patients in a multicenter analysis to evaluate individual subgroups. Prospective studies are required to investigate the prognostic effect of lymphadenectomy in advanced ovarian cancer and its association with histology and distribution pattern of nodal metastasis. abstract_id: PUBMED:30886499 Correlation Between Pelvic Peritoneal Disease and Nodal Metastasis in Advanced Ovarian Cancer: Can Intraoperative Findings Define the Need for Systematic Nodal Dissection? To explore the relationship of peritoneal, and rectal involvement with lymph nodal metastases to identify clinical parameters to guide systematic nodal dissection in advanced ovarian cancer (stage 3c). It is a retrospective study of stage III C epithelial ovarian cancers undergoing cytoreductive surgery with systematic nodal dissection, from January 2011 to December 2016. LS3 score is a cumulative score given for the presence of size 3 lesion (peritoneal disease measuring more than 5 cm) in regions 5, 6, and 7. The depth of rectal involvement was assigned progressive numerical values from 1 (for serosa) to maximum 4 (for mucosa) to generate rectal involvement score. There were 91 patients. 48.35% patients had LS3 lesions in regions 5, 6, 7. Of these, 36% (27/44) had positive nodes. Of the 41 node-positive cases, 43.9% had single and 34.14% had two station involvements. Rectum was involved in 47 patients (51.64%), serosal involvement being the most common type (50.57%). Twenty patients had positive mesorectal nodes (42.55%). The presence of rectal involvement was influenced by the Peritoneal Carcinomatosis Index (PCI) score, the presence of LS3 in lower quadrants (p = 0.008), and LSE score of lower quadrants (p = 0.003). With the increasing depth of rectal infiltration, mesorectal positivity increased significantly (p = 0.000). In multivariate analysis, lower quadrant (regions 5, 6, 7) PCI, LS3 in lower quadrants, LS3 score, rectal involvement score, and the total number of lines of chemotherapy significantly affected different nodal disease parameters. In advanced ovarian cancer, LS3 disease in regions 5, 6, and 7 and rectal involvement directly impact the nodal metastasis and hence mandates a systematic nodal dissection. Mesorectal nodal involvement significantly increases with the increasing depth of rectal involvement necessitating systematic mesorectal nodal clearance for all rectal resections. abstract_id: PUBMED:19088481 Positive adnexal or uterine serosal involvement in stage IIIC endometrial cancer is an adverse factor for recurrence. Objectives: Clinical and pathological significance of stage IIIC endometrial cancer is unclear. Our study was designed to determine the risk of recurrence among patients with stage IIIC endometrial cancer according to different pathological findings. Methods: We retrospectively reviewed all patients with FIGO IIIC endometrial carcinoma (n = 48) treated in our institution between 1996 and 2005. Patients without comprehensive surgical staging were excluded. Patients were classified into two groups: with adnexae and/or uterine serosal metastasis (group A, n = 18) and without metastasis (group B, n = 20). Cox proportional hazards model was used for multiple regression analysis. Results: Mean age was 64 years (range 46-90). Eighteen patients received adjuvant chemotherapy and pelvic radiotherapy, 17 received pelvic radiotherapy alone, and 11 received chemotherapy or hormonotherapy. At a median follow-up of 26.7 months, 12 had recurrence of the disease. Serosal and/or adnexal involvement was a negative independent prognostic factor for disease-free survival [relative risk = 3.75 (1.01-13.9); p = 0.04], whereas histological type, grade, depth of invasion and age at diagnosis had no influence. Conclusion: Patients with stage IIIC endometrial cancer and metastasis to adnexae and/or serosa have a higher risk of recurrence than those with node metastasis alone. Optimal adjuvant therapy for these groups remains unclear. abstract_id: PUBMED:10329040 The validity and significance of substage IIIC by node involvement in epithelial ovarian cancer: impact of nodal metastasis on patient survival. Objective: The aim of this study was to examine whether there is a possible difference in patient survival (1) between node-positive and node-negative groups for Stage I, II, IIIA+B, and IIIC disease and (2) between node-positive Stage I through IIIB disease and Stage IIIC disease. Methods: Medical records of 125 patients with Stage I, II, and III epithelial ovarian cancer were retrospectively analyzed for clinical and histological characteristics with special reference to node-positive or -negative status, and patient survival by lymph node status was compared for FIGO intraperitoneal stage. Results: One hundred eleven of 125 patients were explored for retroperitoneal and inguinal nodes and divided into a node-positive group [n = 35 (31.5%)] and a node-negative group [n = 76 (68.5%)]. Stage III disease and histological grade 3 disease were significantly more frequent in the node-positive group in comparison with the corresponding figures in the node-negative group (P = 0.0007 and P = 0.0005). The 10-year survival was not significantly different between the node-positive and the node-negative group for each of Stage I, II, IIIA+B, and IIIC disease. However, survival was considerably worse in the node-positive group compared to that in the node-negative group for each of Stage I and II disease, and was significantly worse for Stage I+II disease (P = 0.0172). The survival was significantly worse in the node-positive group compared to that in the node-negative group also for Stage I through IIIB disease taken as a whole (P = 0.0212), although not for Stage IIIA+B disease alone. On the other hand, patient survival of node-positive Stage I through IIIB disease, taken as a whole, was significantly better than that in Stage IIIC disease (P &lt; 0.0001). Conclusions: It might not be valid, on the basis of patient prognosis, to group node-positive Stage I, II, and IIIA+B disease into a criterion of Substage IIIC together with Stage III disease of abdominal implants greater than 2 cm in diameter outside the pelvis. For further evaluation of the prognostic significance of node involvement, survivals should be compared among patients with de novo node disease, with node disease removed, and otherwise similar patients without node disease, ideally in a prospective randomized trial with a large number of patients. abstract_id: PUBMED:15032274 Are nodal metastases in ovarian cancer chemoresistant lesions? Analysis of nodal involvement in 105 patients treated with preoperative chemotherapy. Background: To report the rates of nodal involvement in epithelial ovarian cancer (EOC) in patients who underwent initial lymphadenectomy (before chemotherapy) and patients who underwent lymphadenectomy after chemotherapy. Study Design: The rates of nodal involvement in 205 patients with EOC who underwent complete bilateral pelvic and para-aortic lympadenectomy between 1985 and 2001 were analyzed: 100 women underwent this surgical procedure before chemotherapy (initial surgery) and 105 at the end of chemotherapy (second-look surgery for 77 patients with 6 courses of a platinum-based regimen) or during chemotherapy (interval debulking surgery for 28 patients with 3 courses of a platinum-based regimen containing paclitaxel). Results: The overall frequency of lymph-node involvement was 35% (35/100) in patients treated with initial surgery, 54% (15/28) in the interval debulking surgery group and 36% (28/77) in the second-look surgery group. In patients with Stage III disease, the rates of nodal involvement in patients treated with initial surgery, interval debulking surgery (with paclitaxel-based regimen) and second-look surgery were respectively: 53% (15/28), 58% (15/26) and 48% (20/42). The rates of nodal involvement in patients who underwent lymphadenectomy prior to or after chemotherapy were not statistically different whatever the stage of the disease. Adding paclitaxel to the platinum-based regimen does not seem to improve node sterilization rates. Conclusions: The rates of nodal involvement seem to be similar in patients treated before or after chemotherapy. Such results suggest that nodal metastases are not as chemosensitive as peritoneal lesions. However, further studies are needed to evaluate the therapeutic value of lymphadenectomy in patients with nodal involvement. abstract_id: PUBMED:25254047 Survival Advantage Associated with Decrease in Stage at Detection from Stage IIIC to Stage IIIA Epithelial Ovarian Cancer. Objective. The aim of this study was to document the survival advantage of lowering stage at detection from Stage IIIC to Stage IIIA epithelial ovarian cancer. Methods. Treatment outcomes and survival were evaluated in patients with Stage IIIA and Stage IIIC epithelial ovarian cancer treated from 2000 to 2009 at the University of Kentucky Markey Cancer Center (UKMCC) and SEER institutions. Results. Cytoreduction to no visible disease (P &lt; 0.0001) and complete response to platinum-based chemotherapy (P &lt; 0.025) occurred more frequently in Stage IIIA than in Stage IIIC cases. Time to progression was shorter in patients with Stage IIIC ovarian cancer (17 ± 1 months) than in those with Stage II1A disease (36 ± 8 months). Five-year overall survival (OS) improved from 41% in Stage IIIC patients to 60% in Stage IIIA patients treated at UKMCC and from 37% to 56% in patients treated at SEER institutions for a survival advantage of 19% in both data sets. 53% of Stage IIIA and 14% of Stage IIIC patients had NED at last followup. Conclusions. Decreasing stage at detection from Stage IIIC to stage IIIA epithelial ovarian cancer is associated with a 5-year survival advantage of nearly 20% in patients treated by surgical tumor cytoreduction and platinum-based chemotherapy. Answer: The classification of patients with Stage IIIC epithelial ovarian cancer based on nodal involvement alone has been a subject of investigation in several studies. The prognosis for patients with Stage IIIC disease due to positive nodes only appears to be more favorable compared to other Stage IIIC patients. A study at Asan Medical Center found that patients with Stage IIIC disease due to positive nodes only had significantly longer disease-free survival (DFS) and overall survival (OS) compared to other Stage IIIC patients, and even better outcomes than those of stage IIIA and IIIB patients (PUBMED:19471577). Similarly, another study reported excellent 5-year OS relative to all patients with Stage IIIC disease for those upstaged to Stage IIIC because of lymph node metastases, suggesting the necessity for stratifying patients classified as having Stage IIIC disease based solely on nodal disease when comparing outcomes (PUBMED:17052746). Moreover, the prevalence of lymph node involvement in early-stage epithelial ovarian cancer was found to be significant, with high-grade serous and bilateral tumors being the main characteristics suggesting lymph node positivity (PUBMED:28779965). This indicates that nodal involvement is an important factor in the staging of ovarian cancer. In the context of perioperative care, optimized care is needed for patients undergoing ovarian cancer surgery, including those with stage IIIC disease, to improve postoperative outcomes (PUBMED:20920397). The relevance of pelvic and para-aortic lymph node involvement was also highlighted in a study that found lymph node metastases in 74.62% of primary advanced ovarian cancer cases, with histological grade and serous and endometrioid histology being independent predictors of nodal metastasis (PUBMED:27703680). Furthermore, the correlation between pelvic peritoneal disease and nodal metastasis was explored, suggesting that certain intraoperative findings, such as LS3 disease in regions 5, 6, and 7 and rectal involvement, directly impact nodal metastasis and hence mandate a systematic nodal dissection (PUBMED:30886499). Additionally, positive adnexal or uterine serosal involvement in stage IIIC endometrial cancer was found to be an adverse factor for recurrence, emphasizing the significance of nodal involvement in prognosis (PUBMED:19088481). The validity of substage IIIC by node involvement was questioned in a study that found no significant difference in survival between node-positive and node-negative groups for Stage I, II, IIIA+B, and IIIC disease. However, survival was considerably worse in the node-positive group compared to that in the node-negative group for each of Stage I and II disease, and was significantly worse for Stage I+II disease. The study suggested that it might not be valid to group node-positive Stage I, II, and IIIA+B disease into a criterion of Substage IIIC together with Stage III disease of abdominal implants greater than 2 cm in diameter outside the pelvis based on patient prognosis (PUBMED:10329040). An analysis of nodal involvement in patients treated with preoperative chemotherapy indicated that the rates of nodal involvement were similar in patients treated before or after chemotherapy, suggesting that nodal metastases may not be as chemosensitive as peritoneal lesions (PUBMED:15032274). This could have implications for the staging and treatment of ovarian cancer. Lastly, a study documented a survival advantage of nearly 20% for patients whose stage at detection decreased from Stage IIIC to stage IIIA epithelial ovarian cancer when treated by surgical tumor cytoreduction and platinum-based chemotherapy (PUBMED:25254047). In summary, the evidence suggests that while nodal involvement is a significant prognostic factor in epithelial ovarian cancer, there is a need for careful consideration and possibly stratification when classifying patients as Stage IIIC based solely on nodal involvement. The prognosis for patients with Stage IIIC disease due to positive nodes only is more favorable, which may justify a different classification or at least a stratified approach within the current staging system.
Instruction: First-trimester Down syndrome screening performance in the Dutch population; how to achieve further improvement? Abstracts: abstract_id: PUBMED:19294632 First-trimester Down syndrome screening performance in the Dutch population; how to achieve further improvement? Objective: To study the performance of the first-trimester combined test between 2004 and 2006 compared to a previous period to investigate changes in time and identify reasons for sub-optimal performance. Methods: Serum samples were analysed for pregnancy-associated plasma protein A (PAPP-A) and the free beta subunit of human chorionic gonadotrophin (f beta-hCG). Nuchal translucency (NT) was measured between 10 and 14 weeks. Tests were considered screen positive, if their calculated Down syndrome (DS) risk was at least 1 in 250 at term. Results: A total of 20,293 singleton pregnancies were included in the analysis. The median maternal age fell from 35.7 to 34.3 years. The overall median weight-corrected multiple of the median (MoM) values of PAPP-A and f beta-hCG were 1.12 and 1.03, respectively. The median MoM value of NT was 0.89 and increased from 0.82 to 0.96. Sixty-six DS cases were detected by the screening test. The detection rate (DR) for DS was 75.9%, with a FPR of 3.3%. Conclusion: The performance of the first-trimester test has improved over the years. A better performance of the NT measurement was the main reason, although NT assessment should further be improved. In addition, a better setting of the medians for the biochemical parameters may contribute to a higher DR. abstract_id: PUBMED:33228595 Prenatal screening for trisomy 21: a comparative performance and cost analysis of different screening strategies. Background: Prenatal screening for chromosome aneuploidies have constantly been evolving, especially with the introduction of cell-free fetal DNA (cfDNA) screening in the most recent years. This study compares the performance, costs and timing of test results of three cfDNA screening implementation strategies: contingent, reflex and primary. Methods: We modelled enhanced first trimester screening (eFTS) as the first-tier test in contingent or reflex strategies. cfDNA test was performed contingent on or reflex from eFTS results. A comparison was made between cfDNA screening using sequencing technology and Rolling Circle Amplification (RCA)/imaging solution. All model assumptions were based on results from previous publications or information from the Ontario prenatal screening population. Results: At an eFTS risk cut-off of ≥1/1000, contingent and reflex cfDNA screening have the same detection rate (DR) (94%) for trisomy 21. Reflex cfDNA screening using RCA/Imaging solution provided the lowest false positive rate and cost. The number of women requiring genetic counselling and diagnostic testing was significantly reduced and women received their cfDNA screening result 9 days sooner compared with the contingent model. While primary cfDNA screening improved the trisomy 21 DR by 3-5%, it was more costly and more women required diagnostic testing. Conclusion: Reflex cfDNA screening is the most cost-effective prenatal screening strategy. It can improve the efficiency of prenatal aneuploidy screening by reducing the number of patient visits and providing more timely results. abstract_id: PUBMED:31223317 Performance and Predictive Value of First Trimester Screening Markers for Down Syndrome in Iranian Pregnancies. Objective: To investigate the performance of first trimester Down syndrome (DS) screening markers in Iranian pregnancies.Although sonographic and serum markers are currently recommended for the first trimester screening of Down syndrome, the screening performance of the markers depends on the race and ethnicity. Materials and methods: A retrospective case-control study using first trimester screening results recorded with the prenatal diagnostic multi-centers in Iran. A total of 6,384 pregnant women were examined from March 2012 to February 2017. Totally 100 Down syndrome cases and 266 matched controls were selected and the maternal characteristics, sonographic and biochemical screening data were collected. Statistical analysis was performed using logistic regression and descriptive statistics. A decision tree model was designed using the chi-squared automatic interaction detection method based on serum markers. Results: For screening of DS pregnancies, PAPP-A (cut-off 0.795 MoM) yielded the highest sensitivity (86%) and NB marker presented highest specificity (96.24%). combination of the biochemical markers PAPP-A and β-hCG (cut-off: 1.55 MoM) showed the highest sensitivity over other combined markers. The decision-tree model based on serum markers improved (91% DR For a 5% FPR) first trimester screening performance. Conclusion: The novel decision-tree model base on serum markers revealed a better predictive value to achieve high sensitivity and specificity of first trimester Down syndrome screening in Iranian population. abstract_id: PUBMED:12014236 Ultrasound screening for Down's syndrome early in pregnancy: nuchal translucency thickness Screening for Down's syndrome (DS) in the Netherlands is based on maternal age. Women aged 36 years or above are offered amniocentesis or chorionic villus sampling to determine if the foetus has Down's syndrome or other chromosomal abnormalities. Disadvantages of this method are a low detection rate of affected pregnancies (no more than 15%) and iatrogenic miscarriages (0.5-1%). Pregnant women under the age of 36 are not eligible for screening, even though 70% of the children with Down's syndrome are born in this age group. Screening by measuring nuchal translucency thickness, an early ultrasound marker for Down's syndrome carried out in the first trimester of pregnancy, has a higher detection rate with a less invasive method and therefore fewer iatrogenic miscarriages. However, Dutch law prohibits the offer of screening to women with a low individual risk of DS (Population Screening Act), because of the anxiety this may provoke in these women. The Dutch Health Council's Prenatal Screening Committee has advised the minister of Public Health, Welfare and Sport that risk-assessment screening, in particular the triple test, should no longer be postponed and that research into (other) screening options should be stimulated. abstract_id: PUBMED:22430720 Evaluation of population parameters and mathematical strategies for the calculation of prenatal risk of Down syndrome in the first trimester of pregnancy. Objective: To evaluate the population parameters applied to the calculation of risk for Down syndrome (DS) in the first trimester screening (FTS) and the comparison of performance obtained including or excluding maternal age from the mathematical algorithm. Methods: Three different calculation engines for prenatal risk of DS were developed on the basis of the population parameters from the Serum, Urine and Ultrasound Screening Study, the Fetal Medicine Foundation, and a combination of both of them. These calculators were evaluated in 14,645 first trimester pregnant women, including 59 DS affected fetuses, comparing their performance with that obtained by our commercial software Elipse® (Perkin Elmer Life and Analytical Sciences, Turku, Finland). Advanced first trimester screening (AFS) strategy was also analyzed, and a hybrid strategy (FTS + AFS) was evaluated. Results: By selecting population parameters from the Serum, Urine and Ultrasound Screening Study, the detection rate increased from 76% (Elipse) to 86% with a small increase in the false positive rate (FPR), from 3.3% to 3.7%, respectively. DS screening performance significantly improved by using the hybrid strategy (AFS in pregnant women under 35 years and FTS in pregnant women over 35 years), with a 92% detection rate (FPR: 3.9%). Conclusions: In the present study, a new hybrid screening strategy has been proposed to achieve DS detection rates higher than 90%, for a convenient &lt;4% FPR. abstract_id: PUBMED:26266002 Retrospective study evaluating the performance of a first-trimester combined screening for trisomy 21 in an Italian unselected population. Objectives: to assess the performance of a combined first-trimester screening for trisomy 21 in an unselected Italian population referred to a specialized private center for prenatal medicine. Methods: a retrospective validation of first-trimester screening algorithms [risk calculation based on maternal age and nuchal translucency (NT) alone, maternal age and serum parameters (free β-hCG and PAPP-A) alone and a combination of both] for fetal aneuploidies evaluated in an unselected Italian population at Artemisia Fetal-Maternal Medical Centre in Rome. All measurements were performed between 11(+0) and 13(+6) weeks of gestation, between April 2007 and December 2008. Results: of 3,610 single fetuses included in the study, we had a complete follow-up on 2,984. Fourteen of 17 cases of trisomy 21 were detected when a cut-off of 1:300 was applied [detection rate (DR) 82.4%, 95% confidence interval (CI) 64.2-100; false-positive rate (FPR) 4.7%, 95% CI 3.9-5.4; false-negative rate (FNR) 17.6%, 95% CI 0-35.8%]. Conclusion: in our study population the detection rate for trisomy 21, using the combined risk calculation based on maternal age, fetal NT, maternal PAPP-A and free β-hCG levels, was superior to the application of either parameter alone. The algorithm has been validated for first trimester screening in the Italian population. abstract_id: PUBMED:25914417 Screening Performance and Costs of Different Strategies in Prenatal Screening for Trisomy 21. Objective: Cell-free fetal DNA (cffDNA) testing has opened new options in prenatal screening for trisomy 21. Due to the higher costs of cffDNA testing there is an ongoing debate on how to combine different screening strategies. Methods: For this study, a model-based approach was used to evaluate all births in Germany in 2012 together with the percentage of euploid and trisomic pregnancies. Detection rates (DR), false positive rates (FPR), the costs of different screening strategies for trisomy 21 and combinations of these strategies were compared. The number of fetuses with trisomy 21 at 12 + 0 weeks of gestation was estimated based on maternal age distribution. We examined the screening performance of a screening strategy based on maternal age, first trimester screening (FTS) and cffDNA testing as well as the combinations "maternal age and cffDNA" and "FTS and cffDNA". Results: In 2012 673 544 children were born. Median maternal age at delivery was 30.2 years (25th-75th quartile: 27.0-34.0). Based on maternal age distribution the expected number of fetuses with trisomy 21 at 12 weeks' gestation was 1788. Our study population therefore consisted of 675 332 pregnancies. Screening based only on maternal age or FTS or cffDNA resulted in detection rates of 63.3 %, 92.2 % and 99.0 % and false positive rates of 21.8 %, 8.0 % and 0.1 %, respectively. When maternal age was combined with cffDNA, cffDNA testing was only offered to women over a certain age; if a cut-off of 30 years was used, this resulted in a DR of 85.2 % and a FPR of 1.7 %. If primary screening consisted of FTS with cffDNA testing only done when the risk was between 1 : 10 and 1 : 1000, the detection rate was 96.7 % and the false positive rate was 1.2 %. Conclusion: In this model-based study we showed that prenatal screening for trisomy 21 can be improved even more by combining FTS and cffDNA. Further studies are necessary to examine whether these results can be reproduced in reality. abstract_id: PUBMED:23682007 Age independent first trimester screening for Down syndrome: improvement in test performance. Objective: The aim of this study was to compare screening performance for Down syndrome of the absolute risk (AR) method to the first trimester combined test (FCT) at different maternal ages. Methods: There was a retrospective analysis of 32,448 FCT. AR was defined as final risk divided by maternal age risk. Results: The likelihood of receiving a true prediction was comparable between both methods in all age groups. With the AR method, two extra Down syndrome cases were detected in women &lt;30 years, three cases were missed in women ≥36 years, and the likelihood of receiving a false prediction decreased overall (OR 0.82, CI 0.77-0.87; P &lt; 0.0001), in women aged 36-40 years (0.45, CI 0.41-0.51; P &lt; 0.0001), in women aged 41-45 years (0.18, CI 0.13-0.26; P &lt; 0.0001) and increased in women aged ≤25 years (2.12, CI 1.52-2.96; P &lt; 0.004). Conclusions: The AR method results in a significant decreased likelihood of receiving a false prediction with a comparable likelihood of receiving a true prediction. Thus, fewer invasive diagnostic tests will be performed. It will take away the misunderstanding about differences in screening performance for women of different ages. This might lead to a higher uptake of first trimester screening resulting in a more efficient screening policy. abstract_id: PUBMED:22433342 Practitioner-specific medians for nuchal translucency to improve first-trimester screening performance. Objective: Nuchal translucency medians are used to convert nuchal translucency measurements to multiple of the median values for use in risk assessment. There has been much debate about the use of practitioner-specific medians instead of population-based medians to correct for practitioner bias. This study examined data from the California Prenatal Screening Program to evaluate the effect of individual medians on screening performance. Methods: For this retrospective study, we identified first-trimester specimens in the California Prenatal Screening Program database for which screening results were calculated using a practitioner-specific median (n=55,286) and recalculated risk assessment using a population-based median. We looked at positive rates for Down syndrome and Trisomy 18 as well as overall screening results. We also looked at the recalculated risks for cases in which there has been a positive diagnosis of a chromosomal abnormality to determine if cases would have been missed if practitioner-specific medians had not been used. Results: Nuchal translucency multiples of the median would be lower for the majority of nuchal translucency practitioners if a population-based median were used instead of practitioner-specific medians. Outcomes data from the California Prenatal Screening Program's database indicate that 14 positively diagnosed cases would have been missed by the screening program if population-based medians had been used. Conclusion: The use of practitioner-specific medians corrects for practitioner bias in measuring nuchal translucency. Practitioner-specific medians improve screening performance and help meet detection rate goals. Level Of Evidence: II. abstract_id: PUBMED:28099747 Observational study comparing the performance of first-trimester screening protocols for detecting trisomy 21 in a North Indian population. Objective: To evaluate first-trimester screening protocols for detecting trisomy 21 in an Indian population. Methods: The present prospective study collected data from women with singleton pregnancies and a crown-to-rump length of 45-84 mm who presented at the fetal medicine unit of a tertiary care center in North India between June 1, 2006, and December 31, 2015, for combined first-trimester screening. Maternal age, nuchal translucency, nasal bone, and maternal serum levels of free beta human chorionic gonadotropin and pregnancy-associated plasma protein A were assessed for calculating the risk of trisomy 21. Tricuspid regurgitation and qualitative analysis of ductus venosus data were available from June 2010, and were included where available. Trisomy-21 detection rates were calculated for various screening protocols and were compared. Results: There were 4523 women screened and 24 records of trisomy 21. Combined screening with maternal age, nuchal translucency, nasal bone, tricuspid regurgitation, and ductus venosus demonstrated optimal detection and false-positive rates of 93.8% and 1.9%, respectively. Screening using only maternal age yielded a detection rate of 37.5%; using fixed nuchal translucency cut-off values of 2.5 and 3 mm resulted in detection rates of 66.7% and 37.5%, respectively. Conclusion: Combined first-trimester screening performed well in an Indian population; combining maternal age, nuchal translucency, nasal bone, ductus venosus, and tricuspid regurgitation yielded the most accurate screening. Answer: To achieve further improvement in the first-trimester Down syndrome screening performance in the Dutch population, several strategies can be considered based on the findings from various studies: 1. Enhancing Nuchal Translucency (NT) Measurement: The performance of the first-trimester test has improved over the years, with a better performance of NT measurement being the main reason. However, NT assessment should be further improved to contribute to a higher detection rate (DR) (PUBMED:19294632). 2. Adjusting Biochemical Parameters: A better setting of the medians for biochemical parameters, such as pregnancy-associated plasma protein A (PAPP-A) and the free beta subunit of human chorionic gonadotrophin (f beta-hCG), may contribute to a higher DR (PUBMED:19294632). 3. Implementing Reflex cfDNA Screening: Reflex cfDNA screening using Rolling Circle Amplification (RCA)/Imaging solution provided the lowest false positive rate and cost, reducing the number of women requiring genetic counseling and diagnostic testing, and providing more timely results (PUBMED:33228595). 4. Utilizing Decision-Tree Models: A novel decision-tree model based on serum markers revealed a better predictive value to achieve high sensitivity and specificity of first-trimester Down syndrome screening in the Iranian population, which could be adapted for the Dutch population (PUBMED:31223317). 5. Hybrid Screening Strategy: A hybrid screening strategy (advanced first trimester screening (AFS) in pregnant women under 35 years and first trimester screening (FTS) in pregnant women over 35 years) has been proposed to achieve DS detection rates higher than 90%, for a convenient <4% false positive rate (FPR) (PUBMED:22430720). 6. Practitioner-Specific Medians: The use of practitioner-specific medians for NT can correct for practitioner bias in measuring NT and improve screening performance, helping meet detection rate goals (PUBMED:22433342). 7. Comprehensive Combined Screening: Combining maternal age, NT, nasal bone, ductus venosus, and tricuspid regurgitation yielded the most accurate screening, with optimal detection and false-positive rates (PUBMED:28099747). By implementing these strategies, the Dutch population could see further improvements in the performance of first-trimester Down syndrome screening.
Instruction: Reliability of knee joint range of motion and circumference measurements after total knee arthroplasty: does tester experience matter? Abstracts: abstract_id: PUBMED:20024893 Reliability of knee joint range of motion and circumference measurements after total knee arthroplasty: does tester experience matter? Background And Purpose: Two of the most utilized outcome measures to assess knee joint range of motion (ROM) and intra-articular effusion are goniometry and circumference, respectively. Neither goniometry nor circumference of the knee joint have been examined for both intra-tester and inter-tester in patients with total knee arthroplasty (TKA). The purpose of this study was to determine the intra-tester and inter-tester reliability of active and passive knee joint ROM and circumference in patients with TKA when administered by physiotherapists (testers) with different clinical experience. Method: The design was an intra-tester, inter-tester and intra-day reliability study. Nineteen outpatients (10 females) having received a TKA were examined by an inexperienced and an experienced physiotherapist. Following a standardized protocol, active and passive knee joint ROM and circumference measurements were obtained using a universal goniometer and a tape measure, respectively. To establish reliability, intraclass correlation coefficients (ICC(2,1)) and smallest real difference (SRD) were calculated. Results: The knee joint ROM and circumference measurements were generally reliable (ICC &gt; 0.8) within and between physiotherapists (except passive knee extension). Changes in knee joint ROM of more than 6.6 degrees and 10 degrees (except active knee flexion) and knee joint circumference of more than 1.0 cm and 1.63 cm represent a real clinical improvement (SRD) or deterioration for a single individual within and between physiotherapists, respectively. Generally, the experienced tester recorded larger knee joint ROM and lower circumference values than that of the inexperienced tester. Conclusions: In clinical practice, we suggest that repeated knee goniometric and circumferential measurements should be recorded by the same physiotherapist in individual patients with TKA. Tester experience appears not to influence the degree of reliability. abstract_id: PUBMED:19374303 Reliability of digital compass goniometer in knee joint range of motion measurement. Objective: To compare the reliability of range of motion measurement in the knee joint using a digital compass goniometer combined with inclinometer with standard range of motion measurement from roentgenographic picture. Material And Method: Range of flexion and extension of the knee joint in volunteer participants was measured by the newly developed digital compass goniometer combined with inclinometer (DCG). The results were compared with range of knee joint motion obtained from standard roentgenographic picture by intraclass correlation coefficient. Results: Range of motion of knee joint measured by DCG correlated very well with the data obtained from standard knee roentgenographic picture. The intraclass correlation coefficient equals 0.973. Conclusion: The digital compass goniometer was a reliable tool to measure knee joint range of motion in flexion and extension plane. abstract_id: PUBMED:32660574 Does contralateral knee range of motion predict postoperative knee range of motion after total knee arthroplasty? Purpose: The aim of this study was to determine if contralateral knee range of motion is associated with postoperative range of motion in the operative knee after total knee arthroplasty. Methods: Contralateral (nonoperative) knee range of motion was compared to postoperative knee range of motion after total knee arthroplasty using linear regression models in 59 patients who had undergone primary total knee arthroplasty with a minimum of 4 months postoperative follow-up data (range 4-13 months). Results: A strong linear relationship was observed between contralateral knee ranges of motion of 115° or greater and postoperative knee ranges of motion after total knee arthroplasty (slope 0.93, 95% CI 0.58-1.29, P &lt; 0.0001), with a mean difference of -7.44° (95% CI -10.3 to -4.63, P &lt; 0.0001). However, there was no association between contralateral knee range of motion and postoperative knee range of motion when contralateral knee range of motion was less than 115°. Conclusion: Contralateral knee range of motion of 115° or greater correlates linearly with postoperative range of motion after total knee arthroplasty, and thus may be predictive in such cases. abstract_id: PUBMED:33107154 Virtual assessments of knee and wrist joint range motion have comparable reliability with face-to-face assessments. Background: There has been increased usage of virtual telerehabilitation approach during the COVID-19 pandemic. It is crucial to establish reliability of conducting virtual assessments for musculoskeletal conditions. Objectives: This research determined the intra- and interrater reliability of measuring knee and wrist range of motion (ROM) assessed virtually and obtained face-to-face (F2F) using a goniometer (UG) for a student and an experienced examiner. Method: Knee and wrist joint ROM for 54 healthy participants was assessed virtually and F2F by a student examiner and an experienced physical therapist. Intra- (virtual vs. UG assessment) and inter-rater (virtual or UG assessment between examiners) reliabilities were examined for all ROM using Intraclass correlation coefficient (ICC). The ICC values were considered good (&gt;0.75) or excellent (&gt;0.90). Bland and Altman plots determined the limits of agreement (LOA) in assessing joint ROM. Results/findings: Student examiner had good reliability in virtually estimating knee extension (ICC = 0.79), wrist flexion (ICC = 0.82) and wrist extension (ICC = 0.78), whereas the experienced examiner had excellent reliability in virtually estimating all knee and wrist ROM (ICC &gt; 0.90). The LOA in assessing knee and wrist ROM for the student examiner were wider indicating higher disagreement between virtual and UG-obtained ROM in some cases. Conclusions: Virtual estimation of knee and wrist ROM is a reliable technique, however experience level impacts the precision of measurement. It is suggested that the same examiner conducts all the ROM assessments throughout the clinical course of a patient receiving virtual interventions for knee or wrist pathologies. abstract_id: PUBMED:37259662 The acute effect of roller massager on knee joint range of motion and muscle pain in older adults with total knee arthroplasty. Objectives: Older adults with total knee arthroplasty (TKA) have a limited range of knee joint motion due to pain and stiffness. A roller massager (RM) has recently been suggested to decrease pain and increase joint range of motion (ROM). Therefore, this study aimed to investigate the acute effect of RM intervention on pain and knee joint ROM in older adults with TKA in the second postoperative week. Methods: The participants were 23 patients (76.3±5.4 years) who had undergone TKA for knee OA. The degree of pain during the ROM measurements was measured using the visual analog scale before and immediately after the RM intervention. The RM intervention was performed on the thigh's lateral, central, and medial parts for three sets of 60 s using an RM. Results: RM intervention could significantly increase knee flexion ROM (p&lt;0.01, d=0.41, Δchange: 4.1±3.2°) and decrease pain during the knee ROM measurements (p&lt;0.01, d=-0.53, Δchange: -11.9±21.0). Conclusions: The three sets of 60-s RM intervention significantly increased knee flexion ROM and reduced pain in older adults with TKA in the second postoperative week. These results show that RM intervention is an effective tool for treating stiffness and pain after TKA. abstract_id: PUBMED:34776193 Assessing the knee flexion range of motion after total knee arthroplasty: Technology versus senses. Introduction: Following total knee arthroplasty surgery, attention should be paid to post-operative knee range of motion to achieve daily activities. Goniometer assessment is widely used to assess the range of motion in the post-operative period. This study aimed to determine the inter-rater ability of a smartphone application and visual estimation of the knee joint after total knee arthroplasty among different professions that commonly work together and compare whether any method is superior to another. Method: Range of motion measurements was performed by four clinicians as two physiotherapists and two orthopedic fellows. They utilized the Goniometer Reports application for smartphones, universal goniometer, and visual estimation to measure angles of knees which was operated. A two-way mixed model of intra-class correlation coefficient (ICC) with a 95% confidence level was used to assess inter-rater reliability. Results: Thirteen patients (11 female) and 20 knees (10 right) were assessed. The ICCs were found excellent both for between methods and between raters. Conclusion: Our results show that technology seems a more accurate way to determine the knee range of motion after knee arthroplasty compared to senses. However, in lack of technological resources or time, or to avoid possible infection, visual estimation also could provide useful information. abstract_id: PUBMED:34632852 The effect of patient positioning on the accuracy and reliability of assessment of knee range of motion over a telemedicine platform. Introduction: Diagnostic accuracy is one of the key considerations of telemedicine usage in orthopedic surgery. The aim of this study was to determine the optimal patient positioning to achieve accurate and reliable visual estimation of the knee joint range of motion over a digital platform for telemedicine. Methods: A single volunteer was recruited to perform a total of 120 discrete and random knee range of motion angles in three patient positions: sitting, standing and supine. The patient image was broadcast over a digital platform to six raters. The raters recorded their visual estimation of each discrete knee flexion angle independently. After each discrete knee flexion angle, a physical goniometer was used to obtain the actual flexion angle of the knee. Results: A total of 120 discrete measurements (40 measurements in the sitting, standing, and supine positions each) were recorded by each of the six raters. The supine position resulted in the highest intraclass correlation of 0.97 (95% confidence interval: 0.98, 0.99). All three patient positions achieved low absolute difference between the goniometer and the raters with 5.6 degrees (95% limits of agreement: -21.0, 9.8) in sitting, 2.7 degrees (95% limits of agreement: -10.1, 15.4), and 1.2 degrees (95% limits of agreement: -9.8, 12.3) in the supine position. The supine position had the highest accuracy and reliability. Discussion: Visual estimation of the knee joint range of motion over telemedicine is clinically accurate and reliable. Patients should be assessed in a supine position to obtain the highest accuracy and reliability for visual estimation of the knee joint range of motion during telemedicine. abstract_id: PUBMED:34539072 Factors affecting early knee-flexion range of motion after total knee arthroplasty. [Purpose] To investigate the factors affecting the knee-flexion range of motion in the early period after total knee arthroplasty. [Participants and Methods] Ninety-nine patients who had undergone total knee arthroplasty at our hospital between 2016 and 2019 were allocated into two groups based on the presence of a 110° knee-flexion range of motion at 14 days post-surgery. From medical records, we extracted data for the participants' basic attributes and preoperative/postoperative physical function (knee-flexion range of motion, Timed Up &amp; Go Test results, resting/walking pain according to a numerical rating scale, and knee-extension muscle strength). Postoperative physical function was measured 14 days post-surgery. [Results] Preoperative knee-flexion range of motion, preoperative femorotibial angle, postoperative knee-extensor strength, and postoperative Timed Up &amp; Go Test value differed significantly as factors related to achieving a 110° knee-flexion range of motion. Through further statistical analyses, we selected the preoperative knee-flexion range of motion, preoperative femorotibial angle, preoperative Timed Up &amp; Go Test result, and postoperative knee-extension strength as factors affecting the knee-flexion range of motion at 14 days post-surgery. [Conclusion] Preoperative knee-flexion range of motion, preoperative femorotibial angle, preoperative Timed Up &amp; Go Test result, and postoperative knee-extension strength influence knee-flexion range of motion at 14 days after total knee arthroplasty, and our findings indicate the effectiveness of active physiotherapy interventions. abstract_id: PUBMED:31282401 Interleukin-6 serum concentration in the elderly undergoing total knee arthroplasty: A potential predictor of the early postoperative knee joint range of motion. Purpose: To analyze the relevance between serum IL-6 with the early postoperative (post-op) knee joint range of motion (ROM) in the elderly undergoing total knee arthroplasty (TKA). To evaluate the factors affecting early post-op knee joint ROM in the elderly after TKA. Methods: One hundred and forty-three patients undergoing TKA were included. The patients' general data, perioperative knee joint ROM and serum IL-6, pre-op HSS, WOMAC, SF-36 scores and Wells scores were collected. Repeated data variance analyses were performed to analyze the relevance between serum IL-6 with post-op knee joint ROM. Correlative regression and regression analysis were performed to evaluate the multiple factors affecting knee joint ROM in the elderly undergoing TKA. Results: The 2-week-post-op (2w-post-op) knee joint ROM was significantly higher in the low IL-6 serum concentration group (&lt; 46.45 ng/L) than in the high group (⩾ 46.45 ng/L) (97.78 ± 7.44∘VS 91.85 ± 7.42∘, F= 16.860, P= 0.000). A significant negative correlation was found between 1d-post-op serum IL-6 concentration with 2w-post-op knee joint ROM (R=-0.513, P= 0.000). The factors influencing the 2w-post-op knee joint ROM included pre-op HSS scores, pre-op WOMAC scores and whether to retain PCL (posterior cruciate ligament) (R2= 0.451, P= 0.009). Conclusion: A higher 1d-post-op IL-6 serum concentration indicated a lower knee joint ROM in the elderly two weeks after TKA. abstract_id: PUBMED:28241434 Data Collection and Analysis Using Wearable Sensors for Monitoring Knee Range of Motion after Total Knee Arthroplasty. Total knee arthroplasty (TKA) is the most common treatment for degenerative osteoarthritis of that articulation. However, either in rehabilitation clinics or in hospital wards, the knee range of motion (ROM) can currently only be assessed using a goniometer. In order to provide continuous and objective measurements of knee ROM, we propose the use of wearable inertial sensors to record the knee ROM during the recovery progress. Digitalized and objective data can assist the surgeons to control the recovery status and flexibly adjust rehabilitation programs during the early acute inpatient stage. The more knee flexion ROM regained during the early inpatient period, the better the long-term knee recovery will be and the sooner early discharge can be achieved. The results of this work show that the proposed wearable sensor approach can provide an alternative for continuous monitoring and objective assessment of knee ROM recovery progress for TKA patients compared to the traditional goniometer measurements. Answer: The reliability of knee joint range of motion (ROM) and circumference measurements after total knee arthroplasty (TKA) has been investigated in various studies, and the impact of tester experience on these measurements has been considered. One study found that knee joint ROM and circumference measurements were generally reliable within and between physiotherapists, regardless of their experience level (ICC > 0.8), except for passive knee extension. It was suggested that repeated measurements should be recorded by the same physiotherapist for individual patients with TKA, and tester experience did not significantly influence the degree of reliability (PUBMED:20024893). Another study focused on the reliability of virtual assessments of knee joint ROM and found that while virtual estimation of knee and wrist ROM is reliable, the experience level of the examiner impacts the precision of measurement. The experienced examiner had excellent reliability in virtually estimating all knee and wrist ROM (ICC > 0.90), while the student examiner had good reliability. It was recommended that the same examiner conduct all ROM assessments throughout the clinical course of a patient receiving virtual interventions for knee or wrist pathologies (PUBMED:33107154). Additionally, a study assessing the knee flexion ROM after TKA using different methods found that technology, such as a smartphone application, was more accurate compared to visual estimation. However, visual estimation could still provide useful information when technological resources are not available (PUBMED:34776193). In summary, while tester experience does not seem to significantly influence the reliability of knee joint ROM and circumference measurements after TKA, the precision of virtual assessments and the use of technology can be affected by the examiner's experience level. Consistency in the examiner conducting the measurements is recommended for optimal reliability.
Instruction: Do corticosteroids reduce the mortality of influenza A (H1N1) infection? Abstracts: abstract_id: PUBMED:25888424 Do corticosteroids reduce the mortality of influenza A (H1N1) infection? A meta-analysis. Introduction: Corticosteroids are used empirically in influenza A (H1N1) treatment despite lack of clear evidence for effective treatment. This study aims to assess the efficacy of corticosteroids treatment for H1N1 infection. Methods: Systematic review and meta-analysis were used to estimate the efficacy of corticosteroids for the prevention of mortality in H1N1 infection. Databases searched included MEDLINE, EMBASE, PubMed, Cochrane Central Register of Controlled Clinical Trials and so on, and bibliographies of retrieved articles, from April 2009 to October 2014. We included both cohort studies and case-control studies reported in English or Chinese that compared treatment effects between corticosteroids and non-corticosteroids therapy in inpatients with H1N1 virus infection. Cohort studies employed mortality as outcome, and case-control studies employed deaths as cases and survivors as controls; both were assessed in this meta-analysis. Results: In total twenty-three eligible studies were included. Both cohort studies (nine studies, n = 1,405) and case-control studies (14 studies, n = 4,700) showed a similar trend toward increased mortality (cohort studies relative risk was 1.85 with 95% confidence interval (CI) 1.46 to 2.33; case-control studies odds ratio was 4.22 with 95% CI 3.10 to 5.76). The results from both subgroup analyses and sensitive analyses were consistent with each other, showing that steroid treatment is associated with mortality. However, considering the fact that corticosteroids were tend to be used in sickest case-patients and heterogeneity was observed between studies, we cannot make a solid conclusion. Conclusions: Available evidence did not support the use of corticosteroids as standard care for patients with severe influenza. We conclude that further research is required. abstract_id: PUBMED:28403799 Descriptive Analysis of Mortality Predictors in H1n1 Influenza in South Indian Patients. Background: H1N1 (hemagglutinin-H-neuroaminidase-N) influenza infection is associated with high morbidity and mortality because of associated complications and related factors. Predictors of mortality in H1N1 patients are studied with very few without seasonal/pandemic declaration. This study was carried out to describe the clinical features, complications and different risk factors that affect the outcome in the patients with confirmed H1N1influenza infection. Methods: A retrospective study was done in Kasturba Medical College Hospital, Manipal, India by analyzing the medical records of 141 patients admitted from January, 2011 to June, 2015. Results: Of the 141 patients in the study, 51.1% of the patients were female with a mean age of 32±16.2 years. Fever with headache was observed in 92.9% patients while cough in 78.7% patients and breathlessness in 54.6% patients. On the basis of disease severity, 53.2% of the patients were put on mechanical ventilation. For all the patients, treatment for influenza management began with oseltemivir. Diuretics, antianxiety and corticosteroids were given as supportive and symptomatic care which contributed to high mortality in hospitalized patients. Mean hospitalization period was 8.5 days. During the hospitalization, patients developed different complications i.e. 31.20% patients developed respiratory tract infections, while 17.7% patients developed ARDS and 14.4% patients developed sepsis. The mortality rate of this study population was found to be 29.1 %. Conclusion: It was observed that low oxygen saturation during admission, high blood urea level, use of diuretics, corticosteroids, anti-anxiety drugs and complications like ARDS, sepsis influence the mortality rate of patients with H1N1 infection. abstract_id: PUBMED:20799934 Hospitalized adult patients with 2009 influenza A(H1N1) in Beijing, China: risk factors for hospital mortality. Background: In April 2009, the pandemic influenza A(H1N1) virus emerged and spread globally. The objective of this study was to describe the independent risk factors for hospital mortality and the treatment effect of corticosteroids among patients with 2009 influenza A(H1N1) infection. Methods: We retrospectively obtained clinical data of 155 adult patients with confirmed infection of 2009 influenza A(H1N1) in 23 hospitals in Beijing, China from October 1 to December 23, 2009. Risk factors for hospital mortality were identified with multivariate logistic regression analysis. Results: Among the 155 patients, 90 (58.1%) were male, and mean age was 43.0 ± 18.6 years, and comorbidities were present in 81 (52.3%) patients. The most common organ dysfunctions included acute respiratory failure, altered mental status, septic shock, and acute renal failure. Oseltamivir was initiated in 125 patients (80.6%), only 16 patients received antiviral therapy within 48 hours after symptom onset. Fifty-two patients (33.5%) were treated with systemic corticosteroids, with a median daily dose of 80 mg. Twenty-seven patients (17.4%) died during hospital stay. Diabetes [odds ratio (OR) 8.830, 95% confidence interval [CI] 2.041 to 38.201, p = 0.004) and lactate dehydrogenase (LDH) level (OR 1.240, 95% CI 1.025 to 1.500, p = 0.027) were independent risk factors of hospital death, as were septic shock and altered mental status. Corticosteroids use was associated with a trend toward higher hospital mortality (OR 3.668, 95% CI 0.987 to 13.640, p = 0.052). Conclusions: Hospitalized patients with 2009 H1N1 influenza had relative poor outcome. The risk factors at hospitalization may help clinicians to identify the high-risk patients. In addition, corticosteroids use should not be regarded as routine pharmacologic therapy. abstract_id: PUBMED:24339638 Predictors of mortality and length of stay in hospitalized cases of 2009 influenza A (H1N1): Experiences of a tertiary care center. Aim: To study the clinical characteristics and outcome of admitted patients of H1N1 (hemagglutinin -H neuraminidase -N) influenza in a tertiary level hospital, from Oct 2009 to Dec 2010. Materials And Methods: A retrospective analysis of 77 confirmed patients admitted in this unit with H1N1 infection. Results: Of the 77 patients studied, 33 (42.8%) were female. Mean age was 40.88 ± 13.45 years, majority (70.13%) being less than 50 years. Thirty eight (49.3%) patients had at least one co-morbidity, diabetes mellitus being the most common (n = 15, 19.5%). The most common presenting symptom was fever in 75 (97.4%) patients, cough in 67 (87%) and dyspnoea in 59 (76.6%) patients. At admission, mean PaO2/FiO2 ratio was 213.16 ± 132.75 mmHg (n = 60) while mean PaCO2 was 40.14 ± 14.86 mmHg. One or more organ failure was present in 45 (58.4%) patients. Nineteen (24.60%) patients required invasive mechanical ventilation. Circulatory failure was observed in 10 (13%) patients while 2 patients required hemodialysis. Overall, 13% mortality (n = 10) was observed. PaCO2 level at admission (OR 1.093; 95% confidence interval: 1.002-1.193; P = 0.044) and number of organ failure (OR 8.089; 95% confidence interval: 1.133-57.778; P = 0.037) were identified as independent risk- factors for mortality. Conclusion: Increased duration of dyspnoea prior to admission, pneumonia, low PaO2/FiO2 ratio at admission and 24 hours later, higher PaCO2 values on admission, higher O2 requirement, number of organ failures and use of corticosteroids and delay in specialized treatment were associated with a poorer outcome. abstract_id: PUBMED:25147700 Prevalence and Mortality of Influenza A (H1N1) Virus Among Patients With Acute Respiratory Infection in Southwest Iran. Background: Khuzestan and other parts of Iran were involved with Influenza A (H1N1) pandemic in 2009. Objectives: The aim of this study was to describe the prevalence and mortality of H1N1 in Behbahan, a city in Khuzestan, Southwest of Iran. Patients And Methods: The study population consisted of cases of influenza, hospitalized or referred to the city health centers. Diagnosis of H1N1 virus infection was based on rapid antigen testing (RT-PCR) of nasopharyngeal swabs. Data extracted from epidemiological survey forms, including demographic and clinical characteristics, laboratory results, risk factors and underlying diseases, medications, and treatment outcomes of patients were analyzed using SPSS 16 software by using Pearson chi-square test. Results: From a total of 318 patients, 180 (56.6%) were male and 138 (43.4%) female. Total number of patients with positive H1N1 tests was 167 (52.5%) with a male: female ratio of 1.2:1. Of total 318 admitted patients, 173 (96.1%) males and 135 (97.8%) females recovered and 10 people (7 (3.9%) males and 3 (2.3%) females) died, among which, three had positive test results for H1N1. The most prevalent signs and symptoms were: fever in 308 (96%) patients, cough in 278 (86.6%), lower respiratory symptoms in 208 (64.8%), gastrointestinal symptoms in 90 (28%), respiratory distress in 45 (13.7), and flu-like symptoms in 65 (20.2%) patients. Conclusions: Prevalence rate of H1N1 infection in the study region was higher compared to other part of Iran; but, close to the expected rate. The H1N1-associated mortality rate was lower than the reported rates in Iran and other parts of the world. abstract_id: PUBMED:24216158 Maternal mortality due to pandemic influenza A H1N1 2009 virus in Colombia. Aims: The 2009 H1N1 pandemic illustrated the higher morbidity and mortality from viral infections in peripartum women. We describe clinical features of women who recently died of H1N1 in Colombia. Methods: This is a case series study that was gathered through a retrospective record review of all maternal H1N1 deaths in the country. The national mortality database of confirmed mortality from H1N1 in pregnancy and up to 42 days after delivery was reviewed during the H1N1 season in 2009. Women with H1N1 infections were confirmed by the laboratory of virology. Demographic, clinical, and laboratory data were reviewed. Statistical analyses were performed and median values of non-parametric data were reported with inter-quartile range (IQR). Results: A total of 23 H1N1 maternal deaths were identified. Eighty-three percent occurred in the third trimester. None of the mothers who died had received influenza vaccination. The median time from symptom onset to the initiation of antiviral treatment was 8.8 days (IQR 5.8-9.8). Five fatalities did not receive any anti-viral therapy. Median PaO2/FiO2 on day 1 was 80 (IQR, 60-98.5). All patients required inotropic support and mechanical ventilation with barotrauma-related complications of mechanical ventilation occurring in 35% of patients. Conclusion: In Colombia, none of the women suffering H1N1-related maternal deaths had received vaccination against the disease and most had delayed or had no anti-viral therapy. Given the lack of evidence-based clinical predictors to identify women who are prone to die from H1N1 in pregnancy, following international guidelines for vaccination and initiation of antiviral therapy in suspected cases would likely improve outcomes in developing countries. abstract_id: PUBMED:21077745 Novel influenza A (H1N1) in patients with hematologic disease. Patients with hematologic disease are likely to be at increased risk for infection with influenza. We retrospectively analyzed 11 cases of patients with hematologic disease who were infected with pandemic H1N1 virus in our department, including their clinical manifestations, laboratory and imaging findings, outcomes of antiviral therapy, and factors associated with mortality. Notably, nine patients had lower respiratory tract disease. Five patients progressed to respiratory failure and eventually died, despite treatment with antivirals and/or corticosteroids and/or mechanical ventilation. We concluded that H1N1 2009 infection was associated with a severe course and high rate of mortality in patients with hematologic disease, and early diagnosis and antiviral treatment were important to reduce the rate of severe complications and mortality. abstract_id: PUBMED:32637295 Clinical characteristics and outcomes of patients with H1N1 influenza pneumonia admitted at a tertiary care hospital in Karachi, Pakistan. Introduction: Influenza viruses specifically, A and B mainly contribute to seasonal outbreaks that occur globally. However, due to limited diagnostics for influenza there is little data regarding clinical outcomes of patients with H1N1 pneumonia in our region. Our objective was to determine the clinical characteristics and outcomes of patients hospitalized with H1N1 pneumonia at a tertiary care facility in Karachi, Pakistan. Methods: A retrospective study of adult patients admitted with influenza pneumonia from November 2017 to February 2018 at a tertiary care hospital in Karachi, Pakistan. Patient characteristics were compared between influenza A H1N1 and other types of influenza using multivariable logistic regression analysis and subgroup analysis for factors associated with mortality in H1N1 Pneumonia was performed. Results: Out of 497 adult patients with community acquired pneumonia (CAP), 172 fulfilled the criteria for Influenza like illness (ILI). 88 patients had PCR confirmed Influenza pneumonia of whom n = 57 (65%) had Influenza A H1N1. The mean age of patients 53.5 years (SD: 17.3) and 60% were male. The overall mortality from Influenza in this study was 15.9% (n = 14); out of these 11 (78.5%) had Influenza A H1N1. Multivariable analysis showed that the increase in length of hospital admission was significantly associated with H1N1 Influenza A infection (OR: 1.47 CI: 1.2-1.8). Factors associated with mortality showed that presence of ARDS, Septic shock and multi-organ failure was highly significantly associated with death (p-value &lt; 0.001) along with deranged liver function tests (p-value 0.01) and presence of nosocomial infection (p-value 0.027). Conclusion: Influenza A H1N1 is associated with greater length of stay compared with infection due to other types of Influenza and mortality in H1N1 Pneumonia was found to be associated with presence of nosocomial infection among several other factors which may have implications given higher rates in a low-middle income country. abstract_id: PUBMED:23826795 Pandemic influenza A(H1N1)pdm09: an unrecognized cause of mortality in children in Pakistan. The role of influenza virus as a cause of child mortality in South Asia is under-recognized. We aimed to determine the incidence and case fatality rate of influenza A(H1N1)pdm09 infections in hospitalized children in Karachi, Pakistan. Children less than 5 y old admitted with respiratory illnesses to the Aga Khan University Hospital, Karachi, from 17 August 2009 to 16 September 2011, were tested for influenza A(H1N1)pdm09 using a real-time reverse transcriptase polymerase chain reaction. Out of 2650 children less than 5 y old admitted with a respiratory illness during the study period, 812 (31%) were enrolled. Influenza A(H1N1)pdm09 virus was detected in 27 (3.3%) children. There were 4 deaths in children who tested positive for influenza A(H1N1)pdm09 (case fatality rate of 15%). Children with influenza A(H1N1)pdm09 were 5 times more likely to be admitted or transferred to the intensive care unit, 5.5 times more likely to be intubated, and 12.9 times more likely to die as compared to children testing negative for influenza A(H1N1)pdm09. abstract_id: PUBMED:30216935 Clinical characteristics, evolution, and treatment-related risk factors for mortality among immunosuppressed patients with influenza A (H1N1) virus admitted to the intensive care unit. Purpose: Information about immunocompromised patients infected with influenza A (H1N1) virus and requiring admission to the ICU is lacking. Our objective was to know the clinical characteristics of these patients and to identify treatment-related variables associated with mortality. Material And Methods: A prospective multicenter observational cohort study was based on data from a Spanish registry (2009-2015) collected by 148 Spanish ICUs. All patients admitted to the ICU with the diagnosis of influenza A (H1N1) virus infection were included. Immunosuppression was clearly defined. Factors associated with mortality in immunocompromised patients were assessed by conventional logistic regression analysis and by a propensity score (PS) adjusted-multivariable analysis. Results: Of 1899 patients with influenza A (H1N1) infection, 238 (12.5%) were classified as immunocompromised. Mortality was significantly higher in immunosuppressed patients. Four variables independently associated with mortality were identified: SOFA score, need of vasopressor, use of corticosteroids, and acute renal failure, AKIN 3 stage. In the PS-adjusted model, corticosteroid therapy remained as an independent factor associated with increased mortality (OR 2.25;95%CI, 1.15-4.38;p = 0.017). In the subgroup of hematological patients (n = 141), corticosteroid therapy was also associated with increased mortality (OR 3.12; 95%CI, 1.32-7.41; p = 0.010). Conclusion: Immunocompromised individuals with influenza A (H1N1) admitted to the ICU have a poor outcome. In this population, the use of corticosteroids is strongly discouraged. Answer: The available evidence from the provided abstracts does not support the use of corticosteroids as standard care for patients with severe influenza A (H1N1) infection, and in fact, suggests that corticosteroids may be associated with increased mortality. A meta-analysis that included both cohort studies and case-control studies found that corticosteroid treatment is associated with increased mortality in patients with H1N1 infection. The relative risk from cohort studies was 1.85, and the odds ratio from case-control studies was 4.22, indicating a trend toward increased mortality with corticosteroid use. However, the authors noted that corticosteroids tended to be used in the sickest patients, and there was heterogeneity between studies, which means a solid conclusion could not be made (PUBMED:25888424). Another study from South India observed that the use of corticosteroids, among other factors, influenced the mortality rate of patients with H1N1 infection (PUBMED:28403799). Similarly, a study in Beijing found that corticosteroid use was associated with a trend toward higher hospital mortality (PUBMED:20799934). A tertiary care center experience also associated the use of corticosteroids with poorer outcomes in hospitalized cases of H1N1 (PUBMED:24339638). Furthermore, a multicenter observational cohort study in Spain found that in immunocompromised individuals with influenza A (H1N1) admitted to the ICU, the use of corticosteroids was strongly discouraged as it was associated with increased mortality (PUBMED:30216935). In summary, the evidence from these studies suggests that corticosteroids do not reduce mortality in influenza A (H1N1) infection and may be harmful, leading to higher mortality rates. Therefore, corticosteroids should not be routinely used for treating H1N1 influenza, especially in severe cases requiring ICU admission.
Instruction: Methods of surgical therapy in pulmonary hydatid disease: is capitonnage advantageous? Abstracts: abstract_id: PUBMED:16336395 Methods of surgical therapy in pulmonary hydatid disease: is capitonnage advantageous? Background: Hydatid disease of the lungs is still a serious health problem for some Mediterranean countries. The best surgical therapy for the treatment of this disease is still unclear. In this clinical retrospective study, we aimed to investigate whether capitonnage is an effective therapy method for a pulmonary hydatid cyst or not. Methods: Between 1990 and 2004, 89 patients (49 male and 40 female) each with a pulmonary hydatid cyst were treated surgically. The patients were divided into two groups based on whether capitonnage was carried out (group 1, n = 44) or not (group 2, n = 45). Cystotomy and the closure of the bronchial openings were carried out in both groups. The results were assessed in two groups during a 1 year follow-up period. The groups were comparable with regards to their clinical, operative and demographic characteristics. Results: There was no mortality, and no significant difference in hospitalisation time between the two groups. It was 8.2 +/- 3.2 days for group 1 and 8 +/- 3.1 days for group 2 (P = 0.89). The most serious complications were restricted bronchopleural fistula (only one patient in group 2), empyema (only one patient in group 1 and two patients in group 2), and prolonged air leak (five patients in group 1 and six patients in group 2). These results were not valuable statistically (P = 1.00, P = 1.00 and P = 0.78, respectively). Conclusions: We conclude that it is not an advantage to carry out capitonnage when treating pulmonary hydatid cysts. abstract_id: PUBMED:8035372 Surgical therapy for gestational trophoblastic disease. Current successful therapy for women with gestational trophoblastic disease (GTD) is based upon the use of sensitive assays for human chorionic gonadotropin to monitor disease status and individualized application of chemotherapy based upon recognized risk factors. However, surgical therapy continues to play a role in the management of GTD. Suction dilation and curettage and hysterectomy are used for primary therapy for hydatidiform mole. Hysterectomy may be used as an adjunct to the primary management of selected patients with malignant GTD. Other extirpative procedures, such as thoracotomy, may be integrated into the management of drug-resistant disease. abstract_id: PUBMED:2822920 Role of surgical therapy and radiotherapy in gestational trophoblastic disease. Surgical procedures and radiotherapy continue to play a significant role in the management of gestational trophoblastic disease (GTD) despite continuing advances in chemotherapy. Suction curettage and hysterectomy are preferred techniques for evacuation of hydatidiform mole. Although primary chemotherapy alone is usually successful in women with nonmetastatic or good-prognosis metastatic GTD, hysterectomy is useful in selected patients to decrease the amount of chemotherapy required to produce remission or as salvage therapy in patients who have failed primary chemotherapy. Even among patients with poor-prognosis metastatic GTD, such adjunctive surgical procedures as hysterectomy, thoracotomy and craniotomy may be useful. Whole brain and liver irradiation is employed as adjuvant therapy to reduce hemorrhagic complications of brain and liver metastases. abstract_id: PUBMED:3904273 Possibilities of surgical and drug therapy of parasitic cholestasis Out of 123 patients with hydatid-disease of the liver in 15 patients biliary engorgement by compression from cysts or parasites in the biliary system were observed. Diagnosis can be established by serology, sonography, scintigraphy, ERCP and CT. Invasive diagnostic procedures are not indicated. Surgical therapy of echinococcosis is in the first place; long-term adjuvant chemotherapy by Mebendazole is indicated if the resection of the cysts is not complete. abstract_id: PUBMED:31196743 Identity determination in diagnostic surgical pathology. From a technical perspective, specimen identity determination in surgical pathology over the last several decades has primarily focused on analysis of repetitive DNA sequences, specifically microsatellite repeats. However, a number of techniques have recently been developed that have similar, if not greater, utility in surgical pathology, most notably analysis of single nucleotide polymorphism (SNPs) and gene panels by next generation sequencing (NGS). For cases with an extremely limited sample or a degraded sample, sequence analysis of mitochondrial DNA continues to be the method of choice. From a diagnostic perspective, interest in identity determination in surgical pathology is usually centered on resolving issues of specimen provenance due to specimen labeling/accessioning deficiencies and possible contamination, but is also frequently performed in cases for which the patient's clinical course following definitive therapy is remarkably atypical, in cases of an unexpected diagnosis, and by patient request for "peace of mind". However, the methods used for identity determination have a much broader range of applications in surgical pathology beyond tissue provenance analysis. The methods can be used to provide ancillary information for cases in which the histomorphology is not definitively diagnostic, as for example for tumors that have a virtually identical microscopic appearance but for which the differential diagnosis includes synchronous/metachronous tumors versus a metastasis, and for the diagnosis of hydropic early gestations versus hydatidiform molar pregnancies. The methods also have utility in several other clinical settings, for example to rule out a donor-transmitted malignancy in a transplant recipient, to monitor bone marrow transplant engraftment, and to evaluate natural chimerism. abstract_id: PUBMED:22173260 Splenic recurrence of liver hydatid cyst and spleen preserving therapy. Hydatid cyst disease remains a considerable public health problem, especially in pastoral and farming regions. Although the spleen is the third most commonly affected organ after the liver and lungs, splenic hydatid cyst is an uncommon entity even in areas that are endemic for echinococcosis. The recurrence rate after surgical therapy of the liver hydatid cyst is reported as 6.8-22.3 percent. Recurrences most frequently occur in the liver. Extrahepatic recurrences occur in the lung or peritoneum and the serosa of the abdominal organs. Splenic recurrence of liver hydatid cyst has not previously been reported. The most common surgical therapy is splenectomy, and the other option is spleen preserving surgery. We report the first case of recurrent splenic hydatid cyst in the spleen and liver synchronously after surgical therapy for liver hydatid disease. The patient was treated with liver resection and spleen preserving surgery. abstract_id: PUBMED:19436793 Surgical management of cardiac hydatidosis. Cardiac hydatidosis is extremely uncommon; only a few case series have been reported in the worldwide medical literature. Whereas hydatid cysts develop in the liver or lungs in 90% of patients who have hydatidosis, only 0.5% to 2% of patients thus diagnosed have cysts of the heart. Herein, we present the clinical summaries of 4 patients who had cardiac hydatid cysts-0.5% of the 763 patients who underwent surgery for thoracic hydatidosis over a period of 20 years at Ibn-Alnafis Teaching Hospital in Baghdad, Iraq. In addition, we discuss our operative technique. A year after surgical treatment and medical therapy, all 4 patients were free of hydatid disease. abstract_id: PUBMED:10986988 Hydatid disease of the spine. A long-term followup after surgical treatment. Vertebral hydatidosis is a relatively silent and slowly progressive disease with a latent period of many years. The prognosis for neurologic recovery in hydatid disease of the spine generally is regarded as poor and posterior surgical decompression is associated with a high recurrence rate. Reported here are the results after a long-term followup of four patients with hydatid disease of the spine with neurologic deficit which significantly improved after anterior surgical decompression and anthelmintic therapy. Adequate preoperative evaluation with magnetic resonance imaging and computed tomography scanning is essential in localizing the lesions and planning the surgical approach. The authors recommend radical excision of the involved vertebra and anthelmintic drugs to control the disease process. abstract_id: PUBMED:34552640 Evaluating the rate of post-surgical complications between three surgical methods in management of pulmonary hydatid disease. Introduction: Hydatid disease is a zoonotic disease caused by Echinococcus. Surgery remains the gold standard approach of treatment. Aim: This study reports on 10 years of experience on surgical management of 78 cases of pulmonary hydatid disease and compares the rates of post-surgical complications between three approaches. Material And Methods: Three different surgical approaches - the Ugon method, cystectomy and lobectomy - were performed for pulmonary hydatid disease treatment during a 1-year follow-up program. The relationships between patient's age, sex, cyst location and surgical approach and occurrence of post-surgical complications were first assessed. Then post-surgical complications between these three methods were compared. Results: From 78 patients, 51.5% were female and 48.5% were male (whose average age was 36.1). Hydatid cysts were found in the right (43.58%) and left (37.17%) lung while 19.23% of patients had bilateral cysts. Patient's age, sex and cyst location did not have any significant effect on the occurrence of complications. Post-surgical complications were only dependent on the surgical approach. 23% of the patients had post-surgical complications (including air leak, fistula, empyema, seroma and wound infection) and air leak was the most frequent one. Conclusions: Since complications were only dependent on the surgical method, the rate of post-surgical complications were compared between the three approaches. Cystectomy and lobectomy had similar rates of complications, which were lower than that of the Ugon method. It can be concluded that cystectomy is the method of choice for management of pulmonary hydatid disease, with the lowest rate of complications. abstract_id: PUBMED:17582967 Conservative surgical treatment of pulmonary hydatid disease in children Background: Hydatide disease is endemic in sheep and cattle-raising areas world wide. Its prevalence is also high in the Mediterranean region including Bosnia and Herzegovina. Objective: Retrospective appraisal of the presentation, approach to surgical treatment and outcome of hydatid disease in children at the Clinic for Thoracic Surgery UCC Sarajevo between January 2000 and January 2007. Patients And Methods: We surgically treated 58 patients (48 male, 10 female) with a diagnosis of pulmonary hydatid disease with average age of 11,36 +/- 3,78 (range 4 to 15) years. Chest radiographs, computed tomograms and ultrasonography were the main methods of diagnosis. The basic principle of surgery was to preserve lung parenchyme as much as possible. Results: Clinical assessment of the symptoms, plain and lateral chest X-rays and computed tomograms led to the correct diagnosis in 100,0% (58/58) of patients. Solitary lung cyst was found in 68,97% (40/58) patients while the remaining 31,03% (18/58) were multiple cysts in one or more lobes of one or both lungs. Intact cysts were found in 51,35% (38/74) and ruptured cysts in 48,65% (36/74) cases. Lung preserving surgical interventions were done in all of the patients. Postoperative complications were seen in 5,17% (3/58) patients. There was no mortality. Recurrent hydatid cysts were observed in 3,45% (2/58) patients. Conclusion: Lung preserving surgical interventions are the treatment of choice for pulmonary hydatid cysts in children. Radical surgical procedures such as segmentectomy, lobectomy and pneumonectomy should be avoided as much as possible. Answer: The question of whether capitonnage is advantageous in the surgical therapy of pulmonary hydatid disease has been addressed in a clinical retrospective study. The study compared two groups of patients who underwent surgical treatment for pulmonary hydatid cysts, with one group receiving capitonnage (group 1) and the other not (group 2). Both groups underwent cystotomy and closure of bronchial openings. The results of the study showed no significant difference in hospitalization time between the two groups, with group 1 having an average of 8.2 ± 3.2 days and group 2 having an average of 8 ± 3.1 days (P = 0.89). Additionally, there were no statistically significant differences in the occurrence of serious complications such as bronchopleural fistula, empyema, and prolonged air leak between the two groups (P = 1.00, P = 1.00, and P = 0.78, respectively). Based on these findings, the study concluded that capitonnage does not offer an advantage when treating pulmonary hydatid cysts (PUBMED:16336395). This conclusion is supported by another study that evaluated the rate of post-surgical complications between three surgical methods in the management of pulmonary hydatid disease. The study found that post-surgical complications were dependent on the surgical approach, with cystectomy and lobectomy having similar rates of complications, which were lower than that of the Ugon method. The study suggested that cystectomy is the method of choice for the management of pulmonary hydatid disease, with the lowest rate of complications (PUBMED:34552640). In the context of children, conservative surgical treatment aimed at preserving lung parenchyma is recommended. The study on children with pulmonary hydatid disease found that lung-preserving surgical interventions were the treatment of choice, and radical procedures such as segmentectomy, lobectomy, and pneumonectomy should be avoided as much as possible (PUBMED:17582967). Overall, the evidence suggests that capitonnage may not be advantageous in the surgical treatment of pulmonary hydatid disease, and lung-preserving approaches such as cystectomy are preferred.
Instruction: 24-hour urine collection in the metabolic evaluation of stone formers: is one study adequate? Abstracts: abstract_id: PUBMED:25423010 Predicting Patients with Inadequate 24- or 48-Hour Urine Collections at Time of Metabolic Stone Evaluation. Purpose: We aimed to understand the characteristics of patients who are less likely to submit adequate urine collections at metabolic stone evaluation. Methods: Inadequate urine collection was defined using two definitions: (1) Reference ranges for 24-hour creatinine/kilogram (Cr/24) and (2) discrepancy in total 24-hour urine Cr between 24-hour urine collections. There were 1502 patients with ≥1 kidney stone between 1998 and 2014 who performed a 24- or 48-hour urine collection at Northwestern Memorial Hospital and who were identified retrospectively. Multivariate analysis was performed to analyze predictor variables for adequate urine collection. Results: A total of 2852 urine collections were analyzed. Mean age for males was 54.4 years (range 17-86), and for females was 50.2 years (range 8-90). One patient in the study was younger than 17 years old. (1) Analysis based on the Cr 24/kg definition: There were 50.7% of patients who supplied an inadequate sample. Females were nearly 50% less likely to supply an adequate sample compared with men, P&lt;0.001. Diabetes (odds ratio [OR] 1.42 [1.04-1.94], P=0.026) and vitamin D supplementation (OR 0.64 [0.43-0.95], P=0.028) predicted receiving an adequate/inadequate sample, respectively. (2) Analysis based on differences between total urinary Cr: The model was stratified based on percentage differences between samples up to 50%. At 10%, 20%, 30%, 40%, and 50% differences, inadequate collections were achieved in 82.8%, 66.9%, 51.7%, 38.5%, and 26.4% of patients, respectively. Statistical significance was observed based on differences of ≥40%, and this was defined as the threshold for an inadequate sample. Female sex (OR 0.73 [0.54-0.98], P=0.037) predicted supplying inadequate samples. Adequate collections were more likely to be received on a Sunday (OR 1.6 [1.03-2.58], P=0.038) and by sedentary workers (OR 2.3 [1.12-4.72], P=0.023). Conclusion: Urine collections from patients during metabolic evaluation for nephrolithiasis may be considered inadequate based on two commonly used clinical definitions. This may have therapeutic or economic ramifications and the propensity for females to supply inadequate samples should be investigated further. abstract_id: PUBMED:22967013 24-hour urine collection in the metabolic evaluation of stone formers: is one study adequate? Unlabelled: Abstract Purpose: To determine whether one vs two 24-hour urine collections is optimal in the metabolic evaluation of nephrolithiasis. Methods: We retrospectively reviewed all 24-hour urine collections performed at our tertiary stone clinic from July 1997 to February 2012. We identified patients with two 24-hour urine collections performed ≤10 days apart. Samples were analyzed by an outside laboratory for the standard urinary parameters. For each parameter, pairwise t tests were performed and Pearson correlation coefficients were calculated to compare samples 1 and 2. In addition, the number of cases that changed from normal to abnormal or vice versa was also evaluated for each parameter and the Kappa statistic was calculated. Results: A total of 813 subjects submitted two 24-hour urine collections ≤10 days apart. Mean age was 53.2 years, and mean body mass index was 28.8 kg/m(2). Based on creatinine 24/kg, subset analysis was performed for all properly collected samples (n=236). Using pairwise t test, 24-hour urine volume (P=0.0365) and phosphorus (P=0.0387) showed a statistically significant difference between samples 1 and 2. None of the other urinary parameters demonstrated a statistically significant difference when means were compared (pairwise t test, P&gt;0.05), (range 0.061-0.9983). Pearson correlation demonstrated a high degree of correlation between two 24-hour urines for all variables (r=0.66-0.95, each P&lt;0.0001). Depending on the urinary parameter assessed, 5.5% to 44.9% of patients changed from normalcy to abnormality, or vice versa. Conclusions: A single 24-hour urine collection may have changed clinical decision making in up to 45% of patients. Therefore, we recommend two collections to optimize the diagnostic yield and appropriately target stone prevention strategies. abstract_id: PUBMED:33599426 A comparison between 24h urine collection and overnight spot urines in evaluating the risk of stone disease Despite being recommended by most guidelines, the metabolic evaluation of patients with nephrolithiasis has limited diffusion due to difficulties relating both to the access to laboratory investigations and to urine collection modalities. Consequently, in addition to the classical 24-h collection, alternative and simplified collection modes have been proposed. We report here on the comparison between metabolic evaluation carried out on 24-h double collection (Lithotest) and overnight spot urines (RF test). Fifty-four patients with stone disease were enrolled, excluding patients with infection or cystine stones. For Lithotest, we measured all analytes necessary to calculate state of saturation (ß) with calcium oxalate, brushite and uric acid, by means of Lithorisk.com. For RF, we measured calcium, magnesium, oxalate, citrate, sulphate, phosphate, pH and creatinine. The comparison was made with creatinine ratios. An estimate of ßCaOx, ßbrushite and ßAU was obtained also on RF urines by using simplified algorithms. We found highly significant correlations between all parameters, despite quite different means. There was a nice correspondence between the two sets of measurements, assessed by the Bland-Altmann test, for calcium, oxalate, citrate, sulphate, urate and pH. Overnight urine had higher saturations compared to 24-h one owing to higher concentration of the former. In conclusion, RF test on overnight urine cannot completely replace Lithotest on 24-hr urine. However, it can represent a simplified tool for either preliminary evaluation or follow-up of patients with stone disease. abstract_id: PUBMED:27746283 The Role of the 24-Hour Urine Collection in the Prevention of Kidney Stone Recurrence. Purpose: Kidney stone prevention relies on the 24-hour urine collection to diagnose metabolic abnormalities and direct dietary and pharmacological therapy. While its use is guideline supported for high risk and interested patients, evidence that the test can accurately predict recurrence or treatment response is limited. We sought to critically reassess the role of the 24-hour urine collection in stone prevention. Materials And Methods: In addition to a MEDLINE® search to identify controlled studies of dietary and pharmacological interventions, evidence supporting the AUA (American Urological Association) and EAU (European Association of Urology) guidelines for metabolic stone prevention were evaluated. Additionally, the placebo arms of these studies were examined to assess the stone clinic effect, that is the impact of regular office visits without specific treatment on stone recurrence. Results: The 24-hour urine test has several limitations, including the complexity of interpretation, the need for repeat collections, the inability to predict stone recurrence with individual parameters and supersaturation values, the unclear rationale of laboratory cutoff values and the difficulty of determining collection adequacy. Only 1 prospective trial has compared selective dietary recommendations based on 24-hour urine collection results vs general dietary instructions. While the trial supported the intervention arm, significant limitations to the study were found. Placebo arms of intervention trials have noted a 0% to 61% decrease in stone recurrence rate and a remission rate during the study of 20% to 86%. Conclusions: Whether all recurrent stone formers benefit from 24-hour urine collection has not been established. Additional comparative effectiveness trials are needed to determine which stone former benefits from selective therapy, as guided by the 24-hour urine collection. abstract_id: PUBMED:34800932 Metabolic evaluation of patients with recurrent, multiple or bilateral renal stones in Jaffna. Introduction: A comprehensive evaluation to identify metabolic abnormalities will help design management strategies for prevention of renal stone recurrences. The objective of this study is to identify the metabolic risk factors in a series of patients with recurrent, multiple or bilateral renal stone disease from the Northern Province, Sri Lanka. Methods: This is a hospital based observational study on patients with recurrent, multiple or bilateral renal stones. Metabolic evaluation workout included, fasting non-tourniquet venous blood sample for serum ionized calcium, creatinine, and uric acid, freshly voided early morning urine sample for pH and laboratory urinalysis, and collection of two sets of 24 hour urine samples on two separate days, for phosphorous, creatinine, oxalate, uric acid, magnesium, calcium and citrate. Results: The mean values of each metabolic parameter measured on the two sets of 24 hour urine samples, were found to be within the normal reference range for the entire 30 patient study group taken as whole, although the mean values for 24 hour urine magnesium (79.79±51.36 mg) and urine calcium (125.02±73.32 mg) were found to be towards the lower limit of normal reference range and the 24 hour urine oxalate (0.38±0.22 mmol) was found to be towards the upper limit of normal reference range. However, 80% of patients individually showed abnormal metabolic results either as single derangement (13 patients, 43.3%) or in multiple combinations of derangements (11 patients, 36.7%). Six patients (20%) did not have any metabolic abnormalities. Hyperoxaluria, hyperuricosuria, hypomagnesuria and hypocitraturia were the metabolic abnormalities identified among the study group. Conclusion: A comprehensive metabolic evaluation of each patient with recurrent, multiple or bilateral renal stone disease can help identify metabolic abnormalities, either as a single abnormality or combinations of abnormalities, specific for them. Such patient profiling will help develop bespoke management strategy based on appropriate dietary advice and specific medical treatment towards prevention of further stone formation. abstract_id: PUBMED:24018242 Prevalence of 24-hour urine collection in high risk stone formers. Purpose: Secondary prevention has an important role in urinary stone disease. The core of secondary prevention is the identification of modifiable risk factors by a 24-hour urine collection, which then directs selective medical therapy. While this decreases the recurrence rate, little is known about the frequency with which 24-hour urine collections are obtained. Materials And Methods: Using medical claims from 2002 to 2006 we identified adults with incident urinary stone episodes. With appropriate diagnosis codes we determined those at high risk for recurrence. Of these patients we determined the proportion in whom a 24-hour urine collection was done within 6 months of diagnosis. Finally, we fitted regression models to measure associations between patient and provider level factors, and obtaining a 24-hour urine collection. Results: We identified 28,836 patients at high risk for recurrence. The prevalence of 24-hour urine testing increased from 7.0% in 2003 to 7.9% in 2006 (p = 0.011), although the overall prevalence was exceedingly low at 7.4%. Multivariable regression revealed that region of residence and level of comorbid illness were independently associated with 24-hour urine collection, as was the type of physician who performed the followup. For instance, the odds of metabolic evaluation were 2.9 times higher when a patient was seen by a nephrologist (OR 2.92, 95% CI 2.32-3.67), and more than threefold higher when seen by a urologist (OR 3.87, 95% CI 3.48-4.30). Conclusions: Obtaining 24-hour urine collections in stone formers at high risk is uncommon, raising a quality of care concern. abstract_id: PUBMED:30473637 Accuracy in 24-hour Urine Collection at a Tertiary Center. There is a paucity of studies addressing the accuracy of 24-hour urine collection for assessing stone risk parameters. Collection accuracy is thought to be essential for assigning optimal therapy for stone prevention. The objective of this study was to determine factors associated with accurate and inaccurate collections. During a 2-year period (2015-2016), 241 stone formers completed 24-hour urine collections. They were divided into accurate collectors (AC), defined as at least one accurate urine collection, and inaccurate collectors (IC). Accuracy was assessed by 24-hour urine creatinine (Cr) excretion indexed to body weight (normal: males, 20-25 mg Cr/kg; females, 15-20 mg Cr/kg). Demographic data analyzed included age, gender, race, insurance status, partner status, income, and education. Statistical analysis methods included the chi-square test, Fisher's exact test, and the two-group t-test. Average age was 50.7 years at the time of collection; 50.2% were men, 86% were white, and 14% were black. Overall, 51.0% of collections were inaccurate. There was no statistical significance between AC and IC for gender (P = 0.85), race (P = 0.90), insurance status (P = 0.85), recurrence (P = 0.87), stone type (P = 0.57), education (P = 0.35), income (P 5 0.42), or poverty (P = 0.35). Older age (P = 0.017) and having a partner (P = 0.022) were significantly associated with AC. The high rate of inaccurate 24-hour urine collections is a concern. The only factors we identified as influencing collection accuracy were age and partner status. These results underscore the importance of developing methods to improve the accuracy of collecting 24-hour urine samples. abstract_id: PUBMED:28318298 Factors Associated with Compliance to Increased Fluid Intake and Urine Volume Following Dietary Counseling in First-Time Kidney Stone Patients. Objective: Increasing fluid intake to achieve a urine volume (UV) of &gt;2.5 L/day decreases stone events. We assessed compliance rates and demographic and clinical variables associated with increased fluid intake and UV in patients with urolithiasis following dietary counseling. Materials And Methods: In a retrospective study, patients with a low baseline UV (&lt;2.5 L/day) on an initial 24-hour urine collection during metabolic stone evaluation were identified between 2010 and 2015. Patients received detailed standardized dietary counseling, including increasing fluid intake to &gt;3 L/day and titrating it to achieve UV &gt;2.5 L/day. A follow-up 24-hour urine collection was performed at 6 months (FU1) and 18 months (FU2) to assess compliance (UV &gt;2.5 L/day) and predictors associated with it. Results: Our cohort contained 363 patients, 53.4% men and 46.6% women. The mean UV of baseline, FU1, and FU2 urine collections was 1.63, 2.52, and 2.48 L, respectively. The compliance rate of obtaining UV of &gt;2.5 L was 50.1% at 6 months. On logistic regression, male sex (odds ratio [OR] 3.27, 95% confidence interval [CI] 1.94, 5.52, p &lt; 0.001), urolithiasis-related procedures such as ureteroscopy (OR 2.31, 95% CI 1.20, 4.42, p = 0.01) or percutaneous nephrolithotomy (OR 3.48, 95% CI 1.58, 7.63, p = 0.002), and baseline 24-hour UV &gt;1 L (OR 3.00, 95% CI 1.02, 8.76, p = 0.04) were associated with greater odds of compliance. Age &gt;58 years (OR 0.39, 95% CI 0.20, 0.75, p = 0.005) and presence of lower urinary tract symptoms (OR 0.50, 95% CI 0.26, 0.95, p = 0.03) were associated with lower odds of compliance. Ethnicity, body mass index, renal function, marital status, renal stone burden, stone type, and household income were not associated with compliance. Conclusions: In stone formers, compliance to fluid intake recommendations as measured by 24-hour UV is roughly 50%. Understanding risk factors for noncompliance, especially in at risk patients, can be used for quality improvement initiatives and reducing stone events. abstract_id: PUBMED:27840312 The role of the 24-h urine collection in the management of nephrolithiasis. Recurrent nephrolithiasis is a common chronic condition that is often preventable with dietary modification and pharmacologic therapy. Patients with recurrent kidney stones should have a metabolic evaluation, consisting of radiologic studies to assess stone burden, crystallographic stone analysis, and laboratory studies including standard serum chemistries and 24 h urine collection(s). This article focuses on the interpretation of urine chemistries to identify lithogenic risk factors and assess the contribution of diet to the formation of kidney stones. abstract_id: PUBMED:36913043 Metabolic stone workup abnormalities are not as important as stone culture in patients with recurrent stones undergoing percutaneous nephrolithotomy. To investigate the association between metabolic urinary abnormalities and urinary tract infection (UTI) and the stone recurrence status in patients undergoing percutaneous nephrolithotomy (PCNL). A prospective evaluation was performed for patients who underwent PCNL between November 2019 and November 2021 and met the inclusion criteria. Patients with previous stone interventions were classified as recurrent stone formers. Before PCNL, a 24 h metabolic stone workup and midstream urine culture (MSU-C) were done. Renal pelvis (RP-C) and stones (S-C) cultures were collected during the procedure. The association between the metabolic workup and UTI results with stone recurrence was evaluated using univariate and multivariate analyses. The study included 210 patients. UTI factors that showed significant association with stone recurrence included positive S-C [51 (60.7%) vs 23 (18.2%), p &lt; 0.001], positive MSU-C [37 (44.1%) vs 30 (23.8%), p = 0.002], and positive RP-C [17 (20.2%) vs 12 (9.5%), p = 0.03]. Other factors were mean ± SD GFR (ml/min) (65 ± 13.1 vs 59.5 ± 13.1, p = 0.003), calcium-containing stones [47 (55.9%) vs 48 (38.1%), p = 0.01], median (IQR) urinary citrate levels (mg/day) [333 (123-512.5) vs 221.5 (120.3-412), p = 0.04], and mean ± SD urinary pH (6.1 ± 1 vs 5.6 ± 0.7, p &lt; 0.001). On multivariate analysis, only positive S-C was the significant predictor of stone recurrence (odds ratio: 9.9, 95% confidence interval [CI] (3.8-28.6), p &lt; 0.001). Positive S-C, and not metabolic abnormalities, was the only independent factor associated with stone recurrence. A focus on preventing UTI might prevent further stone recurrence. Answer: The question of whether one 24-hour urine collection is adequate in the metabolic evaluation of stone formers has been addressed in the literature. According to a study (PUBMED:22967013), a single 24-hour urine collection may change clinical decision-making in up to 45% of patients. Therefore, the authors recommend two collections to optimize the diagnostic yield and appropriately target stone prevention strategies. This recommendation is based on the observation that there were statistically significant differences between the first and second 24-hour urine collections for certain parameters, such as urine volume and phosphorus, and a high degree of correlation for all variables. Additionally, depending on the urinary parameter assessed, a significant percentage of patients changed from normal to abnormal or vice versa between the two collections. In contrast, another study (PUBMED:33599426) compared 24-hour urine collection with overnight spot urines and found highly significant correlations between all parameters, despite different means. However, the study concluded that while overnight urine cannot completely replace the 24-hour urine collection, it can be a simplified tool for either preliminary evaluation or follow-up of patients with stone disease. The role of the 24-hour urine collection in stone prevention has been critically reassessed (PUBMED:27746283), with the conclusion that its ability to predict stone recurrence or treatment response is limited. The study suggests that additional comparative effectiveness trials are needed to determine which stone formers benefit from selective therapy guided by the 24-hour urine collection. In summary, while one 24-hour urine collection provides valuable information, there is evidence to suggest that two collections may be more optimal for accurately diagnosing metabolic abnormalities and guiding stone prevention strategies in stone formers (PUBMED:22967013). However, the necessity and frequency of 24-hour urine collections should be considered in the context of each patient's individual circumstances and the limitations of the test (PUBMED:27746283).
Instruction: Do alpha-synuclein aggregates in autonomic plexuses predate Lewy body disorders? Abstracts: abstract_id: PUBMED:17548551 Do alpha-synuclein aggregates in autonomic plexuses predate Lewy body disorders?: a cohort study. Objective: To determine the prevalence of alpha-synuclein (AS) aggregates in abdominopelvic autonomic plexuses in the general population and to evaluate the relationship between this finding and the subsequent development of neurologic dysfunction. Methods: First, surgical specimens from 100 patients (ages 44 to 84) undergoing a wide resection of an abdominopelvic organ were examined by anti-AS immunostaining. Second, 16 patients (6 AS+ and 10 randomly selected AS-) participated in yearly double-blinded neurologic assessments. Results: AS aggregates were found in autonomic plexuses in 9% of the whole sample (95% CI 3.4 to 14.6%) but were more common in vesicoprostatic (26%) than in digestive tract (4%) specimens. At 16 months after the biopsy, no prevalent cases of Parkinson disease, dementia, or autonomic failure were diagnosed among participants. One AS+ patient had previously been diagnosed with REM sleep behavior disorder. Seven of 10 control subjects but none of the 6 AS+ patients had a diagnosis of hypertension (p = 0.01). During phase IV of Valsalva maneuver, AS+ group exhibited a longer blood pressure recovery time (p = 0.03), with one patient showing absence of blood pressure overshoot. Cardiac [(123)I]metaiodobenzylguanidine uptake was reduced in the AS+ group (p = 0.03). Striatal [(123)I]ioflupane uptake was abnormally low in only one AS+ patient. At 30 months after the biopsy, lower cardiac and striatal uptake values tended to correlate with higher Unified Parkinson's Disease Rating Scale III scores (p = 0.07). Conclusion: The common presence of alpha-synuclein aggregates in peripheral autonomic neurons may represent an early presymptomatic phase in the development of Lewy body disorders. abstract_id: PUBMED:35260522 Autonomic Dysfunction in Lewy Body Disease Lewy body disease is a clinicopathological entity that includes Parkinson's disease, dementia with Lewy bodies, and pure autonomic failure with Lewy bodies, all of which are characterized by varied degrees of autonomic dysfunction. Research in animal models has shown transmission of α-synuclein from peripheral autonomic nerves to the central nervous system via the autonomic nervous system, and the autonomic nervous system plays an important role in the study of Lewy body disease. This paper outlines the pathophysiology, characteristics, and treatment of typical autonomic dysfunction in patients with Lewy body disease. abstract_id: PUBMED:26090077 (123)I-Metaiodobenzylguanidine Myocardial Scintigraphy in Lewy Body-Related Disorders: A Literature Review. Lewy body-related disorders are characterized by the presence of Lewy bodies and Lewy neurites, which have abnormal aggregations of α-synuclein in the nigral and extranigral areas, including in the heart. (123)I-metaiodobenzylguanidine (MIBG) scintigraphy is a well-known tool to evaluate cardiac sympathetic denervation in the Lewy body-related disorders. MIBG scintigraphy showed low uptake of MIBG in the Lewy body-related disorders, including Parkinson's disease, dementia with Lewy bodies, pure autonomic failure and rapid eye movement sleep behavior disorder. This review summarizes previous results on the diagnostic applications of MIBG scintigraphy in Lewy body-related disorders. abstract_id: PUBMED:28410662 Lewy Body Disorders. Dementia syndromes associated with Lewy bodies are subdivided into dementia with Lewy bodies (DLB), an underdiagnosed cause of dementia in the elderly, and Parkinson disease with dementia (PDD), cognitive impairment appearing in people diagnosed with Parkinson disease. Their neuropathologic substrates are the widespread distribution of aggregates of the protein α-synuclein in neurons in cortical brain regions, accompanied by variable Alzheimer pathology. Clinical features of DLB and PDD include distinctive changes in cognition, behavior, movement, sleep, and autonomic function. Diagnostic criteria for DLB and PDD incorporate these features. Current treatment options for DLB and PDD are symptomatic. abstract_id: PUBMED:24395122 Multiple organ involvement by alpha-synuclein pathology in Lewy body disorders. Lewy body (LB) diseases are characterized by alpha-synuclein (AS) aggregates in the central nervous system (CNS). Involvement of the peripheral autonomic nervous system (pANS) is increasingly recognized, although less studied. The aim of this study was to systematically analyze the distribution and severity of AS pathology in the CNS and pANS. Detailed postmortem histopathological study of brain and peripheral tissues from 28 brain bank donors (10 with Parkinson's disease [PD], 5 with dementia with LB [DLB], and 13 with non-LB diseases including atypical parkinsonism and non-LB dementia). AS aggregates were found in the pANS of all 15 LB disease cases (PD, DLB) in stellate and sympathetic ganglia (100%), vagus nerve (86.7%), gastrointestinal tract (86.7%), adrenal gland and/or surrounding fat (53.3%), heart (100%), and genitourinary tract (13.3%), as well as in 1 case of incidental Lewy body disease (iLBD). A craniocaudal gradient of AS burden in sympathetic chain and gastrointestinal tract was observed. DLB cases showed higher amounts of CNS AS aggregates than PD cases, but this was not the case in the pANS. No pANS AS aggregates were detected in Alzheimer's disease (AD) cases with or without CNS AS aggregates. All pathologically confirmed LB disease cases including 1 case of iLBD had AS aggregates in the pANS with a craniocaudal gradient of pathology burden in sympathetic chain and gastrointestinal tract. AS was not detected in the pANS of any AD case. These findings may help in the search of peripheral AS aggregates in vivo for the early diagnosis of PD. abstract_id: PUBMED:19501577 Dorsal motor nucleus of vagus protein aggregates in Lewy body disease with autonomic dysfunction. The Dorsal Motor Nucleus of Vagus (DMV) is degenerated in many patients with early stage Lewy Body Diseases (LBD). Many patients with LBD also develop symptomatic autonomic dysfunction prior to motor and cognitive symptoms. The DMV, along with the Nucleus Ambiguous (NA) and Raphe Obscurus (RO) regulates a variety of autonomic reflexes, suggesting that there may be an association between the degree of neurodegenerative protein aggregation in the DMV and symptomatic autonomic dysfunction in patients with LBD. Using digital in vivo pathology, we quantified alphasynuclein, tau, ubiquitin and Heat Shock Protein 27 (HSP27) containing neurons in the DMV, NA, RO, in addition to the hypoglossal nucleus in 12 LBD patients. alphaSynuclein, ubiquitin and tau aggregates most greatly affected the DMV followed by the NA, RO, but never the hypoglossal nucleus. There was a positive correlation between DMV alphasynuclein and tau aggregation (p&lt;0.05) and between DMV alphasynuclein and the patients' UPDRS scores (p&lt;0.05) suggesting incremental DMV degeneration with disease progression. However, there was no correlation between DMV alphasynuclein, tau, ubiquitin or HSP27 density and the patient's autonomic dysfunction scores. The specific incremental nature of degeneration in the DMV, suggests that by characterizing region specific molecular mechanisms underpinning DMV as opposed to NA degeneration in LBD, the pathogenesis of the disorder may be better understood. Whether DMV degeneration is causative of symptomatic autonomic dysfunction in LBD remains to be determined. abstract_id: PUBMED:29760287 Multi-Organ Distribution of Alpha-Synuclein Pathology in Dementia with Lewy Bodies In Parkinson's disease and dementia with Lewy bodies, neuronal α-synuclein aggregates (Lewy bodies and Lewy neurites) are distributed throughout the nervous system, including the brain, spinal cord, sympathetic ganglia, enteric nervous system, cardiac and pelvic plexuses, submandibular gland, adrenal medulla, and skin. Lewy bodies also occur in 10-20% of neurologically asymptomatic individuals older than 60 years. These cases are called incidental Lewy body disease (ILBD). In ILBD, Lewy bodies can be found in the brain, spinal cord, sympathetic ganglia, visceral autonomic nervous system and skin. In addition, neuronal loss in the substantia nigra is observed in ILBD. Thus, ILBD represents pre-symptomatic Parkinson's disease and/or dementia with Lewy bodies. The pathological process of Lewy body disease may affect the peripheral and central nervous systems at the same time. abstract_id: PUBMED:32170444 Autonomic dysfunction is associated with neuropsychological impairment in Lewy body disease. Objective: This study aimed to analyze the association of autonomic dysfunction with cognition, depression, apathy, and fatigue in Lewy body disease (LBD). Methods: We included 61 patients [49 with idiopathic Parkinson's disease, 7 with dementia with Lewy bodies, and 5 E46K-SNCA mutation carriers] and 22 healthy controls. All participants underwent a comprehensive battery of neuropsychological and clinical measures, autonomic symptom assessment with the SCOPA-AUT, analysis of non-invasive hemodynamic parameters during deep breathing, the Valsalva maneuver, and a 20-min tilt test, and electrochemical skin conductance measurement at rest (Sudoscan). Student's t tests were used to assess group differences, and bivariate correlations and stepwise linear regressions to explore associations between autonomic function, cognition, depression, apathy, and fatigue. Results: Compared to controls, patients who had significant impairment (p &lt; 0.05) in cognition, higher depression, apathy, and fatigue, more autonomic symptoms and objective autonomic dysfunction, reduced deep breathing heart rate variability [expiratory-to-inspiratory (E/I) ratio], prolonged pressure recovery time, and lower blood pressure in Valsalva late phase II and phase IV, while 24.1% had orthostatic hypotension in the tilt test. Autonomic parameters significantly correlated with cognitive and neuropsychiatric outcomes, systolic blood pressure during the Valsalva maneuver predicting apathy and depression. The E/I ratio was the main predictor of cognitive performance (17.6% for verbal fluency to 32.8% for visual memory). Conclusion: Cardiovascular autonomic dysfunction is associated with cognitive and neuropsychiatric impairment in LBD, heart rate variability during deep breathing and systolic blood pressure changes during the Valsalva procedure are the main predictors of neuropsychological performance and depression/apathy symptoms, respectively. abstract_id: PUBMED:29624752 Biomarkers for cognitive impairment in Lewy body disorders: Status and relevance for clinical trials. Biomarkers have the potential to improve diagnosis and prognosis, and guide clinical treatment decisions. In research, biomarkers can be used for patient selection and as outcome measures in clinical trials. A range of biochemical and imaging biomarkers are relevant to patients with Lewy body disorders, including PD, PD dementia, and dementia with Lewy bodies. Dopaminergic imaging is used for differential diagnosis of parkinsonian disorders versus tremor disorders without dopamingeric deficits, and also to differentiate dementia with Lewy bodies from Alzheimer's disease. Markers of underlying Alzheimer's disease pathology have been applied to PD patients experiencing cognitive decline to determine the extent of mixed pathology in these cases. Assessment of alpha-synuclein species in spinal fluid is possible, and more specific assays attempt to identify alpha-synuclein aggregates or phosphorylated alpha-synuclein. While alpha-synuclein markers are intended to measure the pathology most central to PD dementia and dementia with Lewy bodies, convincing evidence of robust reliability and validity from multiple laboratories is lacking. Similarly, alpha-synuclein imaging by PET or single-photon emission computed tomography, while an important research goal, is not yet available. In addition to their uses in the clinic, biomarkers have natural uses in therapeutic trials that target cognitive and neuropsychiatric features of Lewy body disorders. The biomarkers most likely to be incorporated into trials are dopaminergic and amyloid imaging for the purpose of accurate patient selection, and possibly to demonstrate the utility of antiamyloid treatments in Lewy body disorders patients with mixed pathology. © 2018 International Parkinson and Movement Disorder Society. abstract_id: PUBMED:20922808 Amyloid imaging of Lewy body-associated disorders. Clinicopathologic studies of Parkinson disease dementia (PDD) and dementia with Lewy bodies (DLB) commonly reveal abnormal β-amyloid deposition in addition to diffuse Lewy bodies (α-synuclein aggregates), but the relationship among these neuropathologic features and the development of dementia in these disorders remains uncertain. The purpose of this study was to determine whether amyloid-β deposition detected by PET imaging with Pittsburgh Compound B (PIB) distinguishes clinical subtypes of Lewy body-associated disorders. Nine healthy controls, 8 PD with no cognitive impairment, 9 PD with mild cognitive impairment, 6 DLB, and 15 PDD patients underwent [(11)C]-PIB positron emission tomography imaging, clinical examination, and cognitive testing. The binding potential (BP) of PIB for predefined regions and the mean cortical BP (MCBP) were calculated for each participant. Annual longitudinal follow-up and postmortem examinations were performed on a subset of participants. Regional PIB BPs and the proportion of individuals with abnormally elevated MCBP were not significantly different across participant groups. Elevated PIB binding was associated with worse global cognitive impairment in participants with Lewy body disorders but was not associated with any other clinical or neuropsychological features, including earlier onset or faster rate of progression of cognitive impairment. These results suggest that the presence of fibrillar amyloid-β does not distinguish between clinical subtypes of Lewy body-associated disorders, although larger numbers are needed to more definitively rule out this association. Amyloid-β may modify the severity of global cognitive impairment in individuals with Lewy body-associated dementia. Answer: Yes, alpha-synuclein (AS) aggregates in autonomic plexuses may predate Lewy body disorders. A cohort study found AS aggregates in abdominopelvic autonomic plexuses in 9% of the general population sample, suggesting that the presence of these aggregates could represent an early presymptomatic phase in the development of Lewy body disorders (PUBMED:17548551). Additionally, research in animal models has shown transmission of α-synuclein from peripheral autonomic nerves to the central nervous system, indicating the autonomic nervous system's significant role in the study of Lewy body disease (PUBMED:35260522). Furthermore, multi-organ involvement by alpha-synuclein pathology in Lewy body disorders has been observed, with AS aggregates found in the peripheral autonomic nervous system of all Lewy body disease cases, including Parkinson's disease and dementia with Lewy bodies, as well as in one case of incidental Lewy body disease (PUBMED:24395122). This suggests that the pathological process of Lewy body disease may affect both the peripheral and central nervous systems simultaneously (PUBMED:29760287). Therefore, the presence of alpha-synuclein aggregates in peripheral autonomic neurons may indeed predate the clinical manifestations of Lewy body disorders.
Instruction: Are patients of low socioeconomic status receiving suboptimal management for pancreatic adenocarcinoma? Abstracts: abstract_id: PUBMED:19998350 Are patients of low socioeconomic status receiving suboptimal management for pancreatic adenocarcinoma? Background: The objective of this study was to define the effects of socioeconomic status (SES) and other demographic variables on outcomes for patients with pancreatic adenocarcinoma. Methods: Florida cancer registry and inpatient hospital data were queried for pancreatic adenocarcinoma diagnosed from 1998 to 2002. Results: In total, 16,104 patients were identified. Low SES (LSES) patients were younger at diagnosis (P &lt; .001) but presented with similar disease stage and tumor grade. LSES patients were less likely to receive surgical extirpation (16.5% vs 19.8%; P &lt; .001), chemotherapy (30.7% vs 36.4%; P &lt; .001), or radiotherapy (14.3% vs 16.9%; P = .003). Among surgical patients, 30-day mortality was significantly higher (5.1% vs 3.7%; P &lt; .001) and overall median survival was significantly worse (5.0 months vs 6.2 months; P &lt; .001) in the LSES cohorts. Although surgical patients who were treated at teaching facilities (TF) did significantly better; an increased 30-day surgical mortality (2.2% vs 1.3%; P &lt; .001) and decreased median survival (5 months for poverty level &gt;15% vs 6.2 months for poverty level &lt;5%; P &lt; .001) also were observed for patients of LSES. In a multivariate analysis that corrected for patient comorbidities, significant independent predictors of a poorer prognosis included LSES (hazard ratio [HR], 1.09); treatment at a non-TF (HR, 1.09); and failure to receive surgical extirpation (HR, 1.92), chemotherapy (HR 1.41), or radiation (HR 1.25). Conclusions: Patients of LSES were less likely to receive surgical extirpation, chemotherapy, or radiation and had significantly higher perioperative and long-term mortality rates. A greater understanding of the barriers to providing optimal care and identifying means for improving successful delivery of therapies to the poor with pancreatic cancer are needed. abstract_id: PUBMED:33778318 Same Chance of Accessing Resection? Impact of Socioeconomic Status on Resection Rates Among Patients with Pancreatic Adenocarcinoma-A Systematic Review. Background: The incidence of pancreatic cancer is growing and the survival rate remains one of the worst in oncology. Surgical resection is currently a crucial curative option for pancreatic adenocarcinoma (PA). Socioeconomic factors could influence access to surgery. This article reviews the literature on the impact of socioeconomic status (SES) on access to curative surgery among patients with PA. Methods: The EMBASE, MEDLINE, Web of Science, and Scopus databases were searched by three investigators to generate 16 studies for review. Results: Patients with the lowest SES are less likely to undergo surgery than high SES. Low income, low levels of education, not being insured, and living in deprived and rural areas have all been associated with decreased rates of surgical resection. Given the type of health care system and geographic disparities, results in North American populations are difficult to transpose to European countries. However, a similar trend is observed in difficulty for the poorest patients in accessing resection. Low SES seems to be less likely to be offered surgery and more likely to refuse it. Conclusions: Inequalities in insurance coverage and living in poor/lower educational level areas are all demonstrated factors of a lower likelihood of resection populations. It is important to assess the causal effect of socioeconomic deprivation to improve understanding of this disease and improve access to care. abstract_id: PUBMED:35315932 Differences in receipt of multimodality therapy by race, insurance status, and socioeconomic disadvantage in patients with resected pancreatic cancer. BACKGROUND AND METHODS: Racial and socioeconomic disparities in receipt of adjuvant chemotherapy affect patients with pancreatic cancer. However, differences in receipt of neoadjuvant chemotherapy among patients undergoing resection are not well-understood. A retrospective cross-sectional cohort of patients with resected AJCC Stage I/II pancreatic ductal adenocarcinoma was identified from the National Cancer Database (2014-2017). Outcomes included receipt of neoadjuvant versus adjuvant chemotherapy, or receipt of either, defined as multimodality therapy and were assessed by univariate and multivariate analysis. Results: Of 19 588 patients, 5098 (26%) received neoadjuvant chemotherapy, 9624 (49.1%) received adjuvant chemotherapy only, and 4757 (24.3%) received no chemotherapy. On multivariable analysis, Black patients had lower odds of neoadjuvant chemotherapy compared to White patients (OR: 0.80, 95% CI: 0.67-0.97) but no differences in receipt of multimodality therapy (OR: 0.89, 95% CI: 0.77-1.03). Patients with Medicaid or no insurance, low educational attainment, or low median income had significantly lower odds of receiving neoadjuvant chemotherapy or multimodality therapy. Conclusions: Racial and socioeconomic disparities persist in receipt of neoadjuvant and multimodality therapy in patients with resected pancreatic adenocarcinoma. Discussion: Policy and interventional implementations are needed to bridge the continued socioeconomic and racial disparity gap in pancreatic cancer care. abstract_id: PUBMED:34876291 From street address to survival: Neighborhood socioeconomic status and pancreatic cancer outcomes. Background: Neighborhood factors may influence cancer care through physical, economic, and social means. This study assesses the impact of neighborhood socioeconomic status on diagnosis, treatment, and survival in pancreatic cancer. Methods: Patients with pancreatic adenocarcinoma were identified in the 2010-2016 Surveillance Epidemiology and End Results database. Neighborhood socioeconomic status (divided into tertiles) was based on an National Cancer Institute census tract-level composite score, including income, education, housing, and employment. Multivariate models predicted metastasis at time of diagnosis and receipt of surgery for early-stage disease. Overall survival compared via Kaplan-Meier and Cox proportional hazards. Results: Fifteen thousand four hundred and thirty-six patients (29.7%) lived in low neighborhood socioeconomic status, 17,509 (33.7%) in middle neighborhood socioeconomic status, and 19,010 (36.6%) in high neighborhood socioeconomic status areas. On multivariate analysis, neighborhood socioeconomic status was not associated with metastatic disease at diagnosis (low neighborhood socioeconomic status odds ratio 1.02, 95% confidence interval 0.97-1.07; ref: high neighborhood socioeconomic status). However, low neighborhood socioeconomic status was associated with decreased likelihood of surgery for localized/regional disease (odds ratio 0.60, 95% confidence interval 0.54-0.68; ref: high neighborhood socioeconomic status) and worse overall survival (low neighborhood socioeconomic status hazard ratio 1.18, 95% confidence interval 1.15-1.21; ref: high neighborhood socioeconomic status). Conclusion: Patients from resource-poor neighborhoods are less likely to receive stage-appropriate therapy for pancreatic cancer and have an 18% higher risk of death. abstract_id: PUBMED:30449504 Socioeconomic status impacts survival and access to resection in pancreatic adenocarcinoma: A high-resolution population-based cancer registry study. Background: Survival of patients with pancreatic adenocarcinoma (PA) is very poor. Resection status is highly associated with prognosis but only 15%-20% are resectable. The aim of this study was to analyse the impact of socioeconomic deprivation on PA survival and to define which management steps are affected. Methods: Between 01/01/2000 and 31/12/2014, 1451 incident cases of PA recorded in the digestive cancer registry of the French department of Calvados were included. The population was divided between less deprived areas (quintile 1) and more deprived areas (quintile 2,3,4,5 aggregated). Results: Patients from less deprived areas were younger at diagnosis than those from more deprived areas (69.9 vs 72.3 years, p = 0.01). There was no difference in stage or comorbidities. Three- and 5-year survival rates were significantly higher for less deprived areas than more deprived areas: 10.5% vs 5.15% and 4.7% vs 1.7% respectively (p = 0.01). In univariate analysis, those living in less deprived areas had a better survival than those in more deprived areas (HR = 0.81 [0.69-0.95], p = 0.009) but not in multivariable analysis (HRa = 0.93 [0.79-1.11], p = 0.383) or analysis stratified on resection. In multivariable regression, less deprived areas had more access to surgery than more deprived areas (ORa = 1.73 [1.08-2.47], p = 0.013). No difference was observed on access to adjuvant chemotherapy (ORa = 0.95 [0.38-2.34], p = 0.681). Conclusion: The key to reducing survival inequalities in PA is access to resection, so future studies should investigate the factors impacting this issue. abstract_id: PUBMED:35875138 3D Ultrasound-Guided Photoacoustic Imaging to Monitor the Effects of Suboptimal Tyrosine Kinase Inhibitor Therapy in Pancreatic Tumors. Pancreatic cancer is a disease with an incredibly poor survival rate. As only about 20% of patients are eligible for surgical resection, neoadjuvant treatments that can relieve symptoms and shrink tumors for surgical resection become critical. Many forms of treatments rely on increased vulnerability of cancerous cells, but tumors or regions within the tumors that may be hypoxic could be drug resistant. Particularly for neoadjuvant therapies such as the tyrosine kinase inhibitors utilized to shrink tumors, it is critical to monitor changes in vascular function and hypoxia to predict treatment efficacy. Current clinical imaging modalities used to obtain structural and functional information regarding hypoxia or oxygen saturation (StO2) do not provide sufficient depth penetration or require the use of exogenous contrast agents. Recently, ultrasound-guided photoacoustic imaging (US-PAI) has garnered significant popularity, as it can noninvasively provide multiparametric information on tumor vasculature and function without the need for contrast agents. Here, we built upon existing literature on US-PAI and demonstrate the importance of changes in StO2 values to predict treatment response, particularly tumor growth rate, when the outcomes are suboptimal. Specifically, we image xenograft mouse models of pancreatic adenocarcinoma treated with suboptimal doses of a tyrosine kinase inhibitor cabozantinib. We utilize the US-PAI data to develop a multivariate regression model that demonstrates that a therapy-induced reduction in tumor growth rate can be predicted with 100% positive predictive power and a moderate (58.33%) negative predictive power when a combination of pretreatment tumor volume and changes in StO2 values pretreatment and immediately posttreatment was employed. Overall, our study indicates that US-PAI has the potential to provide label-free surrogate imaging biomarkers that can predict tumor growth rate in suboptimal therapy. abstract_id: PUBMED:26581025 Associations of Socioeconomic Variables With Resection, Stage, and Survival in Patients With Early-Stage Pancreatic Cancer. Importance: Socioeconomic variables including sex, race, ethnicity, marital status, and insurance status are associated with survival in pancreatic cancer. It remains unknown exactly how these variables influence survival, including whether they affect stage at presentation or receipt of treatment or are independently associated with outcomes. Objectives: To investigate the relationship between socioeconomic factors and odds of resection in early-stage, resectable pancreatic adenocarcinoma and to determine whether these same factors were independently associated with survival in patients who underwent resection. Design, Setting, And Participants: This was a retrospective cohort study of patients diagnosed as having T1 through T3 M0 pancreatic adenocarcinoma between January 1, 2004, and December 31, 2011, identified from the Surveillance, Epidemiology, and End Results database. Main Outcomes And Measures: Socioeconomic and geographic variables associated with utilization of resection and disease-specific survival. Results: A total of 17,530 patients with localized, nonmetastatic pancreatic cancer were identified. The resection rate among these patients was 45.4% and did not change over time. Utilization of resection was independently associated with white vs African American race (odds ratio [OR] = 0.76; 95% CI, 0.65-0.88; P &lt; .001), non-Hispanic ethnicity (for Hispanic, OR = 0.72; 95% CI, 0.60-0.85; P &lt; .001), married status (OR = 1.42; 95% CI, 1.30-1.57; P &lt; .001), insurance coverage (OR = 1.63; 95% CI, 1.22-2.18; P = .001), and the Northeast region (vs Southeast, OR = 1.67; 95% CI, 1.44-1.94; P &lt; .001). Stage at presentation correlated with sex, race, ethnicity, marital status, and geographic region (ethnicity, P = .003; all others, P &lt; .001); however, the factors associated with increased resection correlated with more advanced stage. Patients who underwent resection had significantly improved disease-specific survival compared with those who did not undergo resection (median, 21 vs 6 months; hazard ratio [HR] for disease-specific death = 0.32; 95% CI, 0.31-0.33; P &lt; .001). Disease-specific survival among the patients who underwent surgical resection was independently associated with geographic region, with patients in the Pacific West (HR for death = 0.706; 95% CI, 0.628-0.793), Northeast (HR for death = 0.766; 95% CI, 0.667-0.879), and Midwest (HR for death = 0.765; 95% CI, 0.640-0.913) having improved survival in comparison with those in the Southeast (all P &lt; .001). Conclusions And Relevance: Disparities in the utilization of surgical resection for patients with early-stage, resectable pancreatic cancer are associated with socioeconomic variables including race, ethnicity, marital status, insurance status, and geographic location. Of these factors, only geographic location is independently associated with survival in patients undergoing resection. abstract_id: PUBMED:35006509 Socioeconomic Disparities in Pancreas Cancer Resection and Survival in the Veterans Health Administration. Background: Disparities based on socioeconomic factors such as race, ethnicity, marital status, and insurance status are associated with pancreatic cancer resection, but these disparities are usually not observed for survival after resection. It is unknown if there are disparities when patients undergo their treatment in a non-fee-for-service, equal-access healthcare system such as the Veterans Health Administration (VHA). Methods: Patients having T1-T3 M0 pancreatic adenocarcinoma diagnosed between 2006 and 2017 were identified from the VHA Corporate Data Warehouse. Socioeconomic, demographic, and tumor variables associated with resection and survival were assessed. Results: In total, 2580 patients with early-stage pancreatic cancer were identified. The resection rate was 36.5%. Surgical resection was independently associated with younger age [odds ratio (OR) 0.94, p &lt; 0.001], White race (OR 1.35, p = 0.028), married status (OR 1.85, p = 0.001), and employment status (retired vs. unemployed, OR 1.41, p = 0.008). There were no independent associations with Hispanic ethnicity, geographic region, or Social Deprivation Index. Resection was associated with significantly improved survival (median 21 vs. 8 months, p = 0.001). Among resected patients, survival was independently associated with younger age (HR 1.019, p = 0.002), geographic region (South vs. Pacific West, HR 0.721, p = 0.005), and employment (employed vs. unemployed, HR 0.752, p = 0.029). Race, Hispanic ethnicity, marital status, and Social Deprivation Index were not independently associated with survival after resection. Conclusions: Race, marital status, and employment status are independently associated with resection of pancreatic cancer in the VHA, whereas geographic region and employment status are independently associated with survival after resection. Further studies are warranted to determine the basis for these inequities. abstract_id: PUBMED:23025956 Effect of socioeconomic status on surgery for pancreatic adenocarcinoma. Survival for pancreatic cancer remains poor. Surgical resection remains the only chance for cure. The intent of this study was to investigate the role of socioeconomic status (SES) on resection rates for pancreatic adenocarcinoma. The National Cancer Institute's Surveillance, Epidemiology, and End results database was used to identify patients with pancreatic adenocarcinoma. Disease was deemed resectable or unresectable based on the extent of disease code. Median family income was used as a SES variable to compare patients who underwent resection with those who did not. Median family income was organized into three categories based on definitions from the national census: less than $34,680 (low income), $34,680 to $48,650 (middle), and greater than $48,650 (high income). A total of 5,908 patients with potentially resectable disease were included. A total of 3,331 patients did not have a surgical resection despite having resectable disease. Subgroup analysis of income status revealed that patients with a low or middle income were less likely to have a resection when compared with those with high income (33.0 vs 39.9 vs 45.8%, P=0.0001). Multivariate analysis revealed that low and middle SES and race were significant predictors of resection. Ongoing study of access to health care may help define the means to eliminate the disparities in the care of patients with pancreatic adenocarcinoma. abstract_id: PUBMED:31605347 Size and Importance of Socioeconomic Status-Based Disparities in Use of Surgery in Nonadvanced Stage Gastrointestinal Cancers. Background: The size and importance of socioeconomic status (SES)-based disparities in use of surgery for non-advanced stage gastrointestinal (GI) cancers have not been quantified. Methods: The exposure in this study of patients age 18-80 with one of nine non-advanced stage GI cancers in the 2007-2015 SEER database was a census tract-level SES composite. Multivariable models assessed associations of SES with use of surgery. Causal mediation analysis was used to estimate the proportion of survival disparities in SES quintiles 1 versus 5 that were mediated by disparities in use of surgery. Results: Lowest SES quintile patients underwent surgery at significantly lower rates than highest quintile patients in each cancer. SES-based disparities in use of surgery were large and graded in esophagus adenocarcinoma, intrahepatic and extrahepatic cholangiocarcinoma, and pancreatic adenocarcinoma. Smaller but clinically relevant disparities were present in stomach, ampulla, and small bowel adenocarcinoma, whereas disparities were small in colorectal adenocarcinoma. Five-year all-stage overall survival (OS) was correlated with the size of disparities in use of surgery in SES quintiles 1 versus 5 (r = - 0.87; p = 0.003). Mean OS was significantly longer (range 3.5-8.9 months) in SES quintile 5 versus 1. Approximately one third of SES-based survival disparities in poor prognosis GI cancers were mediated by disparities in use of surgery. The size of disparities in use of surgery in SES quintiles 1 versus 5 was correlated with the proportion mediated (r = 0.98; p &lt; 0.001). Conclusions: Low SES patients with poor prognosis GI cancers are at substantial risk of undertreatment. Disparities in use of surgery contribute to diminished survival. Answer: Yes, patients of low socioeconomic status (SES) are receiving suboptimal management for pancreatic adenocarcinoma. Studies have shown that patients with lower SES are less likely to undergo surgery, which is a crucial curative option for pancreatic adenocarcinoma (PUBMED:33778318). They are also less likely to receive surgical extirpation, chemotherapy, or radiation, and have significantly higher perioperative and long-term mortality rates (PUBMED:19998350). Additionally, low SES has been associated with decreased rates of surgical resection, and factors such as low income, low levels of education, not being insured, and living in deprived and rural areas contribute to this disparity (PUBMED:33778318). Racial and socioeconomic disparities also persist in the receipt of neoadjuvant and multimodality therapy in patients with resected pancreatic adenocarcinoma (PUBMED:35315932). Furthermore, patients from resource-poor neighborhoods are less likely to receive stage-appropriate therapy for pancreatic cancer and have an 18% higher risk of death (PUBMED:34876291). Socioeconomic deprivation impacts survival and access to resection, with patients from less deprived areas having better survival rates and more access to surgery (PUBMED:30449504). Socioeconomic variables such as race, ethnicity, marital status, insurance status, and geographic location are associated with disparities in the utilization of surgical resection for patients with early-stage, resectable pancreatic cancer (PUBMED:26581025). In the Veterans Health Administration, race, marital status, and employment status are independently associated with resection of pancreatic cancer, whereas geographic region and employment status are independently associated with survival after resection (PUBMED:35006509). Finally, patients with lower SES are less likely to have a resection compared to those with higher income, and ongoing studies are needed to define the means to eliminate these disparities in care (PUBMED:23025956). Disparities in the use of surgery for non-advanced stage gastrointestinal cancers, including pancreatic adenocarcinoma, contribute to diminished survival among low SES patients (PUBMED:31605347).
Instruction: Anorexia of aging in long term care: is dronabinol an effective appetite stimulant? Abstracts: abstract_id: PUBMED:17435963 Anorexia of aging in long term care: is dronabinol an effective appetite stimulant?--a pilot study. Introduction: Anorexia and subsequent weight loss increase the risk of death in long term care (LTC) residents. In patients who fail to respond to nutritional intervention, orexigenic drugs are sometimes prescribed. There is limited data regarding the safety and efficacy of these drugs in older adults. Objective: To examine the effect of a 12-week course of dronabinol on LTC residents with anorexia and significant weight loss. Design, Setting, Participants: Retrospective observational study on residents in five LTC facilities in a major metropolitan area. Results: Twenty-eight subjects (22F, 6M) were involved in the study. Mean age 79.5 +/- 19.8 years (range 46-98 y). Mean body weight, serum albumin and serum prealbumin at baseline were 105.7 +/- 24.7 lbs, 3.39 +/- 0.47 g/dl and 22.15 +/- 7.92 mg/dl respectively. 15 subjects (53.5%) gained weight on dronabinol, of which 10 (67%) gained more than 5 lbs and 6(40%) gained more than 10 lbs. Five (33%) subjects gained less than 5 lbs. Residents who lost weight on dronabinol were younger than those who gained weight (70.9 +/- 5.62 y and 90.8 +/- 7.84 y respectively; p = 0.007) Overall, the mean weight gain on dronabinol was 3 +/- 8.01 lbs (p=0.2). Eleven subjects lost weight (mean loss 3 +/- 2.6 lbs). Of the subjects who lost weight 7 (64%) died compared with 4 (26%) in the subgroup who gained weight. Conclusions: Dronabinol therapy was well tolerated. Overall, there was a trend toward weight gain in LTC residents treated with 12 weeks of dronabinol. Failure to respond to dronabinol may indicate increased risk of death. abstract_id: PUBMED:23570087 Dronabinol (delta9-tetrahydrocannabinol) in long-term treatment. Symptom control in patients with multiple sclerosis and spasticity, neuropathic pain, loss of appetite and cachexia Cannabinoid drugs have been used increasingly in the treatment of neuropathic pain and spasticity. Even though the evidence still is scarce, patients with multiple sclerosis seem to benefit substiantially from cannabinoid therapy. In a case report dose finding and long-term therapie with delta9-tetrahydrocannabinol are described. Dronabinol proved effective and was well tolerated in long-term treatment. Pain was reduced significantly and opioid requirements decreased. At the same time spasticity was reduced and appetite and weight increased. It still needs to bei discussed whether or not part of the positive effect on symptoms was probably mediated by psychomimetic effects. This case report shows, that dronabinol offers an additional therapeutic option in a palliative treatment concept for patiens with high symptom load. abstract_id: PUBMED:9223837 Long-term efficacy and safety of dronabinol for acquired immunodeficiency syndrome-associated anorexia. We studied the effects of long-term (12 months) dronabinol in 94 late-stage acquired immunodeficiency syndrome (AIDS) patients (mean CD4 count of 45/mm3) who previously participated in a 6-week study (placebo versus dronabinol). All patients received dronabinol orally-2.5 mg twice daily (90%) or 2.5 mg once daily (10%). Appetite was measured using a visual analogue scale for hunger (VASH). Dronabinol was associated with consistent improvement in mean appetite. Patients previously treated with dronabinol continued to show improvement in VASH (percent change from baseline of 6-week trial: 48.6-76.1% at each month), whereas those previously treated with placebo exhibited substantial improvement in mean appetite, particularly during the initial 4 months of treatment (48.5-69.9%). Thereafter, dronabinol was associated with a VASH change at least twice baseline. Patients tended toward stable body weight for at least 7 months. Adverse events were primarily related to known central nervous system effects of dronabinol. These data support long-term, safe use of dronabinol for anorexia associated with weight loss in patients with AIDS. abstract_id: PUBMED:30229678 Efficacy and Safety of Appetite-Stimulating Medications in the Inpatient Setting. Background: Hospitalized patients are subject to acute illness and stress which may impact appetite or weight. Loss of appetite may lead to increased morbidity or mortality. Medications such as dronabinol, megestrol, and mirtazapine are used for weight gain in the outpatient setting; however, there is limited information about safety or effectiveness when initiated inpatient. Objectives: To analyze the effectiveness and safety of appetite-stimulating medications in hospitalized patients. Methods: This was a retrospective cohort study of hospitalized patients initiated on dronabinol, megestrol, or mirtazapine for appetite. The primary outcome was change in meal intake between drug initiation and discontinuation. Secondary outcomes included documented improvement in appetite, change in weight and various laboratory parameters, and incidence of adverse effects. Results: A total of 38 patients met inclusion criteria, and mirtazapine was most commonly used (42%). There was no significant difference between groups of appetite-stimulating medications with regard to mean change in meal intake, weight, albumin, or documented improvement in diet. Within groups, each agent showed numerical improvement in percentage meal intake, with a mean change from initiation to discontinuation of 17.12%. Almost half (48%) of the patients experienced improvement in diet after the start of medications. No serious adverse effects were observed. Conclusion and Relevance: In inpatients, there was no difference in change in meal intake or weight between dronabinol, megestrol, or mirtazapine, but they may show numerical improvements in meal intake. To our knowledge, this is the first study to evaluate the use of dronabinol, megestrol, and mirtazapine initiated in the inpatient setting. abstract_id: PUBMED:18001374 Appetite stimulants in cystic fibrosis: a systematic review. Background: The use of appetite stimulants in cystic fibrosis (CF) is controversial because of doubts concerning efficacy and side-effects. The aim of the review was to establish whether appetite stimulants should be recommended in CF. Materials And Methods: Medline, AMED, British Nursing Index, Cinahl, Embase, The Cochrane Library, National Research Register and Google were searched. Unpublished studies, case reports and conference abstracts were sought from experts, including authors of identified published articles, and suppliers of appetite stimulants. Results: Fifty-four articles were located. Fifteen studies were suitable for inclusion in the review: 10 studied, megesterol acetate (MA); 1, dronabinol; 2, mirtazapine; and 2, cyproheptadine. MA showed a beneficial effect on lung function, weight, appetite, fat free and fat mass. Adverse effects for MA included adrenal suppression, abnormalities of glycaemic control, mood changes and testicular failure. There was no consistency in the dose, frequency and duration of therapy for MA. Conclusions: The review shows MA may have a role in the management of anorexia associated with CF. It is not possible to be conclusive for the other agents given the low numbers in the studies. Larger randomized, controlled trials of MA are necessary to confirm its safety and validate efficacy in CF. abstract_id: PUBMED:17538000 Use of Dronabinol Improves Appetite and Reverses Weight Loss in HIV/AIDS-Infected Patients. Objective: To determine whether dronabinol affects appetite and weight status in patients living with HIV/AIDS. Methods: A retrospective chart review was conducted to analyze weight and appetite changes and nausea status in patients with HIV/AIDS who received dronabinol for 3 to 12 months from January 11, 1993, to March 17, 2003. Results: Of the 117 patients who lost weight before baseline, 63% maintained or gained weight. In patients receiving dronabinol for 1 year, the mean weight gain (+/- SD) was 3.7 +/- 10.6 lb. The percentage of patients experiencing loss of appetite decreased significantly from 71% at baseline to 26% at 1 month (P &lt; .001) and continued to decline throughout the trial. The percentage of patients experiencing nausea at baseline (38%) decreased consistently from week 2 on; this change from baseline was significant at month 6 (P = .031). Conclusion: When taken for 3 months to 1 year, dronabinol significantly improves appetite and reverses weight loss in patients living with HIV/AIDS. abstract_id: PUBMED:35360989 Efficacy of medicinal cannabis for appetite-related symptoms in people with cancer: A systematic review. Background: Anorexia (loss of appetite) is a prevalent and distressing symptom in people with cancer, with limited effective interventions. Medicinal cannabis has shown promise in improving appetite-related symptoms in people with cancer. Aim: To assess the efficacy of medicinal cannabis for improving appetite-related symptoms in people with cancer, considering measures and outcomes, interventions and toxicity. Design: Systematic review with narrative approach to synthesis and meta-analysis. Data Sources: Databases (MEDLINE, CINAHL, CENTRAL), websites and trials registries were searched from inception to February 2021. Included studies were randomised controlled trials (RCT) in English peer-reviewed journals comparing medicinal cannabis with placebo and/or another intervention. Study quality was assessed using the Cochrane risk of bias tool. Results: Five studies were included that compared medicinal cannabis interventions (dronabinol, nabilone and cannabis extract) either with placebo (n = 4) or megestrol acetate (n = 1). Measures and trial endpoints varied, but efficacy was demonstrated in one trial only, in which dronabinol significantly improved chemosensory perception and other secondary outcomes (taste of food, premeal appetite, proportion of calories consumed as protein) compared with placebo. Cannabis interventions were generally well tolerated across studies, regardless of the product or dose, although the comprehensive measurement of toxicities was limited. Conclusion: Evidence from RCTs that medicinal cannabis increases appetite in people with cancer is limited. Measures, outcomes and interventions were variable, and toxicities have not been comprehensively evaluated. Future research should carefully consider biological mechanisms to guide more nuanced selection of endpoints and interventions, including product, dose and administration. abstract_id: PUBMED:8035251 A phase II study of delta-9-tetrahydrocannabinol for appetite stimulation in cancer-associated anorexia. Purpose: To evaluate the appetite-stimulating properties of delta-9-tetrahydrocannabinol (THC) in patients with anorexia due to advanced cancer. Patients And Methods: Nineteen patients with various malignancies were entered. All had cancer-associated anorexia and a life expectancy greater than four weeks. Patients were started on THC 2.5 mg p.o. t.i.d. one hour after meals for four weeks. Evaluations for side effects, efficacy, acceptability and satisfaction were conducted at two and four weeks. Results: 18 patients were evaluable. Ten patients completed the entire 28-day study. Four patients experienced grade I toxicity and three withdrew at their request. Thirteen patients reported an improved appetite. Conclusion: THC is an effective appetite stimulant in patients with advanced cancer. It is well tolerated at low doses. Further studies are needed to determine the most appropriate dose and the specific population most likely to respond. abstract_id: PUBMED:29550881 The effect of nabilone on appetite, nutritional status, and quality of life in lung cancer patients: a randomized, double-blind clinical trial. Background: Over one half of the patients diagnosed with advanced lung cancer experience anorexia. In addition to its high incidence, cancer-induced anorexia promotes the development of the anorexia-cachexia syndrome, which is related to poor clinical outcomes. Recently, drugs derived from cannabinoids, such as Nabilone, have been recognized for their appetite improvement properties; however, clinical trials to support their use in cancer patients are necessary. Methods: This is a randomized, double-blind, placebo-controlled clinical trial to assess the effect of Nabilone vs. placebo on the appetite, nutritional status, and quality of life in patients diagnosed with advanced Non-small cell lung cancer (NSCLC) (NCT02802540). Results: A total of 65 patients from the outpatient clinic at the National Institute of Cancer (INCan) were assessed for eligibility and 47 were randomized to receive Nabilone (0.5 mg/2 weeks followed by 1.0 mg/6 weeks) or placebo. After 8 weeks of treatment, patients who received Nabilone increased their caloric intake (342-kcal) and had a significantly higher intake of carbohydrates (64 g) compared to patients receiving placebo (p = 0.040). Quality of life also showed significant improvements in patients in the experimental arm of the trial, particularly in role functioning (p = 0.030), emotional functioning (p = 0.018), social functioning (p = 0.036), pain (p = 0.06), and insomnia (p = 0.020). No significant change in these scales was seen in the control group. Conclusion: Nabilone is an adequate and safe therapeutic option to aid in the treatment of patients diagnosed with anorexia. Larger trials are necessary in order to draw robust conclusions in regard to its efficacy in lung cancer patients. abstract_id: PUBMED:8021744 Appetite effect of dronabinol. N/A Answer: Based on the provided abstracts, dronabinol appears to be an effective appetite stimulant for anorexia of aging in long-term care settings. A pilot study (PUBMED:17435963) showed that dronabinol therapy was well tolerated and there was a trend toward weight gain in long-term care residents treated with a 12-week course of dronabinol. In this study, 53.5% of subjects gained weight, with 67% of those gaining more than 5 lbs and 40% gaining more than 10 lbs. The study also suggested that failure to respond to dronabinol may indicate an increased risk of death. Additionally, dronabinol has been used in other conditions associated with anorexia and weight loss, such as multiple sclerosis, neuropathic pain, and HIV/AIDS, with positive effects on appetite and weight (PUBMED:23570087, PUBMED:9223837, PUBMED:17538000). In the context of HIV/AIDS, a retrospective chart review indicated that dronabinol significantly improved appetite and reversed weight loss in patients living with HIV/AIDS when taken for 3 months to 1 year (PUBMED:17538000). Furthermore, a systematic review assessing the efficacy of medicinal cannabis for appetite-related symptoms in people with cancer included studies on dronabinol and found that it significantly improved chemosensory perception and other secondary outcomes related to appetite compared with placebo (PUBMED:35360989). In summary, the evidence from the abstracts suggests that dronabinol can be an effective appetite stimulant in long-term care residents with anorexia of aging, as well as in patients with other conditions that cause anorexia and weight loss. However, it is important to note that the studies also highlight the need for further research to confirm the safety and validate the efficacy of dronabinol in these populations.
Instruction: Is 'gamma' (40 Hz) synchronous activity disturbed in patients with traumatic brain injury? Abstracts: abstract_id: PUBMED:12350441 Is 'gamma' (40 Hz) synchronous activity disturbed in patients with traumatic brain injury? Objectives: The objective of this work is to determine whether Gamma (40 Hz) synchronous activity has disturbed patients with severe traumatic brain injury (TBI). Methods: Using a conventional auditory oddball paradigm, the extent of Gamma synchrony across multiple scalp sites in specific frequency bands as a function of time was examined in 15 patients with severe TBI and 15 age- and sex-matched controls. Averaged Gamma synchrony was analyzed using within and between group multiple analyses of variance with region (left versus right hemisphere, anterior versus posterior region) as the within factor. Results: Compared to controls, subjects with TBI displayed significantly delayed early Gamma latency (from -150 to 150 ms) (F((1,28))=10.28, P&lt;0.003) across all sites in addition to other specific regional disturbances. For late Gamma synchrony, subjects with TBI displayed delayed Gamma latency at the left hemisphere (from 200 to 450 ms) (F((1,28))=8.71, P&lt;0.006) and posterior region (F((1,28))=9.18, P&lt;0.006) in comparison to controls. Conclusions: Impaired integration of spatially distributed brain activity ('40 Hz' electroencephalogram rhythms) may be an important marker of deficits of cortical network binding postulated to be abnormal in people who have survived TBI. abstract_id: PUBMED:35800078 40 Hz Blue LED Relieves the Gamma Oscillations Changes Caused by Traumatic Brain Injury in Rat. Background: Photobiomodulation (PBM) using low-level light-emitting diodes (LEDs) can be rapidly applied to various neurological disorders safely and non-invasively. Materials And Methods: Forty-eight rats were involved in this study. The traumatic brain injury (TBI) model of rat was set up by a controlled cortical impact (CCI) injury. An 8-channel cortex electrode EEG was fixed to two hemispheres, and gamma oscillations were extracted according to each electrode. A 40 hz blue LED stimulation was set at four points of the frontal and parietal regions for 60 s each, six times per day for 1 week. Modified Neurologic Severity Scores (mNSS) were used to evaluate the level of neurological function. Results: In the right-side TBI model, the gamma oscillation decreased in electrodes Fp2, T4, C4, and O2; but significantly increased after 1 week of 40 hz Blue LED intervention. In the left-side TBI model, the gamma oscillation decreased in electrodes Fp1, T3, C3, and O1; and similarly increased after 1 week of 40 hz Blue LED intervention. Both left and right side TBI rats performed significantly better in mNSS after 40 hz Blue LED intervention. Conclusion: TBI causes the decrease of gamma oscillations on the injured side of the brain of rats. The 40 hz Blue LED therapy could relieve the gamma oscillation changes caused by TBI and improve the prognosis of TBI. abstract_id: PUBMED:31041986 Marked Increases in Resting-State MEG Gamma-Band Activity in Combat-Related Mild Traumatic Brain Injury. Combat-related mild traumatic brain injury (mTBI) is a leading cause of sustained impairments in military service members and veterans. Recent animal studies show that GABA-ergic parvalbumin-positive interneurons are susceptible to brain injury, with damage causing abnormal increases in spontaneous gamma-band (30-80 Hz) activity. We investigated spontaneous gamma activity in individuals with mTBI using high-resolution resting-state magnetoencephalography source imaging. Participants included 25 symptomatic individuals with chronic combat-related blast mTBI and 35 healthy controls with similar combat experiences. Compared with controls, gamma activity was markedly elevated in mTBI participants throughout frontal, parietal, temporal, and occipital cortices, whereas gamma activity was reduced in ventromedial prefrontal cortex. Across groups, greater gamma activity correlated with poorer performances on tests of executive functioning and visuospatial processing. Many neurocognitive associations, however, were partly driven by the higher incidence of mTBI participants with both higher gamma activity and poorer cognition, suggesting that expansive upregulation of gamma has negative repercussions for cognition particularly in mTBI. This is the first human study to demonstrate abnormal resting-state gamma activity in mTBI. These novel findings suggest the possibility that abnormal gamma activities may be a proxy for GABA-ergic interneuron dysfunction and a promising neuroimaging marker of insidious mild head injuries. abstract_id: PUBMED:26346965 Gamma-Band Modulation and Coherence in the EEG by Involuntary Eye Movements in Patients in Unresponsive Wakefulness Syndrome. Gamma power and coherence in the electroencephalogram increase in healthy individuals in association with voluntary eye movements, saccades. Patients with unresponsive wakefulness syndrome show repetitive involuntary eye movements that are similar to saccades but progress at a much lower speed. In the present study, we explored the changes in gamma power and coherence related to these eye movements and investigated whether any relationship to the patients' clinical status could be found that would indicate first neurophysiological signs of recovery. To this end, we assessed the clinical status and registered classical scalp electroencephalography with 19 surface electrodes and electro-oculogram of 45 consecutive patients at admission and at 4 weekly intervals. Slow gamma activity (in the frequency range of 37-40 Hz) was analyzed before, during, and after eye movements (pre, -intra and post-eye movement) by means of "continuous wavelet transform." We graded recovery using clinical behavioral scales, taking into account the variables, age, gender, recovery (yes or no), as well as the patients diagnoses (traumatic brain injury, hypoxia, hemorrhage, infection). Statistical evaluation was performed using DataLab, R, and Kruskal-Wallis methods. Based on the clinical status, we distinguished between recovering and chronic groups of patients. In comparison with the chronic group, the recovering group showed significantly higher gamma power over the posterior electrodes and significant higher values of coherence in the gamma-band activity during the presaccadic period of eye movements. We suggest that our findings on the onset of involuntary eye movements in the recovering group of patients with unresponsive wakefulness syndrome indicates a first neurophysiological sign of favorable prognosis. abstract_id: PUBMED:36740277 Gamma frequency entrainment rescues cognitive impairment by decreasing postsynaptic transmission after traumatic brain injury. Introduction: The relationship between oscillatory activity in hippocampus and cognitive impairment in traumatic brain injury (TBI) remains unclear. Although TBI decreases gamma oscillations and 40 Hz light flicker improves TBI prognosis, the effects and mechanism of rhythmic flicker on TBI remain unclear. Aims: In this study, we aimed to explore whether light flicker could reverse cognitive deficits, and further explore its potential mechanisms in TBI mouse model. Methods: The Morris water maze test (MWM), step-down test (SDT), and novel object recognition test (NOR) were applied to evaluate the cognitive ability. The local field potential (LFP) recording was applied to measure low gamma reduction of CA1 in hippocampus after TBI. And electrophysiological experiments were applied to explore effects of the gamma frequency entrainment on long-term potentiation (LTP), postsynaptic transmission, and intrinsic excitability of CA1 pyramidal cells (PCs) in TBI mice. Immunofluorescence staining and western blotting were applied to explore the effects of 40 Hz light flicker on the expression of PSD95 in hippocampus of TBI mice. Results: We found that 40 Hz light flicker restored low gamma reduction of CA1 in hippocampus after TBI. And 40 Hz, but not random or 80 Hz light flicker, reversed cognitive impairment after TBI in behavioral tests. Moreover, 40 Hz light flicker improved N-methyl-D-aspartate (NMDA) receptor-dependent LTP (LTPNMDAR ) and L-type voltage-gated calcium channel-dependent LTP (LTPL-VGCC ) after TBI treatment. And gamma frequency entrainment decreased excitatory postsynaptic currents (EPSCs) of CA1 PCs in TBI mice. Our results have illustrated that 40 Hz light flicker could decrease intrinsic excitability of PCs after TBI treatment in mice. Furthermore, 40 Hz light flicker decreased the expression of PSD95 in hippocampus of TBI mice. Conclusion: These results demonstrated that 40 Hz light flicker rescues cognitive impairment by decreasing postsynaptic transmission in PCs after TBI treatment in mice. abstract_id: PUBMED:32388346 Auditory steady-state response to chirp-modulated tones: A pilot study in patients with disorders of consciousness. Objective: Due to the problems with behavioral diagnosis of patients with prolonged DOC (disorders of consciousness), complementary approaches based on objective measurement of neural function are necessary. In this pilot study, we assessed the sensitivity of auditory chirp-evoked responses to the state of patients with severe brain injury as measured with CRS-R (Coma Recovery Scale - Revised). Methods: A convenience sample of fifteen DOC patients was included in the study. Auditory stimuli, chirp-modulated at 1-120 Hz were used to evoke auditory steady-state response (ASSR). Phase-locking index (PLI) estimates within low gamma and high gamma windows were evaluated. Results: The PLI estimates within a narrow low gamma 38-42 Hz window positively correlated with the CRS-R total score and with the scores of the Auditory and Visual Function subscales. In the same low gamma window, significant difference in the PLIs was found between minimally conscious (MCS) and vegetative state (VS) patients. We did not observe any between-group differences nor any significant correlations with CRS-R scores in the high gamma window (80-110 Hz). Conclusions: Our results support the notion that the activity around 40 Hz may serve as a possible marker of the integrity of thalamocortical networks in prolonged DOC patients. Significance: Auditory steady-state responses at gamma-band frequencies highlight the role of upper parts of auditory system in evaluation of the level of consciousness in DOC patients. abstract_id: PUBMED:38249572 Diagnosing awareness in disorders of consciousness with gamma-band auditory responses. Introduction: The prolonged disorders of consciousness (pDOC) describe a group of neurological conditions characterized by severe impairment of consciousness resulting from the injury of the central nervous system. As the behavioral diagnosis of pDOC remains challenging, the methods based on observing brain activity appear as promising alternatives. One of these methods is electroencephalography, which allows for noninvasive assessment of brain function. Methods: In this study, we evaluated evoked auditory responses to the chirp-modulated auditory stimulation as a potential biomarker of awareness in pDOC. Chirp-modulated stimulation is based on the repetitive presentation of auditory stimuli with a changing frequency over time. Two protocols were tested: amplitude-modulated narrow-band chirps (frequency range 25-55 Hz) and click-based wide-band chirps (30-100 Hz). The studied pDOC patient group included 62 patients (19 females and 43 males, mean age 40.72 years) diagnosed with Coma Recovery Scale-Revised. Envelope-following responses to stimulation were examined using the intertrial phase clustering coefficient. Results: For both types of stimulation, the strength of the response in the low-gamma range (around 40 Hz) was related to the diagnosis of pDOC. Patients diagnosed with unresponsive wakefulness syndrome exhibited diminished responses, while more favorable diagnoses, suggesting awareness (minimally conscious state or emergence from the minimally conscious state), showed elevated responses. The variations in the integrity of the auditory pathway and the etiology of brain injury altered the observed response strength. Narrow-band stimulation yielded a more systematic relationship between low-gamma response and pDOC diagnosis. Discussion: The results suggest the potential role of low gamma-band responses to chirp-modulated stimulation as the supportive diagnostic tool to detect awareness in the pDOC patient group. abstract_id: PUBMED:3826609 Gamma-aminobutyric acid concentrations in the plasma in unconscious patients with severe craniocerebral trauma or nontraumatic intracranial hemorrhage The detection of cerebral lesions by chemical laboratory studies has not as yet been possible. Tests were carried out to determine whether there was an increase in plasma concentration of the neurotransmitter gamma-aminobutyric acid (GABA) in 12 unconscious patients: 6 with severe craniocerebral trauma (CCT) and 6 with spontaneous intracranial hemorrhage. Normal plasma concentrations were detected in the central venous blood of patients with severe CCT (normal range 366 +/- 123 pmol/ml); in patients with intracranial hemorrhage the values were partially elevated. Maximal values in arterial blood were around 613 pmol/ml (patients with CCT) and 1723 pmol/ml (patients with spontaneous intracranial hemorrhage). The influence of fluid turnover on plasma GABA concentrations is discussed. abstract_id: PUBMED:7065866 Motor rhythms synchronous with respiration in the decerebration syndrome of man Rhythmical somatomotor discharges in the EMG from arms and legs are recorded together with the respiration in decerebrated patients. The rhythms of 7-18/min are synchronous with respiration. The slow activations of 0.3-0.8/min are related to periods of acceleration with higher amplitudes of respiration. The quick rhythms above 2/min correlate with single respiratory movements. They are interpreted as a phenomenon of disinhibition of lower brain stem structures. abstract_id: PUBMED:8868389 A trial of the clinical use of the Testascan computerized gamma tomograph in neurosurgical pathology The paper outlines the diagnostic potentialities of the first Russian multidetector Testascan gamma-tomograph. With this, three different radio pharmaceuticals (RP) were used; 99mTC-GM PAO for examining cerebral circulatory disorders, 99mTC pertechnetate for visualizing tumor foci, and 99mTC-DTPK for studying impaired cerebral liquor pathway. Regional cerebral blood supply was studied in 78 patients with arterial aneurysms and arteriovenous malformations, with stenoses and thromboses of the great arterial trunks, as well in patients with brain injury sequelae and hypophyseal, craniofascial, and cavernous sinus tumors, VIII-neurinomas, meningiomas of the pyramid of the temporal bone. Some 103 patients with various intra- and extracerebral volumetric formations are examined. The impaired cerebral liquor pathways were explored in 27 patients with pathological processes of various genesis. Experience in clinically using the Russian Testascan computer gamma-tomograph in neurosurgical care has shown its high validity in diagnosing various focal processes of tumor and nontumor genesis. Single photon emission computerized tomography (SPECT) may reveal foci of abnormal elevation or reduction of RP accumulation by showing different anatomically topographic characteristics. SPECT is the most promising in the study of cerebrovascular diseases of various genesis and in the presence of tumors at basal sites. Answer: Yes, 'gamma' (40 Hz) synchronous activity is disturbed in patients with traumatic brain injury (TBI). Studies have shown that compared to controls, subjects with TBI displayed significantly delayed early Gamma latency across all scalp sites in addition to other specific regional disturbances. For late Gamma synchrony, subjects with TBI displayed delayed Gamma latency at the left hemisphere and posterior region in comparison to controls, suggesting that impaired integration of spatially distributed brain activity ('40 Hz' electroencephalogram rhythms) may be an important marker of deficits of cortical network binding postulated to be abnormal in people who have survived TBI (PUBMED:12350441). Additionally, TBI causes a decrease in gamma oscillations on the injured side of the brain in rats, and 40 Hz Blue LED therapy could relieve the gamma oscillation changes caused by TBI and improve the prognosis of TBI (PUBMED:35800078). In humans, combat-related mild TBI has been associated with marked increases in resting-state gamma-band activity in various cortical regions, which correlates with poorer performances on tests of executive functioning and visuospatial processing (PUBMED:31041986). Furthermore, gamma frequency entrainment using 40 Hz light flicker has been shown to rescue cognitive impairment by decreasing postsynaptic transmission after TBI in mice, suggesting a therapeutic potential for modulating gamma activity in TBI (PUBMED:36740277). In patients with disorders of consciousness, gamma-band auditory responses have been proposed as a potential biomarker of awareness, with low gamma-band responses to chirp-modulated stimulation showing a systematic relationship with the diagnosis (PUBMED:38249572). These findings collectively indicate that gamma-band activity is indeed disturbed in TBI and may have implications for both the diagnosis and treatment of TBI-related cognitive impairments.
Instruction: Does cryotherapy before drainage increase the risk of intraocular haemorrhage and affect outcome? Abstracts: abstract_id: PUBMED:9290370 Does cryotherapy before drainage increase the risk of intraocular haemorrhage and affect outcome? A prospective, randomised, controlled study using a needle drainage technique and sustained ocular compression. Aims/background: A prospective, randomised, controlled clinical trial was conducted to investigate the effect of performing cryotherapy before drainage of subretinal fluid (SRF) on the incidence of intraocular haemorrhage (IOH) in the management of retinal detachment. Methods: Eighty eyes of 80 patients with rhegmatogenous retinal detachment requiring SRF drainage were recruited. Thirty four cases were randomised to receive drainage before cryotherapy (drainage, air injection, cryotherapy, and explant = DACE group) while 46 cases had drainage after cryotherapy (cryotherapy, drainage, air injection, and explant = CDAE group). All cases had trans-scleral drainage of SRF using a 27 gauge hypodermic needle combined with prolonged, intraocular hypertension. Results: There was a low incidence of IOH associated with drainage in both groups with no statistically significant difference between the groups (DACE group = 2.9%; CDAE group = 4.3%; p = 0.43). There was no significant difference between the groups in the rate of anatomical success with a single operation (DACE group = 82.4%; CDAE group = 86.9%; p = 0.38). There was no significant difference between the groups in the visual outcome. An improvement of two Snellen lines or more occurred in 52.9% of the DACE group and in 56.5% of the CDAE group (p = 0.93). Conclusion: It was concluded that the surgical sequence of applying cryotherapy before drainage of SRF can be safely and effectively performed. The sequences CDAE and CDE, when air injection is not required, along with DACE should all be part of the surgical repertoire for the management of retinal detachments. abstract_id: PUBMED:27684459 Surgical Drainage of Lymphangiectasia Haemorrhagica Conjunctivae. Purpose: To report a case of nonresolving lymphangiectasia haemorrhagica conjunctivae (LHC) successfully treated with surgical drainage. Methods: Retrospective case report. Results: A 17-year-old white girl presented with a history of a large red lesion affecting her right eye. Approximately 1 year earlier, she had noticed a small painless "red spot" affecting the inferior aspect of the conjunctiva of her right eye. Three months before presentation, she noticed a sudden increase in the size of the lesion. There was no history of trauma, and she was not a contact lens wearer. Her medical history was remarkable for a remote seizure disorder which had been stable on levetiracetam. Magnetic resonance imaging/magnetic resonance angiography of the brain and orbits was unremarkable. Anterior segment fluorescein angiography did not show any flow in the lesion, indicating a lymphatic origin of channels. Surgical drainage of the lesion was performed while sparing the affected conjunctiva resulting in an excellent cosmetic outcome. Incisional biopsy confirmed the diagnosis of LHC. The patient remained free of recurrence 6 months after the procedure. Conclusions: Nonresolving LHC can be managed successfully with conjunctival sparing surgical drainage with an excellent cosmetic outcome. abstract_id: PUBMED:1160894 Drainage of the subretinal fluid and injection of saline solution into the vitreous body. In a group of 250 cases operated upon for retinal detachment, drainage of the subretinal fluid was carried out in 96%. Since in 54 cases, i.e. 22%, a large quantity of subretinal fluid was evacuated, saline solution was injected into the vitreous. The following complications due to drainage were noted: intraocular haemorrhages (16%), incarceration of the retina (1%) and temporary iridocyclitis (2%). None of these complications prevented reattachment. In 54 cases in which injection of saline solution into the vitreous was carried out, we have noted: temporary occlusion of the central retinal artery (5 cases), temporary cloudiness in the vitreous body (2 cases) and traumatic cataract (1 case). The injection of saline solution into the vitreous was well tolerated. Although the complications due to drainage are rather numerous (intraocular infection and haemorrhages, incarceration of retina, secondary retinal hole, glial proliferation in the vitreous, dehiscence at the place of drainage), these complications rarely occur if the drainage is carried out in a proper manner. Despite the fact that drainage and injection of saline solution into the vitreous do not cause more severe complications, however, we consider that the method of avoiding drainage and internal tamponage, whenever it is possible, represents a fully justified trend. abstract_id: PUBMED:8022165 Drainage of subretinal fluid in our clinical studies The studies comprised 531 patients (544 eyes) who underwent retinal detachment surgery in the Ophthalmological Clinic in Cracow, in the years 1983-1992. Drainage of subretinal fluid was performed in 54.6% of the eyes and in this group attachment of the retina was achieved in 72.4% of cases; in the group of patients with nondrainage operations it was 74.5%. Intraocular haemorrhages, the only complications, which may be attributed to the drainage of subretinal fluid were found in 18.9% of eyes with drainage and in 8.1% without; this difference was statistically significant (p &lt; 0.01). Statistically significant relationship (p &lt; 0.01) was also found between subretinal fluid drainage and severity of the retinal detachment. The results showed that there were no significant relationships between drainage of subretinal fluid and the results of surgery and that it is indicated first of all in severe cases of retinal detachment in which method of choice is an encirclement procedure. abstract_id: PUBMED:3493985 Combined internal drainage of subretinal fluid and choroidal detachment. We describe a surgical technique to perform the intraocular drainage of choroidal hemorrhage with a flute needle. This procedure is performed at the time of the internal subretinal fluid drainage for a traumatic retinal detachment. abstract_id: PUBMED:2674814 The use of viscoelastic substances in the drainage of postoperative suprachoroidal hemorrhage. Suprachoroidal hemorrhage is a serious surgical complication occurring in up to 10% of aphakic patients undergoing filtration surgery. Surgical treatment includes drainage of the hemorrhage through sclerotomies and reformation of the anterior chamber with balanced salt solution or air. Two patients are presented whose suprachoroidal hemorrhages were managed with the instillation of 10% sodium hyaluronate. The intraocular administration of a space-occupying fluid produces an even distribution of hydraulic force to facilitate drainage of the hemorrhage and avoids the optical problems associated with the use of intraocular air. While most cases of suprachoroidal hemorrhage can be effectively managed using balanced salt solution or air, we believe that viscoelastic agents may be of special benefit in selected situations. abstract_id: PUBMED:31415703 External Choroidal Drainage Using Direct Visualization. Laura L. Snyder Shriji N. Patel A woman in her 60s with a functional glaucoma tube shunt presented after vitrectomy for epiretinal membrane peeling with symptomatic choroidal effusions not responsive to medical therapy. She underwent a minimally invasive, transconjunctival choroidal drainage procedure, which was directly visualized under a widefield viewing system to prevent intraocular hemorrhage or retinal penetration of the needle. This allowed for preservation of her conjunctiva, restoration of normal intraocular pressure by temporary blockage of her tube shunt with a viscoelastic, and resolution of her choroidal effusions. abstract_id: PUBMED:17717780 Glaucoma drainage system according to Molteno for therapy-resistant glaucoma--a two-stage surgical technique to avoid postoperative hypertension Background: The treatment of therapy-resistant secondary glaucoma with cyclodestructive approaches may give rise to unsatisfactory intraocular pressure results, leading to phthisis. A final option may be the implantation of an external glaucoma drainage system (GDS). A consecutive positioning of the drainage system under the conjunctiva and implantation of the drainage tube into the anterior chamber, may result in an uncontrolled reduced of intraocular pressure leading to intraocular haemorrhages into the anterior chamber or vitreous cavity. In particular, expulsive haemorrhages are feared as deleterious complication. Patients And Methods: 15 patients (15 eyes) with secondary glaucoma received a GDS. All patients were previously unsuccessfully treated by numerous approaches (mean 2.7 times). The mean preoperative intraocular pressure was 37 mmHg (range: 22 to 55 mmHg). We performed a sequential approach: during the first operation we implanted and fixed the resorption site of a Molteno GDS onto the sclera close to the equator in the superotemporal quadrant. After an inflammation-free interval of 2 - 3 weeks we placed the drainage tube into the anterior chamber. The average postoperative follow-up period was 20.5 months (range: 3 to 62 months). Results: The implantation of the GDS using a sequential approach was well tolerated by all patients. While 14 out of 15 eyes achieved an IOP of 15 mmHg (range: 12 to 18 mmHg), 2 of them still required additional topical glaucoma treatment. An unsatisfactory IOP regulation was observed in only one eye although a revision surgery was performed postoperatively. Four eyes developed a light anterior chamber haemorrhage that resorbed without serious complications within 3 weeks. A choroidal effusion in one eye was treated by an anterior chamber injection of a viscoelastic gel. Conclusion: The potential disadvantage of the GDS can be almost completely avoided using a sequential approach. The implant heals well in the subtenon space during the first postoperative week, thus preventing an overfiltration of anterior chamber fluid. Our positive results show that the GDS is an important treatment option in selected patients. abstract_id: PUBMED:7077051 Retinal detachment recurrences. 1. Frequency and risk factors (author's transl) Retinal detachment recurrence is defined the reappearance of a detachment after an initial complete success, whatever the delay between surgery and recurrence. This eliminates immediate failures due to inadequate buckles or major complications during surgery, exsudative retinal detachments that reattach spontaneously within a few days and cases where it is not sure that the retina had been completely reattached initially (cloudy media). Employing these criteria, a retrospective study demonstrated 130 eyes with recurrences out of a total of 1237 eyes operated between 1-10-69 and 31-12-79 (10,5 per cent rate); recurrences occurred in this series between 3 days and 7 years after initial surgery, 53,5 per cent of them within an interval of less than 3 months. The most important recurrence risk factors appear to be: detachment due to a retinal tear and not a hole, bullous detachment, abnormal vitreous but not full blown, massive vitreo-retinal retraction and massive intraocular bleeding during subretinal fluid drainage. abstract_id: PUBMED:3490250 Prevention and management of delayed suprachoroidal hemorrhage after filtration surgery. We report two new cases of massive delayed nonexpulsive suprachoroidal hemorrhage (DNSH) following a filtering operation in the aphakic eyes of elderly patients with glaucoma. A timely surgical drainage resulted in full recovery of preoperative visual acuity in both of our patients. As we combine our cases with a series of 18 similar cases of others in the literature, the following conclusions emerge. Limited DNSH does not require surgical intervention for a favorable visual outcome. Massive DNSH, however, requires timely and appropriate surgical intervention to achieve a favorable visual outcome and to avoid persistent hypotony. The most effective surgical intervention is drainage of the suprachoroidal hemorrhage and re-formation of the anterior chamber, but without concomitant vitrectomy. In both limited and massive DNSH, the final visual outcome is not determined by the worst vision at the time of DNSH. Some of the known and suspected risk factors of DNSH following filtering surgery are old age, aphakia, postoperative hypotony, a history of vitreous manipulation or complication, general anesthesia, increased venous pressure, use of fluorouracil, and high myopia. In view of these risk factors, we recommend several preventive measures for decreasing the incidence of DNSH following filtering surgery. Answer: Based on the findings from a prospective, randomised, controlled clinical trial, performing cryotherapy before drainage of subretinal fluid (SRF) does not increase the risk of intraocular haemorrhage (IOH) in the management of retinal detachment. The study compared two groups: one that received drainage before cryotherapy (DACE group) and another that had drainage after cryotherapy (CDAE group). The incidence of IOH was low in both groups and there was no statistically significant difference between them (DACE group = 2.9%; CDAE group = 4.3%; p = 0.43). Additionally, there was no significant difference in the rate of anatomical success with a single operation or in the visual outcome between the two groups. The study concluded that the surgical sequence of applying cryotherapy before drainage of SRF can be safely and effectively performed and should be part of the surgical repertoire for the management of retinal detachments (PUBMED:9290370).
Instruction: Microsatellite alterations and cell-free DNA analysis: could they increase the cytology sensitivity in the diagnosis of malignant pleural effusion? Abstracts: abstract_id: PUBMED:22551519 Microsatellite alterations and cell-free DNA analysis: could they increase the cytology sensitivity in the diagnosis of malignant pleural effusion? Background: The exact diagnosis of malignant pleural effusions (PE) is difficult and often requires combined procedures, because the cytological examination of pleural fluid does not detect tumoral cells in 40% of malignant effusion cases. The aim of this study was to analyze microsatellite alterations (MA) in malignant PE and to determine their diagnostic value as an additional test to cytological examination. The increase in cell-free DNA levels was also evaluated as a signal of probable malignancy. Methods: A total of 84 patients with PE were enrolled and underwent PE and whole blood and exhaled breath condensate analyses. Free DNA was measured by spectrophotometer analyses. DNA was extracted from all samples and analyzed for MA, using the microsatellite markers at chromosomes 3p, 12p, 5q, and 17p. Results: The microsatellite analysis of PE exhibited a higher percentage of alterations in malignant PE than in benign PE. In addition to this, cell-free DNA in PE was seen to be significantly more elevated in malignant than in benign PE. The sensitivity of the sole cytology increased considerably when patients showed at least one MA or DNA&gt;4 ng/μL in the PE. Conclusion: In conclusion, it was seen that the combination of the cytological examination with microsatellite analyses and cell-free DNA in pleural fluid could increase the sensitivity of the diagnosis in patients with PE who have a suspected malignancy, obviating the need for other invasive diagnostic procedures. abstract_id: PUBMED:17982637 Microsatellite DNA analysis does not distinguish malignant from benign pleural effusions. Distinguishing malignant from benign pleural effusions using routine cytology is a common diagnostic problem. Recently, genetic alterations, including microsatellite instability (MSI) and loss of heterozygosity (LOH), have been described in malignant pleural effusions and proposed as methods improving diagnostics. The purpose of this study was to evaluate a panel of molecular markers for the detection of genetic alterations of cells in pleural effusions and to determine their diagnostic value as an additional test to cytologic examination. Pleural fluid and peripheral blood from 48 patients (36 male and 12 female, median age 71 years) were analyzed. Twenty-six patients had malignant pleural effusion, including 23 lung cancer and three metastatic non-pulmonary carcinoma. The control group consisted of 22 patients with benign pleural effusions. Only 14 malignancy-associated pleural effusions were cytology-positive for malignant cells (54%), whereas all benign pleural effusions were negative. DNA was extracted from all the samples and analysed for MSI and/or LOH using the following microsatellite markers: D3S1234, D9S171, D12S363, D17S250, D5S346 and TP53Alu, located at five chromosomal regions: 3p, 9p, 12q, 17q, 5q. Microsatellite analysis of the pleural fluid pellet exhibited genetic alterations in two neoplastic pleural fluid cases and in one inflammatory case. Two out of 26 (7.6%) patients with malignant pleural effusion showed genetic alterations. One exhibited MSI in three different microsatellite markers (D17S250, D9S171, D3S134) and the other showed LOH in marker D3S134. One out of 22 (4.5%) patients with benign pleural effusion showed LOH in marker D3S134. In conclusion, genetic alterations at the level of microsatellite DNA, were detected only in very few cases of malignant pleural effusions, and in one case of benign pleural effusion. Thus, our data suggest that microsatellite DNA analysis does not facilitate the diagnosis of malignant pleural effusion. abstract_id: PUBMED:29363841 A Combined test using both cell sediment and supernatant cell-free DNA in pleural effusion shows increased sensitivity in detecting activating EGFR mutation in lung cancer patients. Introduction: The aim of this study was to examine whether a combined test using both cell sediment and supernatant cytology cell-free DNA (ccfDNA) is more useful in detecting EGFR mutation than using cell sediment DNA or supernatant ccfDNA alone in pleural effusion of lung cancer patients. Methods: A total of 74 lung adenocarcinoma patients with paired samples between primary tumour and corresponding metastatic tumour with both cell sediment and supernatant ccfDNA of pleural effusion cytology were enrolled in this study. Cell sediment and supernatant ccfDNA were analysed separately for EGFR mutations by polymerase chain reaction. Results: Out of 45 patients with mutant EGFR in primary tumours, EGFR mutations were detected in 23 cell sediments of corresponding metastases (sensitivity; 51.1%) and 20 supernatant ccfDNA corresponding metastases (sensitivity; 44.4%). By contrast, the combined test detected EGFR mutations in 27 corresponding metastases (sensitivity; 60.0%), and had a higher sensitivity than the cell sediment or the supernatant ccfDNA alone (P &lt; .05). Out of 45 patients with mutant EGFR, 24, three and 18 were cytologically diagnosed as positive, atypical or negative, respectively. The detection rate in the combined test was highest (95.8%) in the positive group, and mutant EGFR was also detected in four of 18 samples (22.2%) in the negative group. Conclusions: A combined test using both cell sediment DNA and supernatant ccfDNA samples increases the concordance rate of EGFR mutations between primary tumour and corresponding metastases. Our findings indicate that supernatant ccfDNA is useful even in cases where the cytological diagnosis is negative. abstract_id: PUBMED:15013581 p53 and FHIT mutations and microsatellite alterations in malignancy-associated pleural effusion. Cancer is a genetic disease and thus is influenced by oncogenes and tumor suppressor genes. To determine whether the genetic analysis of pleural fluid can be used to diagnose malignant effusion, we investigated p53 and FHIT mutations and microsatellite alterations (MA) in the pleural fluid of 40 patients with pleural effusion associated with malignancy (ME) and in the pleural fluid of 17 patients with tuberculous pleurisy (TB) as a control group. p53 mutations were detected in five ME patients (13%) and in no TB patient, and FHIT mutations were detected in seven ME patients (18%) and two TB patients (12%). For four microsatellite markers, D3S1234, D3S1285, D9S171, and TP53, in ME patients, loss of heterozygosity (LOH) was seen in 10 (25%), 5 (13%), 10 (25%), and 6 patients (15%), respectively, and microsatellite instability (MI) in 6 (15%), 0 (0%), 1 (3%), and 3 patients (8%), respectively. Using the same markers, in TB patients, LOH was seen in three (18%), one (6%), three (18%), and one (6%), respectively, and MI in one (6%), zero (0%), zero (0%), and zero (0%), respectively. Twenty-five ME cases (63%) exhibited MA (LOH or MI) in at least one marker. Moreover, in four (80%) of five ME cases with negative cytology and no carcinoembryonic antigen increase in pleural fluid, MAs were identified. In ME, positive cytology was found in 42.5%, and positive MA, using four markers, in 63%. Although still limited in terms of sensitivity and specificity, this study shows that molecular diagnostic strategies could enhance the diagnostic yield in cases of malignant effusion. abstract_id: PUBMED:38316884 Cell-free DNA methylation analysis as a marker of malignancy in pleural fluid. Diagnosis of malignant pleural effusion (MPE) is made by cytological examination of pleural fluid or histological examination of pleural tissue from biopsy. Unfortunately, detection of malignancy using cytology has an overall sensitivity of 50%, and is dependent upon tumor load, volume of fluid assessed, and cytopathologist experience. The diagnostic yield of pleural fluid cytology is also compromised by low abundance of tumor cells or when morphology is obscured by inflammation or reactive mesothelial cells. A reliable molecular marker that may complement fluid cytology for the diagnosis of malignant pleural effusion is needed. The purpose of this study was to establish a molecular diagnostic approach based on pleural effusion cell-free DNA methylation analysis for the differential diagnosis of malignant pleural effusion and benign pleural effusion. This was a blind, prospective case-control biomarker study. We recruited 104 patients with pleural effusion for the study. We collected pleural fluid from patients with: MPE (n = 48), indeterminate pleural effusion in subjects with known malignancy or IPE (n = 28), and benign PE (n = 28), and performed the Sentinel-MPE liquid biopsy assay. The methylation level of Sentinel-MPE was markedly higher in the MPE samples compared to BPE control samples (p &lt; 0.0001) and the same tendency was observed relative to IPE (p = 0.004). We also noted that the methylation signal was significantly higher in IPE relative to BPE (p &lt; 0.001). We also assessed the diagnostic efficiency of the Sentinel-MPE test by performing receiver operating characteristic analysis (ROC). For the ROC analysis we combined the malignant and indeterminate pleural effusion groups (n = 76) and compared against the benign group (n = 28). The detection sensitivity and specificity of the Sentinel-MPE test was high (AUC = 0.912). The Sentinel-MPE appears to have better performance characteristics than cytology analysis. However, combining Sentinel-MPE with cytology analysis could be an even more effective approach for the diagnosis of MPE. The Sentinel-MPE test can discriminate between BPE and MPE. The Sentinel-MPE liquid biopsy test can detect aberrant DNA in several different tumor types. The Sentinel-MPE test can be a complementary tool to cytology in the diagnosis of MPE. abstract_id: PUBMED:36693563 High Yield of Pleural Cell-Free DNA for Diagnosis of Oncogenic Mutations in Lung Adenocarcinoma. Background: Pleural cytology is currently used to assess targetable mutations in patients with advanced lung adenocarcinoma. However, it is fraught with low diagnostic yield. Research Question: Can pleural cell-free DNA (cfDNA) be used to assess targetable mutations in lung adenocarcinoma patients with malignant pleural effusions (MPE)? Study Design And Methods: Patients with lung adenocarcinoma MPE were recruited prospectively between January 2017 and September 2021. Oncogenic mutations were assessed by treating providers using pleural fluid cytology or lung cancer biopsies. Pleural and plasma cfDNA were used to assess the mutations using next-generation sequencing (NGS). Results: Fifty-four pleural fluid samples were collected from 42 patients. The diagnostic yield to detect oncogenic mutations for pleural cfDNA, pleural cytology, biopsy, and plasma cfDNA was 49/54 (90.7%), 16/33 (48.5%), 22/25 (88%), and 24/32 (75%), respectively, P &lt; .001. The agreement of mutations in positive samples between pleural cfDNA and pleural cytology was 100%, whereas the agreement of pleural cfDNA with biopsies was 89.4%. The median concentration (interquartile range) of pleural cfDNA was higher than plasma: 28,444 (4,957-67,051) vs 2,966.5 (2,167-5,025) copies of amplifiable DNA per mL, P &lt; .01. Median of 5 mL (interquartile range, 4.5-5) of pleural fluid supernatant was adequate for cfDNA testing. Interpretation: The diagnostic yield of pleural cfDNA NGS for oncogenic mutations in lung adenocarcinoma patients is comparable to tumor biopsies and higher than pleural cytology and plasma cfDNA. The pleural cfDNA can be longitudinally collected, can be readily incorporated in clinical workflow, and may decrease the need for additional biopsies. abstract_id: PUBMED:36398618 DNA image cytometry ploidy analysis technique improves the detection rate of pleural effusion cytology. Objective: To explore the clinical diagnostic value of DNA image cytometry (DNA-ICM) ploidy analysis in malignant pleural effusion cancer screening, this study analyzed the effect of exfoliated cell smears (ECSs), cell blocks (CBs), and immunochemistry. Method: A total of 830 cases of pleural effusion were considered for the DNA-ICM ploidy analysis. The ECSs were centrifuged, the CBs were formed, and the DNA-ICM ploidy analysis was carried out in the diagnosis of malignant pleural effusion. Immunochemistry and biopsy was applied to differentiate between benign and malignant pleural effusion and to determine the source of the latter. The sensitivity and specificity differences between the three methods alone and in combination were compared. Results: The sensitivity of the DNA-ICM, ECS, and CB methods was 96.28%, 94.93%, and 95.95%, respectively, and the specificity of each method was 86.52%, 87.08%, and 86.14%, respectively. The sensitivity and specificity of the combined diagnosis method were 99.32% and 75.09%, respectively. Among the 22 cases diagnosed as positive in the DNA-ICM ploidy analysis but negative in the ECS and CB analyses, four cases were diagnosed as positive by comprehensive clinical diagnosis. Conclusion: The sensitivity and specificity of DNA-ICM ploidy analysis are high; the positive detection rate of pleural fluid cytology is effectively increased, and the missed detection rate of cell pathologies is effectively reduced. The combination of the three methods significantly improves the specificity and sensitivity of the diagnosis of malignant pleural effusion, and immunochemistry with CBs can be used to accurately analyze the primary tumor site. abstract_id: PUBMED:26234638 Pleural fluid cell-free DNA in parapneumonic pleural effusion. Objectives: To measure the accuracy of pleural fluid cell-free DNA (cfDNA) concentration for diagnosis of parapneumonic pleural effusions (PPE). Design And Methods: We studied pleural fluids obtained by thoracocentesis in patients with pleural effusion. DNA was automatically extracted from pleural fluid using the MagNa Pure Compact instrument (Roche Diagnostics), and was measured by a real-time quantitative PCR assay for the β-globin gene using a Light-Cycler 480 Real-Time PCR instrument (Roche Diagnostics). Patients were classified into two groups according to the etiology of pleural effusion: PPE and NOT PPE. The diagnostic accuracy was determined using receiver operating characteristic (ROC) techniques by analyzing the area under the ROC curve (AUC). Results: We studied 78 patients with ages between 1 and 86 years old (median=64). Sixteen patients were PPE and 62 were NOT PPE (24 transudative, 30 malignant and 8 other etiology). Pleural fluid cfDNA concentration was higher in patients with PPE (median=46,240 ng/mL) than in those with NOT PPE (median=224 ng/mL). The AUC value was 0.907 (p&lt;0.0001) and the optimal cut-off value was 6740 ng/mL exhibiting 87.5% sensitivity and 80.6% specificity. Also, there were significant differences between transudative and exudative effusions according to pleural fluid cfDNA concentration (p&lt;0.0001). The AUC value was 0.994 and the optimal cut-off value was 162ng/mL exhibiting 100% sensitivity and 96.3% specificity. Conclusions: Pleural fluid cfDNA concentration showed high accuracy for diagnosis of PPE and to discriminate between transudative and exudative effusions. abstract_id: PUBMED:38072300 Performance of SHOX2 and RASSF1A methylation assay in supernatants and matched cell pellets for the diagnosis of malignant pleural effusion. Background: It is difficult to distinguish between malignant pleural effusion (MPE) and benign pleural effusion (BPE). The purpose of this study was to determine the best specimen type by evaluating the DNA methylation status of SHOX2 and RASSF1A in 3 matched PE components. Methods: In total, 94 patients were enrolled, including 45 MPE, 35 BPE, and 14 undefined PE (UPE) with malignancies. PE samples were processed into supernatants, fresh-cell pellets, and formalin-fixed and paraffin-embedded (FFPE) cell blocks, respectively. A quantitative real-time PCR was used to detect the methylation status of SHOX2 and RASSF1A. Results: SHOX2 and RASSF1A methylation levels were significantly higher in the 3 MPE sample types than those of BPE (P &lt; 0.05). The area under the curve using cell-free DNA (cf-DNA) was the highest. The detection sensitivity of SHOX2 and RASSF1A in fresh-cell DNA, cf-DNA and FFPE cell-block were 71.1% (32/45), 97.8% (44/45) and 66.7% (28/42), respectively, with specificities of 97.1% (34/35), 94.3% (33/35), and 96.9% (31/32). Notably, a combination of the cytological analysis and cf-DNA methylation assay showed an increase in positivity rate from 75.6% to 100%. Conclusions: The SHOX2 and RASSF1A methylation assay using cf-DNA, the primary recommended specimen type, can excellently increase the diagnostic sensitivity of MPE. A combination of methylation assay with cytological analysis can be used for auxiliary diagnosis of PE. abstract_id: PUBMED:31466063 DNA Ploidy Analysis and Cell Block Immunohistochemistry in the Diagnosis of Malignant Effusions. Background: Effusion cytology is a major diagnostic tool in medicine and has both therapeutic and prognostic implications. One of the dilemmas encountered is the differentiation between atypical cells and reactive mesothelial cells. The use of ancillary tools can reduce this grey zone and help to achieve a definitive diagnosis. Objectives: The main objective of this study was to evaluate the role of flow cytometry (FCM) and cell block with immunohistochemistry (IHC), along with the clinicoradiological investigations, to achieve a final diagnosis in effusion cytology to the maximum extent possible. Method: A prospective study was conducted. Effusion fluids, showing adequate amount and cellularity, were processed for conventional cytology, ploidy analysis by FCM, and cell block analysis, followed by IHC wherever required. Conventional cytological analysis was done by 2 independent pathologists, to look for interobserver variation, if any. The final result was achieved on the basis of integration of the results of the aforementioned studies, cytological details, clinicoradiological information, tissue biopsy findings, and follow-up. Result: A total of 90 samples were analyzed. On cytological examination, observer I categorized 60% samples as benign and 18.8% (n = 17) as malignant versus 58% categorized as benign and 23.3% (n = 21) as malignant by observer II. Observer I reported 19 (21.1%) equivocal cases and observer II reported 16 (17.7%). When both pathologists were considered together, the number of equivocal cases increased to 20. Sensitivity and specificity of FCM were 96.67 and 100%, respectively, and 100% for the cell block. On combining all techniques, the equivocal cases were resolved and a total of 33 cases were reported as malignant. However, 3 cases could still not be categorized and were labeled inconclusive. Conclusion: Conventional cytology combined with cell block IHC and FCM has the potential to minimize the requirement of tissue biopsy for confirmation. If the first sample is used judiciously for all the techniques, this may reduce the requirement for a second sample and possibly also the time required for a definite diagnosis and the initiation of therapy. Answer: Microsatellite alterations (MA) and cell-free DNA (cfDNA) analysis have been studied as potential adjuncts to cytology in the diagnosis of malignant pleural effusion (MPE). The sensitivity of cytological examination alone does not detect tumor cells in a significant number of malignant effusion cases. The addition of these molecular analyses could potentially increase the diagnostic sensitivity. One study found that microsatellite analysis of pleural effusion (PE) exhibited a higher percentage of alterations in malignant PE than in benign PE, and cell-free DNA in PE was significantly more elevated in malignant than in benign PE. The sensitivity of cytology increased considerably when patients showed at least one MA or DNA levels greater than 4 ng/μL in the PE. This suggests that combining cytological examination with microsatellite analyses and cell-free DNA in pleural fluid could increase the sensitivity of the diagnosis in patients with PE who have a suspected malignancy, potentially obviating the need for other invasive diagnostic procedures (PUBMED:22551519). However, another study reported that microsatellite DNA analysis did not distinguish malignant from benign pleural effusions effectively. Genetic alterations at the level of microsatellite DNA were detected only in very few cases of malignant pleural effusions, and in one case of benign pleural effusion, suggesting that microsatellite DNA analysis does not facilitate the diagnosis of malignant pleural effusion (PUBMED:17982637). A study examining the combined use of cell sediment DNA and supernatant cell-free DNA samples found an increased concordance rate of EGFR mutations between primary tumors and corresponding metastases, indicating that supernatant cfDNA is useful even in cases where the cytological diagnosis is negative (PUBMED:29363841). Furthermore, DNA image cytometry ploidy analysis has been shown to effectively increase the positive detection rate of pleural fluid cytology and reduce the missed detection rate of cell pathologies. The combination of DNA-ICM with conventional cytology significantly improves the specificity and sensitivity of the diagnosis of malignant pleural effusion (PUBMED:36398618). In conclusion, while some studies suggest that microsatellite alterations and cell-free DNA analysis could increase the sensitivity of cytology in diagnosing malignant pleural effusion, the evidence is not entirely consistent. The effectiveness of these molecular techniques may depend on the specific markers and methods used, as well as the context in which they are applied.
Instruction: Should ancillary brain blood flow analyses play a larger role in the neurological determination of death? Abstracts: abstract_id: PUBMED:20706879 Should ancillary brain blood flow analyses play a larger role in the neurological determination of death? Purpose: We present two patients who regained spontaneous respiration following clinical neurological determination of death (NDD) while ancillary radiological imaging demonstrated brain blood flow. Clinical Features: A 26-yr-old male with chronic otitis media presented with a Glasgow Coma Scale (GCS) score of 3 and fixed 7-mm pupils. Computed tomography demonstrated right-sided mastoiditis and a temporal lobe abscess associated with uncal herniation. The patient was diagnosed brain dead seven-hr later when motor responses and brainstem reflexes were absent and his apnea test was positive. Approximately 28-hr after NDD, during post-declaration organ resuscitation, the patient regained spontaneous respiration and magnetic resonance imaging revealed brain blood flow. Spontaneous respirations persisted for five-days before cardiovascular collapse occurred. In the second case, a 50-yr-old female presented with a GCS score of 3 and fixed 6-mm pupils following a traumatic brain injury and a five-minute cardiac arrest. The patient was deemed clinically brain dead six-hr later when physical examination revealed absent motor responses and brainstem reflexes and her apnea test was positive. As confirmation of brain death, a cerebral radionuclide angiogram was performed, which surprisingly revealed intracranial arterial flow. During organ resuscitation, 11-hr after NDD, the patient regained spontaneous respiration. She expired hours after family decision to withdraw treatment. Conclusion: For both patients, several unrecognized confounding factors for NDD were present. These cases illustrate the difficulties encountered by experienced clinicians in determining brain death using clinical criteria alone, and they suggest that more routine use of ancillary brain blood flow analyses should be recommended. abstract_id: PUBMED:18574925 Brain blood flow in the neurological determination of death: Canadian expert report. The neurological determination of death (NDD, brain death) is principally a clinical evaluation. However, ancillary testing is required when there are factors confounding the clinical determination or when it is impossible to complete the minimum clinical criteria. At the time of the 2003 Canadian Forum clarifying the criteria for brain death, 4-vessel cerebral angiography or radionuclide angiography were the recommended tests and the electroencephalogram was no longer supported. At the request of practitioners in the field, the Canadian Council for Donation and Transplantation sponsored the assembly of neuroradiology and neurocritical care experts to make further recommendations regarding the use of ancillary testing. At minimum, patients referred for ancillary testing should be in a deep unresponsive coma with an established etiology, in the absence of reversible conditions accounting for the unresponsiveness and the clinical examination should be performed to the fullest extent possible. For newborns, children and adults, demonstration of the absence of brain blood flow by following recommended imaging techniques fulfill the criteria for ancillary testing: 1. radionuclide angiography or CT angiography 2. traditional 4-vessel angiography 3. Magnetic resonance angiography or Xenon CT. In the absence of neuroimaging, an established cardiac arrest, as defined by the permanent loss of circulation, fulfills the ancillary criteria for the absence of brain blood flow. Acknowledging the existing limitations in this field, further research validating current or evolving techniques of brain blood flow imaging are recommended. abstract_id: PUBMED:18973057 A review of ancillary tests in evaluating brain death. The neurological determination of death (NDD) is primarily considered to be clinical. However, situations may arise where confounding factors make this clinical assessment difficult or impossible. As a result, ancillary tests have been developed in order to aid in the confirmation of brain death. As assessment of neuronal electrical activity; electroencephalography (EEG) is no longer recommended in this determination, tools assessing cerebral perfusion, as reflected by the presence or absence of cerebral blood flow (CBF), are the mainstay of NDD. The preferred ancillary test currently is Hexamethylpropylene amine oxime-single photon emission computed tomography (HMPAO SPECT) radionuclide angiography. When this is not available, or is equivocal, 4-vessel cerebral angiography can be used to determine the presence or absence of intracranial blood flow. However, as cerebral angiography has its own limitations, other techniques are sought by physicians in the Intensive Care and Neuro-intensive Care settings to replace cerebral angiography. In this article, we briefly review the history of diagnosis of brain death, pathophysiologic issues in making this determination, and currently available CBF imaging techniques, discussing each in turn with respect to their utility in the diagnosis of brain death. abstract_id: PUBMED:32782467 Neurological determination of death in isolated brainstem lesions: A case report to highlight the issues involved. The neurological determination of death in patients with isolated brainstem lesions or by disruption of the posterior cerebral circulation is uncommon and many intensivists may never see such a case in their career. It is also the only major difference between the "whole brain" and "brain stem" formulations for the neurological determination of death. We present a case of a patient with infarction of the structures supplied by the posterior cerebral circulation in whom death was diagnosed using neurological criteria, to illustrate the issues involved. We also suggest that international consensus may be achieved if ancillary tests, such as CT angiography, are made mandatory in this situation o demonstrate loss of blood flow in the anterior cerebral circulation as well the posterior circulation. abstract_id: PUBMED:37131034 Ancillary radionuclide perfusion studies in the determination of death by neurologic criteria: methods, interpretation, and lexicon-a user guide for the clinician. Radionuclide perfusion studies have an established ancillary role in determination of death by neurologic criteria (DNC). While critically important, these examinations are not well understood by individuals outside of the imaging specialties. The purpose of this review is to clarify relevant concepts and nomenclature and provide a lexicon of relevant terminology of value to non-nuclear medicine practitioners who wish to better understand these examinations. Radionuclides were first employed to evaluate cerebral blood flow in 1969. Radionuclide DNC examinations that use lipophobic radiopharmaceuticals (RPs) entail a flow phase followed immediately by blood pool images. On flow imaging, presence of intracranial activity within the arterial vasculature is scrutinized following arrival of the RP bolus into the neck. Lipophilic RPs designed for functional brain imaging were introduced to nuclear medicine in the 1980s and were engineered to cross the blood-brain-barrier and be retained in the parenchyma. The lipophilic RP 99mTc-hexamethylpropyleneamine oxime (99mTc-HMPAO) was first used as an ancillary investigation in DNC in 1986. Examinations using lipophilic RPs entail both flow and parenchymal phase images. According to some guidelines, parenchymal phase uptake should be assessed by tomographic imaging, while other investigators consider simple planar imaging sufficient. Findings of perfusion on either the flow or parenchymal phase of the examination effectively precludes DNC. If the flow phase is omitted or somehow compromised, the parenchymal phase remains sufficient for DNC. A priori, parenchymal phase imaging is superior to flow phase imaging for several reasons and lipophilic RPs are favoured over lipophobic RPs in that both flow and parenchymal phase imaging are performed. Disadvantages of lipophilic RPs are increased cost and the need to procure them from a central laboratory, which can prove difficult, especially outside usual working hours. According to most current guidelines, both lipophilic and lipophobic RP categories are acceptable for use in ancillary investigations in DNC, with a growing overt preference for studies using the lipophilic RPs based on their ability to capture the parenchymal phase. The new adult and pediatric Canadian recommendations favour use of lipophilic RPs to variable degrees, specifically 99mTc-HMPAO, the lipophilic moiety which has undergone the greatest validation. Although ancillary use of radiopharmaceuticals is quite settled in multiple DNC guidelines and best practices, several areas of further research remain open to investigation. Examens auxiliaires de perfusion nucléaire pour la détermination du décès selon des critères neurologiques : méthodes, interprétation et lexique-un guide de l'utilisateur à l'intention du clinicien. abstract_id: PUBMED:32648194 Ancillary Testing for Determination of Death by Neurologic Criteria Around the World. Objective: We sought to identify similarities and differences in the diagnostic requirements for ancillary testing for determination of brain death/death by neurologic criteria (BD/DNC) around the world. Methods: We reviewed diagnostic requirements for ancillary testing for BD/DNC in 78 unique official national BD/DNC protocols obtained from contacts worldwide between January 2018 and April 2019. Results: Details provided on the performance and interpretation of ancillary tests for determination of BD/DNC were variably provided and inconsistent. Approximately half of all protocols that included each ancillary test provided details about study performance: 63% of protocols that included conventional cerebral angiography, 55% of protocols that included electroencephalography, 50% of protocols that included somatosensory evoked potentials, 48% of protocols that included transcranial Doppler ultrasonography, 43% of protocols that included nuclear medicine flow study and 41% of protocols that included brainstem auditory evoked potentials. Similarly, about half of all protocols that included each ancillary test provided details about study interpretation: 66% of protocols that included electroencephalography, 59% of protocols that included brainstem auditory evoked potentials, 56% of protocols that included somatosensory evoked potentials, 55% of protocols that included transcranial Doppler ultrasonography, 52% of protocols that included conventional cerebral angiography and 49% of protocols that included nuclear medicine flow study. Interpretation: Diagnostic requirements for ancillary testing in BD/DNC determination vary around the world. We hope that the World Brain Death Project will improve worldwide consensus on the diagnostic requirements for ancillary testing in BD/DNC, both for performance and interpretation. abstract_id: PUBMED:16738299 Brief review: the role of ancillary tests in the neurological determination of death. Purpose: The acceptance of brain death by society has allowed for the discontinuation of "life support" and the transplantation of organs. In Canada we accept the clinical criteria for brain death (essentially brain stem death) when they can be legitimately applied. Ancillary tests are needed when these clinical criteria cannot be applied or when there are confounders. Ancillary tests include tests of intracranial blood circulation, electrophysiological tests, metabolic studies and tests for residual vagus nerve function. The ideal confirmatory test is one which, when positive, would be incompatible with recoverable brain function (i.e., has no false positives), is not influenced by drugs or metabolic disturbances and which can be readily applied. A critical review of the various ancillary tests used to support the neurological determination of death (brain death) was undertaken. Methods: A literature review based on a MEDLINE search of relevant articles published between January 1966 to January 2005 was undertaken. Results: Tests of whole brain perfusion/intracranial blood circulation are the only ones that meet stated criteria. Conclusions: At present only cerebral angiography and nuclear medicine tests of perfusion are accepted by Canadian standards, but computed tomography and magnetic resonance angiography should prove to be suitable. Transcranial Doppler studies may be suitable for specific cases once appropriate guidelines are established. abstract_id: PUBMED:31073456 Ancillary Studies in Evaluating Pediatric Brain Death. When confounding variables exist that inhibit the ability to diagnose brain death clinically in pediatric patients, ancillary tests may provide additional information for the practitioner in evaluating for the presence or absence of brain death. Multiple options exist but differ in availability, ease of administration, cost, safety profile, and reliability to accurately diagnose brain death. An important desirable quality of an ancillary test is eliminating false positives, which imply brain death when brain death is in fact not present. More commonly available ancillary studies include electroencephalograms, brain angiography through various modalities, brain stem auditory evoked potentials, and transcranial Doppler ultrasound. At this time, there is not an ancillary test with 100% reliability in diagnosing brain death that can replace the clinical brain death exam. Therefore, practitioners need to understand the strengths and limitations of the ancillary studies available at their hospital. abstract_id: PUBMED:32266423 Brain death confirmation. The number of patients waiting for a new organ has continuously decreased in recent years. Brain death confirmation plays an important role in the clinical routine concerning a possible organ transplantation. In many countries a strictly defined protocol prescribes the required neurological examination and ancillary test criteria. Therefore, many years of experience and expertise is absolutely necessary for neurologists and neuroradiologists. Pitfalls can sometimes be very challenging for the treating physicians. abstract_id: PUBMED:35864656 Challenges in Brain-Death Certification in India. The determination of Brain Death (BD)/Death by neurological criteria (DNC) is now widely accepted among various international societies following the World Brain Death project recommendation. As per the World Brain Death project, ancillary testing should be performed when standard brain-death examination components are inconclusive or cannot be performed. BD was defined legally in 1994 under the Transplantation of Human Organs Act (THOA). However, even after 27 years of the formulated law, there are no guidelines in the THOA regarding the determination of BD using ancillary tests. The present brief report describes two instances where ancillary tests like four-vessel angiography and transcranial doppler-aided brain-death certification were done. It is the first available literature from our country where ancillary tests aided in confirmation of BD when the standard clinical components of DNC could not be performed. Answer: The question of whether ancillary brain blood flow analyses should play a larger role in the neurological determination of death (NDD) is supported by several arguments presented in the provided abstracts. Firstly, there are cases where patients who were clinically determined to be brain dead later exhibited spontaneous respiration, and ancillary imaging demonstrated brain blood flow, suggesting that clinical criteria alone may not always be sufficient for accurate NDD (PUBMED:20706879). This indicates that ancillary tests could provide additional confirmation in cases where clinical assessments may be confounded. Secondly, expert reports and guidelines, such as those from the Canadian Council for Donation and Transplantation, recommend the use of ancillary testing when clinical criteria are confounded or cannot be fully met. Recommended imaging techniques for demonstrating the absence of brain blood flow include radionuclide angiography, CT angiography, traditional 4-vessel angiography, magnetic resonance angiography, or Xenon CT (PUBMED:18574925). Additionally, ancillary tests, particularly those assessing cerebral perfusion, have become the mainstay of NDD when clinical assessment is difficult or impossible. Hexamethylpropylene amine oxime-single photon emission computed tomography (HMPAO SPECT) radionuclide angiography is the preferred ancillary test, with 4-vessel cerebral angiography used when the former is not available or is equivocal (PUBMED:18973057). Furthermore, in cases of isolated brainstem lesions, ancillary tests such as CT angiography may be necessary to demonstrate loss of blood flow in both the anterior and posterior cerebral circulation, which could help achieve international consensus on NDD (PUBMED:32782467). Radionuclide perfusion studies are also highlighted for their established ancillary role in DNC, with a growing preference for studies using lipophilic radiopharmaceuticals like 99mTc-HMPAO due to their ability to capture the parenchymal phase (PUBMED:37131034). However, it is important to note that there is variability and inconsistency in the diagnostic requirements for ancillary testing for BD/DNC determination around the world, and further research is recommended to validate current or evolving techniques of brain blood flow imaging (PUBMED:32648194).
Instruction: Can relative strength between the back and knees differentiate lifting strategy? Abstracts: abstract_id: PUBMED:20415155 Can relative strength between the back and knees differentiate lifting strategy? Objective: This study investigated whether relative strength between the back and knees can differentiate and predict lifting strategy and the effects of gender, load magnitude, and knowledge of strength on the strategy. Background: Although muscular strength is thought to play a vital role in the mechanics of lifting, how localized joint strengths and their relations influence lifting strategy remains unclear. Method: Thirty-two participants (16 men and 16 women) underwent isokinetic strength tests and were then divided into two groups: one provided with the knowledge of their strength test results and the other not. They subsequently performed the same set of simulated lifting tasks while their lifting kinematics were being recorded. Postural indices to quantify the lifting strategies were derived from the kinematic data. Results: The ratio of back strength versus total knee strength and gender had significant effects on measures quantifying the lifting strategy. A statistical model incorporating gender, strength, and anthropometry achieved an R2 value of .64 and predicted correctly 76% of lifting strategies used by individual participants. Conclusion: Individuals with back strength greater than their total knee strength tended to use a back-preferred lift strategy, and vice versa, suggesting that muscular strength is a determining factor of lifting strategy. Application: An emphasis on additional knee strengthening in a training program may change the tendency of using and overstressing the back. APPLICATION of modeling and simulation technology for ergonomics design can be enhanced by more individually and accurately specified lifting strategies based on anthropometry and strength profiles. abstract_id: PUBMED:35413576 The influence of sex and strength capacity on normalized low-back exposure during lifting: Insights from a joint power analysis. Objective: Investigate the influence of sex, strength capacity, and relative load mass on low-back exposure and lower extremity joint power generation in backboard lifting. Background: Sex and strength have been shown to influence lifting strategy, but without load mass being scaled to strength it is unknown which factor influences low-back exposures, and whether there are interactions with load mass. Methods: Motion capture and force plate data from 28 participants were collected during backboard lifting at load masses scaled to strength capacity. Differences in normalized peak low-back moment, peak knee-to-hip power magnitude ratio and timing were tested as a function of sex, strength, and load mass. Results: Stronger participants had lower normalized peak low-back moments (average 32% change from low-capacity across all load masses), with no significant sex effect (p = 0.582). As load mass increased, normalized peak low-back moment, peak knee-to-hip power magnitude and synchronicity decreased. Conclusion: Training to both increase strength capacity and hip-joint power generation may be a strategy to reduce low-back exposure in backboard lifting. abstract_id: PUBMED:11295889 Lifting strategy and stability in strength-impaired elders. Study Design: Ninety-six subjects underwent biomechanical analysis of freestyle box lifting. Objectives: To relate lifting strategy to lower extremity muscle strength and postural stability in functionally limited elders. Summary Of Background Data: Back pain and postural instability in elders is rampant and poorly understood. Much of the literature on lifting relates to young subjects. Methods: Lifting strategy for 91 functionally limited elders was classified by timing of peak power in the back and knee joints. Isometric hip and knee extensor strength and postural stability were compared among strategy classifications. Postural stability was analyzed by measuring center of gravity (CG) displacement during lifting. Results: Three lift strategy groups were established: back-lift, or back dominant strategy (BDS); leg-lift, or leg dominant strategy (LDS); and leg-dominant back-first mixed strategy (LDB). Subjects with relatively strong hip and knee extensors used leg dominant strategy; subjects with relatively strong knee, but weak hip, extensors used leg-dominant back-first mixed strategy; and subjects with weak hip and knee extensors used back dominant strategy. Leg dominant strategy and leg-dominant back-first mixed strategy engendered less center of gravity displacement and thus were posturally more stable than the back dominant strategy. Conclusion: Subjects apparently chose their lift strategy based on their hip and knee extensor strength. Weaker elders using a less stable back dominant strategy could be susceptible to falls and subsequent long bone and vertebral body fractures. Clinicians could identify at-risk elders by muscle testing. Beyond emphasizing strength and endurance exercise in elderly patients, weak elders should be taught to use a leg dominant lifting strategy, or if they are not physically able, to use a combined back/leg strategy. abstract_id: PUBMED:15513714 Foot positioning instruction, initial vertical load position and lifting technique: effects on low back loading. This study investigated the effects of initial load height and foot placement instruction in four lifting techniques: free, stoop (bending the back), squat (bending the knees) and a modified squat technique (bending the knees and rotating them outward). A 2D dynamic linked segment model was combined with an EMG assisted trunk muscle model to quantify kinematics and low back loading in 10 subjects performing 19 different lifting movements, using 10.5 kg boxes without handles. When lifting from a 0.05 m height with the feet behind the box, squat lifting resulted in 19.9% (SD 8.7%) higher net moments (p &lt; 0.001) and 17.0% (SD 13.2%) higher compression forces (p &lt; 0.01) than stoop lifting. This effect was reduced to 12.8% (SD 10.7%) for moments and a non-significant 7.4% (SD 16.0%) for compression forces when lifting with the feet beside the box and it disappeared when lifting from 0.5 m height. Differences between squat and stoop lifts, as well as the interaction with lifting height, could to a large extent be explained by changes in the horizontal L5/S1 intervertebral joint position relative to the load, the upper body acceleration, and lumbar flexion. Rotating the knees outward during squat lifts resulted in moments and compression forces that were smaller than in squat lifting but larger than in stoop lifting. Shear forces were small ( &lt; 300 N) at the L4/L5 joint and substantial (1100 - 1400 N) but unaffected by lifting technique at the L5/S1 joint. The present results show that the effects of lifting technique on low back loading depend on the task context. abstract_id: PUBMED:8753142 Study of isometric lifting strength in normal Chinese adults. The purposes of this study were to describe norms of isometric lifting strength for healthy Chinese adults and to investigate the effects of age, gender, body weight, and body height on lifting strength. Three types of lifting (i.e., arm, back, and leg lifting) were measured in a sample of 350 Chinese adults (172 men and 178 women) aged between 20 and 81 years. A Force Evaluation and Testing System (FET 5000) was used for strength measurements with three standard lifting positions. The average of two trials for each lifting strength test was used as the subjects' test score. The results showed that the greatest strengths for both sexes exist aged between late 20's and early 30's, with a general decline following thereafter. The magnitude of strength decline with age was most prominent in leg lifting strength among all lifting patterns. Males were stronger than females in all lifting patterns among the different age groups. All of the lifting strengths studied correlated positively with gender, body weight, and body height, and negatively with age in the analyses of simple correlation coefficients. In addition, stepwise multiple regression analyses showed that sex, age and body weight were most predictive of all types of lifting strengths. Body height was an effective predictor of back lifting strength. These findings and the establishment of data base can provide therapists with an objective evaluation regarding lifting strength of individuals for clinical use. abstract_id: PUBMED:23722573 Effect of mental processing on low back load while lifting an object. Study Design: Biomechanical study using a 3D motion analysis system conducted in a laboratory setting. Objective: To determine the effect of mental processing on low back load during lifting. Summary Of Background Data: In addition to ergonomic factors such as frequent lifting, work-related psychosocial factors are significant in the onset of disabling back pain. Although the ergonomic effects on low back load while lifting have been investigated and are widely accepted in the workplace, few studies have investigated the effect of psychosocial factors on low back load while lifting. Methods: Thirteen healthy subjects lifted a box from the ground in 4 different lifting tasks in randomized order: (1) squat posture with knees flexed; (2) stoop posture with knees extended; (3) squat posture with mental processing using arithmetic tasks; and (4) stoop posture with mental processing using arithmetic tasks. We used a 3D motion analysis system and 4 force plates to record kinematic and kinetic data. We calculated dynamic triaxial low back joint moments and low back compression force as index parameters of low back load under these experimental conditions. Results: Mental processing significantly increased peak low back compression force and low back extension moment, but not lateral flexion or rotation moment, while lifting in both lifting postures. Also, mental processing increased trunk bending angles in the squat posture. Although a tendency for decreased pelvic forward tilt was observed, mental processing did not significantly decrease forward pelvic tilt in the stoop posture. Conclusion: Mental processing during lifting tasks seems to affect both trunk and pelvis angles in the sagittal plane for squat and stoop postures, resulting in increased low back load. The present findings might help explain the effect of ergonomic demands of lifting tasks as well as psychosocial factors responsible for the onset of disabling back pain. Level Of Evidence: 2. abstract_id: PUBMED:16922195 Effects of a new industrial lifting belt on back muscular activity, hand force, and body stability during symmetric lifting. This work investigated how wearing a new design of back belt affects erector spinae activity, hand force, and body stability. The belt was first tested with static holding tasks and found to significantly decrease the back muscle activity. Actual lifting tasks were further carried out to test the effect of the belt. Ten male subjects performed a symmetric lifting task of low-lying loads (11 and 16 kg) at natural toting velocity, using either a squat or stoop lifting posture, both with and without a belt. The study measured various independent variables using electromyography (EMG), load cells, and motion capture device. The results demonstrated that the belt reduced the load on the erector spinae, as well as the triceps brachii and biceps brachii. The overall mean values of the peak (hand) force did not appear significantly affected while wearing the belt, but the force peaks appeared postponed. The belt did not alter body stability while lifting. From the present findings, the belt effectively changed the force distribution during lifting, at least reducing the muscle load on the back. The belt may be a potentially useful device for symmetric industrial lifting tasks. abstract_id: PUBMED:30819070 Peak Rate of Force Development and Isometric Maximum Strength of Back Muscles Are Associated With Power Performance During Load-Lifting Tasks. This study investigates the relationship between peak force and rate of force development (RFD) obtained from maximal voluntary isometric contraction (MVC) of the back muscles and the power produced during a loaded lifting task. A group of 27 resistance-trained and 41 recreationally physically active men performed a maximal isometric strength test of the back muscles and a deadlift to high pull while lifting progressively increasing weights. Peak RFD correlated significantly with the peak and mean power produced during a deadlift to high pull with lower weights (from 20 to 40 kg), with r values ranging from .941 to .673 and from .922 to .633. The r2 values ranged from .89 to .45 and from .85 to .40, explaining 89%-45% and 85%-40% of total variance. There were also significant relationships between MVC peak force and peak and mean values of power produced during a deadlift to high pull with weights ≥60 kg ( r in range from .764 to .888 and from .735 to .896). Based on r2, a moderate-to-high proportion of variance was explained (58%-79% and 54%-80%). These findings indicate that peak RFD obtained from MVC of the back muscles may be predictive of power performance during a lifting task at light loads. In addition to MVC peak force produced by back muscles, the ability of subjects to develop a high force in a short time should be evaluated in order to gain deeper insight into a loaded lifting performance, namely, in those prone to low back pain. abstract_id: PUBMED:33130385 Understanding individual differences in lifting mechanics: Do some people adopt motor control strategies that minimize biomechanical exposure. The movement strategy an individual uses to complete a lift can influence the resultant biomechanical exposure on their low back. We hypothesize that some lifters may choose a motor control strategy to minimize exposure to the low back, where others may not. Lower magnitudes of exposure to the low back coupled with less variability in lift-to-lift exposure and in features of movement strategy related to biomechanical exposure would support that such lifters consider minimizing exposure in their motor control strategy. We tested this hypothesis by investigating if differences in variability of low back exposure measures, as well as features of movement strategy related to resultant low back exposures differed across lifters. Twenty-eight healthy adults participated in the study where ten repetitions of a lifting task with the load scaled to 75% of participant's one-repetition maximum were completed. In all trials, whole-body kinematics and ground reaction forces were collected. Lifters were grouped as low, moderate or high relative exposure based on low back flexion angles and normalized L4/L5 extensor moments when lifting. Principal component analysis was used to identify independent movement strategy features, and statistical testing determined which features differed between high and low exposure lifts. Variability in low back exposures and movement features associated with relative biomechanical exposure were compared across lifter classifications. Significantly less variability was observed in low back exposures among the low exposure lifter group. Additionally, a trend towards lower variability in movement features associated with relative biomechanical exposure was also observed in low exposure lifters. These findings provide initial support for the hypothesis that some lifters likely define a motor control strategy that considers minimizing biomechanical exposure in addition to completing the lift demands. Future work should explore how state and trait-based factors influence an individual to consider biomechanical exposure within their motor control strategy in lifting. abstract_id: PUBMED:8303453 Isokinetic trunk strength and lifting strength measures. Differences and similarities between low-back-injured and noninjured workers. Fifty-eight back pain patients and 21 entry-level Postal Service workers without low-back pain were evaluated using a variety of lumbar function measures. Isolated trunk strength and full lifting strength were gauged with isokinetic and isometric methods. Lumbar range-of-motion was computed using toe-touch and goniometers. Conventional clinical techniques such as toe touch and straight leg raise were effective in distinguishing back-injured from normal subjects. Isometric and isokinetic peak force and torque tests failed to show significant differences between low-back pain and job applicant groups. When compared with published norms, our job applicant group was significantly deconditioned. Our data suggest that asymptomatic, deconditioned subjects could be mistaken for back-impaired patients or symptom magnifiers. Answer: Yes, relative strength between the back and knees can differentiate lifting strategy. The study described in abstract PUBMED:20415155 found that individuals with back strength greater than their total knee strength tended to use a back-preferred lift strategy, and vice versa. This suggests that muscular strength is a determining factor of lifting strategy. The study also developed a statistical model incorporating gender, strength, and anthropometry that predicted correctly 76% of lifting strategies used by individual participants, indicating a strong relationship between relative strength and lifting strategy. Additionally, the study in abstract PUBMED:11295889 supports this finding by showing that subjects chose their lifting strategy based on their hip and knee extensor strength. Weaker elders using a less stable back dominant strategy could be susceptible to falls and subsequent injuries. This study also suggests that individuals with relatively strong hip and knee extensors used a leg dominant strategy, while those with weak hip and knee extensors used a back dominant strategy. These findings are consistent with the notion that the relative strength of the back and knee muscles influences the choice of lifting strategy, which can have implications for both performance and injury risk.
Instruction: Does changing the configuration of a motor racing circuit make it safer? Abstracts: abstract_id: PUBMED:15728695 Does changing the configuration of a motor racing circuit make it safer? Objectives: To assess the pattern of injuries presenting to a racing circuit medical centre in two three-year periods before and after two chicanes were built into the track. Methods: Medical centre records were used to identify all patients assessed during the two time periods. Those referred to hospital were categorised by injury severity into three groups. Results: The proportions of those attending the medical centre that were referred and admitted to hospital were the same in both periods (12-13% and 3% respectively). During the two study periods, the risk of a severe injury for a car driver decreased from 0.1% to 0.03% (p&lt;0.05). For a motorcyclist, similar values were 0% and 0.2% (not significant). Conclusions: Chicanes have improved the safety of the racing circuit for car drivers, reducing the risk of injury. abstract_id: PUBMED:36283410 Post-embryonic remodeling of the C. elegans motor circuit. During development, animals can maintain behavioral output even as underlying circuitry structurally remodels. After hatching, C. elegans undergoes substantial motor neuron expansion and synapse rewiring while the animal continuously moves with an undulatory pattern. To understand how the circuit transitions from its juvenile to mature configuration without interrupting functional output, we reconstructed the C. elegans motor circuit by electron microscopy across larval development. We observed the following: First, embryonic motor neurons transiently interact with the developing post-embryonic motor neurons prior to remodeling of their juvenile wiring. Second, post-embryonic neurons initiate synapse development with their future partners as their neurites navigate through the juvenile nerve cords. Third, embryonic and post-embryonic neurons sequentially build structural machinery needed for the adult circuit before the embryonic neurons relinquish their roles to post-embryonic neurons. Fourth, this transition is repeated region by region along the body in an anterior-to-posterior sequence, following the birth order of neurons. Through this orchestrated and programmed rewiring, the motor circuit gradually transforms from asymmetric to symmetric wiring. These maturation strategies support the continuous maintenance of motor patterns as the juvenile circuit develops into the adult configuration. abstract_id: PUBMED:16799114 Does changing the configuration of a motor racing circuit make it safer? N/A abstract_id: PUBMED:31318331 A circuit-dependent ROS feedback loop mediates glutamate excitotoxicity to sculpt the Drosophila motor system. Overproduction of reactive oxygen species (ROS) is known to mediate glutamate excitotoxicity in neurological diseases. However, how ROS burdens can influence neural circuit integrity remains unclear. Here, we investigate the impact of excitotoxicity induced by depletion of Drosophila Eaat1, an astrocytic glutamate transporter, on locomotor central pattern generator (CPG) activity, neuromuscular junction architecture, and motor function. We show that glutamate excitotoxicity triggers a circuit-dependent ROS feedback loop to sculpt the motor system. Excitotoxicity initially elevates ROS, thereby inactivating cholinergic interneurons and consequently changing CPG output activity to overexcite motor neurons and muscles. Remarkably, tonic motor neuron stimulation boosts muscular ROS, gradually dampening muscle contractility to feedback-enhance ROS accumulation in the CPG circuit and subsequently exacerbate circuit dysfunction. Ultimately, excess premotor excitation of motor neurons promotes ROS-activated stress signaling that alters neuromuscular junction architecture. Collectively, our results reveal that excitotoxicity-induced ROS can perturb motor system integrity through a circuit-dependent mechanism. abstract_id: PUBMED:27703403 Epidemiology of racing injuries in Thoroughbred racehorses with special reference to bone fractures: Japanese experience from the 1980s to 2000s. This report describes the descriptive epidemiology of racing fractures that occurred from the 1980s to 2000s on racetracks of the Japan Racing Association (JRA). The incidence of racehorse fractures during flat racing was approximately 1-2%. Fractures occurring during a race are more likely to occur in a forelimb. Fractures mostly occur at the third and fourth corners of oval tracks and on the home stretch. They also occur more frequently at the time of changing the leading limb. Comparison of the incidence of racing fracture between before and after reconstruction of the geometrical configuration of a racetrack revealed that there was an outstanding reduction in the number of serious fractures in the year before and after reconstruction. It was postulated that the improvement in racing time, possibly influenced by reconstructing the geometrical configuration of the racetrack, was connected to the reduction in the number of fractures. Of non-biological race- and course-related factors, type of course (dirt or turf), track surface condition, differences between racecourses, and racing distance significantly influence racing time. By using an instrumented shoe, vertical ground reaction forces (VGRFs) on the forelimb during galloping and the relationships between a rough dirt and woodchip track surface and a smooth dirt and woodchip surface were measured. Relating the incidence of racing fractures with track conditions in general showed that track surface has significant effects on the incidence of fracture, with the incidence of fractures increasing as track conditions on dirt worsen and a tendency for the incidence of fractures to decrease as track conditions on turf worsen. It seems probable that track condition in general may affect the incidence of fracture. The incidence of fracture in horses during both racing and training decreased as the years progressed. abstract_id: PUBMED:32380681 Patterns of Racing and Career Duration of Racing Greyhounds in New Zealand. The welfare and wastage of racing greyhounds is a topic of public concern. Little is published about the racing patterns of these dogs in New Zealand. The aim of this study is to describe the pattern of greyhound racing in New Zealand. Data on all race starts between 1 August 2011 and 25 March 2018 were supplied by Greyhound Racing New Zealand. A cohort was created containing dogs that had a racing career between 1 August 2013 and 31 July 2017. Data were collated within a customized Microsoft Access database from electronic records of all racing starts for every dog within the 2013-2016 racing seasons. For this cohort of racing dogs, there were 97,973 race starts across 22,277 races involving 2393 individual greyhounds. The median number of days between racing starts was 7 days (inter-quartile range (IQR): 4-10 days). The median career length was 424 days (IQR: 206-647 days) and the median number of racing starts throughout a racing career was 35 (IQR: 16-59 starts). Dogs of similar ability finished their career at a similar age. abstract_id: PUBMED:15388550 Injuries in professional motor car racing drivers at a racing circuit between 1996 and 2000. Background: Research on injuries in racing drivers is limited. Objective: To gain more information about such injuries. Methods: Injuries recorded during and after races between 1996 and 2000 were investigated using the medical charts from the circuit medical centre at Fuji Speedway, which is one of the biggest circuits in Japan. Races were in either single seat/formula cars or saloon cars. Results: Data were obtained from 39 races in single seat cars (1030 participating cars) and 42 races in saloon cars (1577 cars). Fifty injuries were recorded during the single seat car races, and 62 during the saloon car races (injury rate 1.2 per 1000 competitors per race and 0.9 per 1000 competitors per race respectively). Thirteen injuries were recorded after the race, 12 of them in saloon car racing. Bruises were the major injury in single seat car racing (58%). Lower limb bruising was more common than upper limb bruising. Most of the injuries in saloon car racing (53.2%) were neck sprains. The incidence of concussion was high in both groups compared with other high risk sports. Conclusions: There were some differences in injuries between the two types of car. No serious injuries occurred except for one death. However, the driver's body is subjected to large forces in a crash, hence the high incidence of concussion. The injuries recorded after the race emphasise that motor racing is a demanding sport. abstract_id: PUBMED:25393770 Heritability of racing performance in the Australian Thoroughbred racing population. Performance data for 164,046 Thoroughbreds entered in a race or official barrier trial in Australia were provided by Racing Information Services Australia. Analyses estimating the heritability for a range of racing performance traits using a single-trait animal model were performed using ASREML-R. Log of cumulative earnings (LCE; 0.19 ± 0.01), log of earnings per race start (0.23 ± 0.02) and best race distance (0.61 ± 0.03) were all significantly heritable. Fixed effects for sex were significant (P &lt; 0.001) for all performance traits aside from LCE (P = 0.382). With the exception of annual earnings, trainer was also significant for all performance traits. As the application of modern genetic selection methodologies continues to gain popularity in the racing industry, contemporary heritability estimates from the current population of Thoroughbreds will play a vital role in identifying which traits are better suited to selection and in the development of more accurate genomic evaluations for racing performance. abstract_id: PUBMED:36144140 Static Torque Analysis of Micro Claw-Pole Stepper Motor Based on Field-Circuit Combination. Because of the complexity of the structure and magnetic circuit of the micro claw-pole stepper motor, it is difficult to analyze this kind of motor quickly and accurately. Therefore, it takes a lot of time to accurately model and use the three-dimensional finite element analysis method to accurately analyze the motor. Regarding the three-dimensional finite element method, the equivalent magnetic circuit method analysis is fast, but the accuracy is not high. In order to better study the performance of this kind of micro claw-pole motor and reduce the cost of optimization time, this paper adopts the method of combining the equivalent magnetic circuit method and three-dimensional finite element analysis to analyze the static torque characteristics of the micro permanent magnet claw-pole stepper motor. Firstly, the equivalent magnetic circuit method is used for theoretical analysis, the air-gap flux equation is deduced, and the relationship between the electromagnetic torque and the geometric parameters of the motor is deduced. Then, the three-dimensional finite element simulation results are substituted into the relevant formulas defined by the equivalent magnetic circuit method to obtain a more accurate electromagnetic torque. Finally, through the comparison and analysis of the experimental data, simulation data, and theoretical calculation values, the error rate of the derived motor torque is within 8.5%. The micromotor studied in this paper is optimized, and the holding torque is increased by 12.5% under the premise that the braking torque does not change much. The simulation calculation time is effectively shortened, the analysis difficulty is reduced, and the calculation accuracy is high. It is shown that the method combining the equivalent magnetic circuit method and the three-dimensional finite element analysis method is suitable for preliminary design research and optimization calculation of the micro claw-pole stepper motor. abstract_id: PUBMED:36182701 Extrasynaptic signaling enables an asymmetric juvenile motor circuit to produce symmetric undulation. In many animals, there is a direct correspondence between the motor patterns that drive locomotion and the motor neuron innervation. For example, the adult C. elegans moves with symmetric and alternating dorsal-ventral bending waves arising from symmetric motor neuron input onto the dorsal and ventral muscles. In contrast to the adult, the C. elegans motor circuit at the juvenile larval stage has asymmetric wiring between motor neurons and muscles but still generates adult-like bending waves with dorsal-ventral symmetry. We show that in the juvenile circuit, wiring between excitatory and inhibitory motor neurons coordinates the contraction of dorsal muscles with relaxation of ventral muscles, producing dorsal bends. However, ventral bending is not driven by analogous wiring. Instead, ventral muscles are excited uniformly by premotor interneurons through extrasynaptic signaling. Ventral bends occur in anti-phasic entrainment to activity of the same motor neurons that drive dorsal bends. During maturation, the juvenile motor circuit is replaced by two motor subcircuits that separately drive dorsal and ventral bending. Modeling reveals that the juvenile's immature motor circuit is an adequate solution to generate adult-like dorsal-ventral bending before the animal matures. Developmental rewiring between functionally degenerate circuit solutions, which both generate symmetric bending patterns, minimizes behavioral disruption across maturation. Answer: Yes, changing the configuration of a motor racing circuit can make it safer. A study assessing the pattern of injuries presenting to a racing circuit medical center before and after the construction of two chicanes into the track found that the risk of a severe injury for a car driver decreased from 0.1% to 0.03% after the changes were made (PUBMED:15728695). This suggests that the introduction of chicanes, which are artificial features added to a circuit to slow down cars, can improve the safety of the racing circuit for car drivers by reducing the risk of injury. Additionally, another study on the epidemiology of racing injuries in Thoroughbred racehorses found that the reconstruction of the geometrical configuration of a racetrack led to an outstanding reduction in the number of serious fractures (PUBMED:27703403). This indicates that circuit configuration changes can also have a positive impact on reducing injuries in horse racing.
Instruction: Does inbreeding promote evolutionary reduction of flower size? Abstracts: abstract_id: PUBMED:22859658 Does inbreeding promote evolutionary reduction of flower size? Experimental evidence from Crepis tectorum (Asteraceae). Premise Of The Study: Small, autogamous flowers have evolved repeatedly in the plant kingdom. While much attention has focused on the mechanisms that promote the shift to autogamy, there is still a paucity of information on the factors that underlie the reduction of flower size so prevalent in selfing lineages. In this study of Crepis tectorum, I examine the role of inbreeding, acting alone or together with selection, in promoting evolutionary reduction of flower size. Methods: Experimental crosses were performed to produce progeny populations that differed in inbreeding and (or) selection history. Progenies were grown in two different environments and scored for flower size and other characters. Key Results: Inbreeding depressed flower and fruit size, but also caused changes in flowering time and the number of heads produced. Despite some inconsistencies in the results for the last progeny generation, the decline in flower size was persistent over generations, consistent across environments, and similar in magnitude to the effects of selection for small flower size and the floral reduction inferred to have taken place during the shift toward autogamy within the study species. The floral size reduction was largely independent of changes in overall vigor, and there was considerable adaptive potential in flower size (measured by sib analyses and parent-offspring comparisons) after inbreeding. Conclusions: The results of this study indicate that inbreeding can promote evolutionary reduction of flower size and highlight the close, persistent association between flower and fruit size in the study species. abstract_id: PUBMED:33988502 Inbreeding in a dioecious plant has sex- and population origin-specific effects on its interactions with pollinators. We study the effects of inbreeding in a dioecious plant on its interaction with pollinating insects and test whether the magnitude of such effects is shaped by plant individual sex and the evolutionary histories of plant populations. We recorded spatial, scent, colour, and rewarding flower traits as well as pollinator visitation rates in experimentally inbred and outbred, male and female Silene latifolia plants from European and North American populations differing in their evolutionary histories. We found that inbreeding specifically impairs spatial flower traits and floral scent. Our results support that sex-specific selection and gene expression may have partially magnified these inbreeding costs for females, and that divergent evolutionary histories altered the genetic architecture underlying inbreeding effects across population origins. Moreover, the results indicate that inbreeding effects on floral scent may have a huge potential to disrupt interactions among plants and nocturnal moth pollinators, which are mediated by elaborate chemical communication. abstract_id: PUBMED:21900210 Rapid decrease in horn size of bighorn sheep: environmental decline, inbreeding depression, or evolutionary response to trophy hunting? There are numerous examples demonstrating that selection has greatly influenced phenotypes in wild-harvested species. Here, a significant reduction in horn size in trophy desert bighorn sheep rams over 30 years in a reintroduced population in Aravaipa Canyon, Arizona is documented. After examining the potential effects of a detrimental change in the environment, inbreeding depression, and hunter-caused evolutionary change, it appears that environmental deterioration, apparently from the effects of drought, may be a major cause of the decline in horn size. In particular, the reduction in ram horn size is positively associated with reduced winter lifetime rainfall over the 3 decades. Over the same period, the demographic indicator lamb-to-ewe ratio has also declined in the Aravaipa population. On the other hand, lamb-to-ewe ratio has not declined statewide in Arizona, and the population size in Aravaipa appears to be increasing, suggesting local- and trait-specific effects. Using a theoretical context, neither inbreeding depression nor hunter selection by themselves appear to the sole causes of the lower horn size. However, some combination of environmental factors, inbreeding depression, and hunter selection may have caused the decrease in observed horn size. It is not clear what management actions might be successful in countering the environmental effects on horn size, but supplemental feeding and cattle removal are suggested while translocation is suggested to counter the effects of inbreeding depression and reduced hunting and translocation are suggested to counter the effects of hunter selection. abstract_id: PUBMED:25475165 Inbreeding impact on litter size and survival in selected canine breeds. Data obtained from the French Kennel Club and the Fichier National Canin were used to estimate the effect of inbreeding on average litter size and survival in seven French breeds of dog. Depending on the breed, litter sizes were 3.5-6.3 puppies and longevities were 7.7-12.2 years. Estimated heritabilities were 6.0-10.9% for litter size and 6.1-10.1% for survival at 2 years of age. Regression coefficients indicated a negative effect of inbreeding on both individual survival and litter size. Although the impact of baseline inbreeding within breeds appears to be limited, the improper mating of close relatives will reduce biological fitness through significant reduction of litter size and longevity. abstract_id: PUBMED:25081866 Effective population size and inbreeding depression on litter size in rabbits. A case study. The purpose of this study is to use demographic and litter size data on four Spanish maternal lines of rabbits (A, V, H and LP), as a case study, in order to: (i) estimate the effective population size of the lines, as a measure of the rate of increase of inbreeding, and (ii) study whether the inbreeding effect on litter size traits depends on the pattern of its accumulation over time. The lines are being selected for litter size at weaning and are kept closed at the same selection nucleus under the same selection and management programme. The study considered 47,794 l and a pedigree of 14,622 animals. Some practices in mating and selection management allow an increase of the inbreeding coefficient lower than 0.01 per generation in these lines of around 25 males and 125 females. Their effective population size (Ne) was around 57.3, showing that the effect of selection, increasing the inbreeding, was counterbalanced by the management practices, intended to reduce the rate of inbreeding increase. The inbreeding of each individual was broken down into three components: old, intermediate and new inbreeding. The coefficients of regression of the old, intermediate and new inbreeding on total born (TB), number born alive (NBA) and number weaned (NW) per litter showed a decreasing trend from positive to negative values. Regression coefficients significantly different from zero were those for the old inbreeding on TB (6.79 ± 2.37) and NBA (5.92 ± 2.37). The contrast between the coefficients of regression between the old and new inbreeding were significant for the three litter size traits: 7.57 ± 1.72 for TB; 6.66 ± 1.73 for NBA and 5.13 ± 1.67 for NW. These results have been interpreted as the combined action of purging unfavourable genes and artificial selection favoured by the inbreeding throughout the generations of selection. abstract_id: PUBMED:15316556 The relationship between flower size, inbreeding coefficient and inferred selfing rate in British Euphrasia species. The genus Euphrasia in Britain comprises a taxonomically complex group of self-compatible, morphologically similar, hemi-parasitic, annual plant species of high conservation importance. The 19 diploid and tetraploid taxa currently recognised show striking variation in flower size. The objective of this paper is to determine whether a relationship exists between flower size and breeding system within Euphrasia. Following a survey of flower size variation among the 19 taxa, seven diploid populations, encompassing a broad range of flower sizes, were selected for detailed study. Four nuclear microsatellite loci were used to estimate the inbreeding coefficient Fis within each population. Fis values varied from to 0.17-0.77 and showed a significant, negative correlation with flower size. These results are best explained as the consequence of variation in selfing rate among the Euphrasia populations, with selfing rate increasing as flower size decreases. The potential factors influencing breeding system evolution in Euphrasia are discussed, together with the role of autogamy in generating taxonomic complexity and facilitating lineage differentiation within the genus. abstract_id: PUBMED:31889300 Season-dependent effect of cleistogamy in Lamium amplexicaule: flower type origin versus inbreeding status. Premise: Inbreeding depression is traditionally considered a major factor favoring outcrossing in hermaphrodites. Recent experiments have shown that environmental conditions can influence the magnitude of inbreeding depression, but their relevance in natural populations is unclear. To investigate this, we studied the cleistogamous species Lamium amplexicaule, an annual species with individuals experiencing either spring or autumn environmental conditions. In this species, the proportion of cleistogamous/chasmogamous flowers changes according to seasonal cues (e.g., temperature, photoperiod). Our hypothesis was that the plasticity of cleistogamy is an adaptation to seasonal fitness variation in different flower progenies. Methods: To test this hypothesis, we produced the three possible types of progenies through controlled crosses: (1) selfed progeny from cleistogamous flowers; (2) chasmogamous selfed progeny; and (3) chasmogamous outcrossed progeny. Progeny fitness was then measured in a common garden in the two reproduction seasons (autumn and spring). Results: The results showed that season had a major impact on fitness. The fitness of the different progeny types changed across seasons, indicating that the effect of cleistogamy on progeny performance is season-dependent, consistent with a previous study in a similar environment. Surprisingly, the flower from which the progeny issued (cleistogamous or chasmogamous) had more impact on fitness than the inbred status of the progeny (selfed versus outcrossed), suggesting a potential role of epigenetic processes. Conclusions: The observed patterns of variation were not consistent either with adaptation to environment-dependent inbreeding depression or to variation in resource availability, but were possibly consistent with adaptation to seasonal pollinator activity. abstract_id: PUBMED:17519972 Analysis of inbreeding depression in the first litter size of mice in a long-term selection experiment with respect to the age of the inbreeding. An understanding of inbreeding and inbreeding depression are important in evolutionary biology, conservation genetics, and animal breeding. A new method was developed to detect departures from the classical model of inbreeding; in particular, it investigated differences between the effects of inbreeding in recent generations from that in the more distant past. The method was applied in a long-term selection experiment on first-litter size in mice. The total pedigree included 74,630 animals with approximately 30,000 phenotypic records. The experiment comprised several different lines. The highest inbreeding coefficients (F) within a line ranged from 0.22 to 0.64, and the average effective population size (N(e)) was 58.1. The analysis divided F into two parts, corresponding to the inbreeding occurring in recent generations ('new') and that which preceded it ('old'). The analysis was repeated for different definitions of 'old' and 'new', depending on length of the 'new' period. In 15 of these tests, 'new' inbreeding was estimated to cause greater depression than 'old'. The estimated depression ranged from -11.53 to -0.79 for the 'new' inbreeding and from -5.22 to 15.51 for 'old'. The difference was significant, the 'new' period included at least 25 generations of inbreeding. Since there were only small differences in N(e) between lines, and near constant N(e) within lines, the effect of 'new' and 'old' cannot be attributed to the effects of 'fast' versus 'slow' inbreeding. It was concluded that this departure from the classical model, which predicts no distinction between this 'old and 'new' inbreeding, must implicate natural selection and purging in influencing the magnitude of depression. abstract_id: PUBMED:36164840 Continuous inbreeding affects genetic variation, phenology, and reproductive strategy in ex situ cultivated Digitalis lutea. Premise: Ex situ cultivation is important for plant conservation, but cultivation in small populations may result in genetic changes by drift, inbreeding, or unconscious selection. Repeated inbreeding potentially influences not only plant fitness, but also floral traits and interactions with pollinators, which has not yet been studied in an ex situ context. Methods: We studied the molecular genetic variation of Digitalis lutea from a botanic garden population cultivated for 30 years, a frozen seed bank conserving the original genetic structure, and two current wild populations including the source population. In a common garden, we studied the effects of experimental inbreeding and between-population crosses on performance, reproductive traits, and flower visitation of plants from the garden and a wild population. Results: Significant genetic differentiation was found between the garden population and the wild population from which the seeds had originally been gathered. After experimental selfing, inbreeding depression was only found for germination and leaf size of plants from the wild population, indicating a history of inbreeding in the smaller garden population. Moreover, garden plants flowered earlier and had floral traits related to selfing, whereas wild plants had traits related to attracting pollinators. Bumblebees visited more flowers of outbred than inbred plants and of wild than garden plants. Conclusions: Our case study suggests that high levels of inbreeding during ex situ cultivation can influence reproductive traits and thus interactions with pollinators. Together with the effects of genetic erosion and unconscious selection, these changes may affect the success of reintroductions into natural habitats. abstract_id: PUBMED:34894276 Abscisic acid mediates the reduction of petunia flower size at elevated temperatures due to reduced cell division. Main Conclusion: Elevated temperatures suppress cell division in developing petunia buds leading to smaller flowers, mediated by ABA. Flower size is one of the most important showy traits in determining pollinator attraction, and a central factor determining the quality of floricultural products. Whereas the adverse effects of elevated temperatures on showy traits have been described in detail, its underlining mechanisms is poorly understood. Here, we investigated the physiological mechanism responsible for the reduction of flower size in petunia under elevated temperatures. We found that the early stages of flower-bud development were most sensitive to elevated temperatures, resulting in a drastic reduction of flower diameter that was almost independent of flower load. We demonstrated that the temperature-mediated flower size reduction occurred due to a shorter growth period, and a lower rate of corolla cell division. Consistently, local application of cytokinin, a phytohormone that promotes cell division, resulted in recovery of flower dimensions when grown under elevated temperatures. Hormone analysis of temperature-inhibited flower buds revealed no significant changes in levels of cytokinin, and a specific increase of abscisic acid (ABA) levels, known to inhibit cell division. Moreover, local application of ABA on flower buds caused a reduction of flower dimensions as a result of lower levels of cell division, suggesting that ABA mediates the reduction of flower size at elevated temperatures. Taken together, our results shed light on the mechanism by which elevated temperatures decrease petunia flower size, and show that temperature-mediated reduction of flower size can be alleviated by increasing the cytokinin/ABA ratio. Answer: Yes, inbreeding can promote evolutionary reduction of flower size. In a study of Crepis tectorum, inbreeding was found to depress flower and fruit size, and this decline in flower size was persistent over generations, consistent across environments, and similar in magnitude to the effects of selection for small flower size (PUBMED:22859658). Additionally, inbreeding in a dioecious plant, Silene latifolia, was shown to specifically impair spatial flower traits and floral scent, which could potentially disrupt interactions with pollinators (PUBMED:33988502). Furthermore, the relationship between flower size, inbreeding coefficient, and inferred selfing rate was investigated in British Euphrasia species, revealing a significant negative correlation between flower size and inbreeding coefficient, suggesting that selfing rate increases as flower size decreases (PUBMED:15316556). These findings indicate that inbreeding can indeed contribute to the evolutionary reduction of flower size in plants.
Instruction: Blunt solid organ injury: do adult and pediatric surgeons treat children differently? Abstracts: abstract_id: PUBMED:18784587 Blunt solid organ injury: do adult and pediatric surgeons treat children differently? Background: The management of blunt solid organ injury (SOI) in children may differ depending on the treating facility. These differences, however, may not reflect the individual surgeon's treatment philosophy. To investigate differences in management, adult and pediatric surgeons were presented the same hypothetical pediatric trauma "patient" and asked a series of treatment questions. Methods: By using an internet-based survey, members of American Association for the Surgery of Trauma, American Academy of Pediatrics, and Eastern Association of the Surgery of Trauma were invited to participate anonymously. Surgeons who "never or rarely saw children" and those who "would transfer the patient to another facility" were excluded. Demographic, educational, and practice data were collected. Scenarios of increasing complexity were presented with CT images (isolated SOI, multiple SOI, and SOI with intracranial hemorrhage [ICH]). For each scenario, respondents were asked if they would initially manage the patient nonoperatively, pursue angiography, or operate. Scenarios were repeated with the addition of a CT "blush." For patients managed nonoperatively, respondents were asked their transfusion threshold needed to operate. Responses were compared using exact chi tests and risk ratios. Results: Two hundred eighty-one surgeons (114 pediatric, 167 adult) were included. For all scenarios, adult surgeons were more likely to operate or pursue embolization than their pediatric colleagues (RR: 8.6 SOI, 14.8 multiple SOI, 17.9 SOI with ICH). Adult surgeons were also more likely to consider any transfusion a failure (13.3% vs. 1.2%, p &lt; 0.01) and had a much lower transfusion threshold. Conclusion: When presented with the identical clinical scenario, adult trauma surgeons are less likely than pediatric surgeons to pursue nonoperative management of pediatric solid organ injuries and are more conservative in their willingness to transfuse. abstract_id: PUBMED:35716446 Discreet Values of Shock Index Pediatric Age-Adjusted (SIPA) to Predict Intervention in Children With Blunt Organ Injuries. Introduction: Elevated shock index pediatric age-adjusted (SIPA) has been shown to be associated with the need for both blood transfusion and intervention in pediatric patients with blunt liver and spleen injuries (BLSI). SIPA has traditionally been used as a binary value, which can be classified as elevated or normal, and this study aimed to assess if discreet values above SIPA cutoffs are associated with an increased probability of blood transfusion and failure of nonoperative management (NOM) in bluntly injured children. Materials And Methods: Children aged 1-18 y with any BLSI admitted to a Level-1 pediatric trauma center between 2009 and 2020 were analyzed. Blood transfusion was defined as any transfusion within 24 h of arrival, and failure of NOM was defined as any abdominal operation or angioembolization procedure for hemorrhage control. The probabilities of receiving a blood transfusion or failure of NOM were calculated at different increments of 0.1. Results: There were 493 patients included in the analysis. The odds of requiring blood transfusion increased by 1.67 (95% CI 1.49, 1.90) for each 0.1 unit increase of SIPA (P &lt; 0.001). A similar trend was seen initially for the probability of failure of nonoperative management, but beyond a threshold, increasing values were not associated with failure of NOM. On subanalysis excluding patients with a head injury, increased 0.1 increments were associated with increased odds for both interventions. Conclusions: Discreet values above age-related SIPA cutoffs are correlated with higher probabilities of blood transfusion in pediatric patients with BLSI and failure of NOM in those without head injury. The use of discreet values may provide clinicians with more granular information about which patients require increased resources upon presentation. abstract_id: PUBMED:27854071 Staged laparotomies based on the damage control principle to treat hemodynamically unstable grade IV blunt hepatic injury in an eight-year-old girl. Background: Severe blunt hepatic injury is a major cause of morbidity and mortality in pediatric patients. Damage control (DC) surgery has been reported to be useful in severely compromised children with hepatic injury. We applied such a technique in the treatment of a case of hemodynamically unstable grade IV blunt hepatic injury in an eight-year-old girl. This case is the first to use multimodal approaches including perihepatic packing, temporary closure of the abdominal wall with a plastic sheet, transarterial embolization (TAE), and planned delayed anatomical hepatic resection in a child. Case Presentation: An eight-year-old girl was run over by a motor vehicle and transferred to the emergency department of the local hospital. Her diagnoses were severe blunt hepatic injury (grade IV) with left femoral trochanteric fracture. No other organ injuries were observed. Because her hemodynamic state was stable under aggressive fluid resuscitation, she was transferred to our hospital for surgical management. On arrival at our institution about 4 h after the injury, her hemodynamic condition became unstable. Abdominal compartment syndrome also became apparent. Because her condition had deteriorated and the lethal triad of low BT, coagulopathy, and acidosis was observed, a DC treatment strategy was selected. First, emergent laparotomy was performed for gauze-packing hemostasis to control intractable bleeding from the liver bed, and the abdomen was temporarily closed with a plastic sheet with continuous negative pressure aspiration. Transarterial embolization of the posterior branch of the right hepatic artery was then carried out immediately after the operation. The lacerated right lobe of the liver was safely resected in a stable hemodynamic condition 2 days after the initial operation. Bleeding from the liver bed ceased without further need of hemostasis. She was transferred to the local hospital without any surgical complications on day 42 after admission. She had returned to her normal life by 3 months after the injury. Conclusion: The DC strategy was found to be effective even in a pediatric patient with hemodynamically unstable severe blunt hepatic injury. The presence of the deadly triad (hypothermia, coagulopathy, and acidosis) and abdominal compartment syndrome was an indication for DC surgery. abstract_id: PUBMED:33358417 Elevated pediatric age-adjusted shock-index (SIPA) in blunt solid organ injuries. Background/purpose: Shock index-pediatric age-adjusted (SIPA) is a proven tool to predict outcomes in blunt pediatric trauma. We hypothesized that an elevated SIPA in either the pre-hospital or in the emergency department (ED) would identify children with blunt liver or spleen injury (BLSI) needing a blood transfusion and those at risk for failure of non-operative management (NOM). Methods: Pediatric patients (1-18 years) in the ACS pediatric-TQIP database (2014-2016) with a BLSI were included. Patients were stratified by the need for a blood transfusion and/or abdominal operation. Results: A total of 3561 patients had BLSI, of which 4% received a blood transfusion, and 4% underwent an abdominal operation. Patients who received blood had higher ISS scores (27.0 vs. 5.0, p &lt; 0.001) and mortality (22% vs. 0.4%, p &lt; 0.001). Those who failed NOM had higher ISS scores (17.0 vs. 5.0, p &lt; 0.001) and mortality (7.9% vs. 0.9%, p &lt; 0.001). On multivariable regression, an elevated SIPA score in either pre-hospital or ED was significantly associated with blood transfusion (odds ratio (OR) 8.2, 95% confidence intervals (CI) 5.8-11.5, p &lt; 0.001) and failure of NOM (OR 2.3, CI 1.5-3.4, p &lt; 0.001). Conclusions: Hemodynamic instability, represented by an elevated pre-hospital or ED SIPA, accurately identifies children with BLSI who may need blood products or an operative intervention. Type Of Study: Retrospective Comparative Study. Level Of Evidence: Level III. abstract_id: PUBMED:37832307 Utilization of Angiography in Pediatric Blunt Abdominal Injury at Adult versus Pediatric Trauma Centers. Introduction: Angiography has been widely accepted as an adjunct in the management of blunt abdominal trauma in adults. However, the role of angiography with or without angioembolization (AE) is still being defined in pediatric solid organ injury. We sought to compare the use of angiography in solid organ injury (SOI) at pediatric trauma centers (PTCs) versus an adult trauma center (ATC) in a large metropolitan city. Methods: Data were drawn from a collaborative effort of three Trauma centers (one adult and two pediatric) in Atlanta, GA. All pediatric patients (ages 1-18) treated for SOI between January 1, 2016 and December 31, 2021 were included (n = 350). Registry data obtained included demographics, mechanism of injury, injury grade, injury severity score (ISS), procedures performed, and transfusions. Multivariate regression analysis was used to identify factors associated with angiography. Results: A total of 350 patients were identified during the study period with 101 treated at ATC and 249 treated at the two PTCs. The median age at the ATC was 17 y (IQR 16, 18) compared to nine (6, 13) at the PTCs. ISS was significantly higher at the ATC 22 (14, 34) compared to 16 (9, 22) at PTCs (P &lt; 0.001). At the ATC, 11 (10.9%) patients underwent angiography, 4 (4.9%) of which underwent AE compared to seven (2.8%) patients who underwent angiography and AE at PTCs. In the multivariate analysis, factors associated with angiography use included age (OR 1.44, 95% CI 1.09-1.90, P = 0.010) and ISS (OR 1.05, 95% CI 1.02-1.09, P = 0.004). Through setting, ATC versus PTC was significant on univariable analysis, it did not remain a significant predictor of angiography on multivariable regression. Conclusions: Our study demonstrated increased utilization of angiography for the management of SOI in pediatric patients treated at ATCs versus PTCs. On regression analysis, age and ISS remained significant predictors for angiography utilization, while setting (ATC versus PTC) was notably not a significant predictor. This data would suggest that differences in angiography utilization for pediatric SOI at PTCs and ATCs are influenced by differing patient populations (older and higher ISS), with otherwise uniform use. These findings provide a basis for future treatment algorithm revisions for pediatric blunt abdominal trauma that include angiography and provide support for the development of formal guidelines. abstract_id: PUBMED:11988661 Blunt renal trauma in children: experience with conservative management at a pediatric trauma center. Background: The authors reviewed the outcome for children with blunt renal injury managed with a nonoperative protocol at their pediatric trauma center. Methods: Fifty-five consecutive children aged 0.5 to 17 years with blunt renal injury managed over a 14-year period were reviewed. All patients were evaluated with computed tomographic scanning. Injuries were graded according to the American Association for the Surgery of Trauma Organ Injury Scale. Results: Forty-eight of 55 children (87%) were successfully managed nonoperatively. Overall, there were 5 grade I, 13 grade II, 18 grade III, 14 grade IV, and 5 grade V injuries. All children with grades I and III injuries were successfully managed nonoperatively. Two (6%) of these children required transfusion. Only four (29%) children with grade IV and three (60%) with grade V injuries required surgical interventions (one nephrostomy, six nephrectomies). Excluding patients with continuing hemorrhage, only 2 (14%) of 14 with high-grade injuries required surgical intervention (1 nephrostomy, 1 nephrectomy). Clearance of gross hematuria correlated with severity of injury and was prolonged in grade IV and V compared with grade I to III injuries (6.8 +/- 2.7 vs. 3.2 +/- 2.1 days, respectively; p &lt; 0.05). Fifty-one children (93%) available for follow-up were normotensive with normal renal function. Conclusion: These data support the use of conservative management for all grades in stable children with blunt renal injury. Transfusion requirements, operative rates, and outcome are consistent with other pediatric solid organ injuries. abstract_id: PUBMED:20976655 National survey on abdominal trauma practices of pediatric surgeons. Introduction: Pediatric blunt abdominal trauma is a frequent reason for hospital admission, but there are no established guidelines to assess these patients. Our study aims to evaluate the diagnostic process used by pediatric surgeons in Switzerland to evaluate abdominal trauma. Material And Methods: A scenario-based survey was carried out among Swiss pediatric surgeons. Respondents were asked to report on their management of children with blunt abdominal trauma. Results: The response rate was 46% (26 of 54). The clinical signs considered the most important were abdominal examination and palpation (100%), auscultation (81%), external genital exam (77%) and Glasgow Coma Scale (77%). The most frequent laboratory exams requested were urine analysis (100%), complete blood count (96%), liver function tests (85%) and coagulation tests (77%). 42% of the physicians asked for an abdominal ultrasound for every patient with blunt abdominal trauma. 58% reported that some patients do not need a CT scan despite anomalies in the initial workup. There were significant variations in the clinical assessment of patients with minor blunt abdominal trauma. Abnormal ultrasounds, but not abnormal liver functions tests, prompted clinicians to obtain CT scans. When evaluating the probability of organ injury after a full workup, clinicians relied on the results of the ultrasound but not on liver function tests. A normal CT scan did not appear to reassure physicians if the patient still presented with mild abdominal pain. Conclusions: There is a wide variation in the clinical assessment, request for laboratory tests and use of radiological exams among Swiss pediatric surgeons. Further studies are required on the evaluation of abdominal organ injuries in children. abstract_id: PUBMED:35124723 Characteristics and predictors of intensive care unit admission in pediatric blunt abdominal trauma. Background: Pediatric trauma patients sustaining blunt abdominal trauma (BAT) with intra-abdominal injury (IAI) are frequently admitted to the intensive care unit (ICU). This study was performed to identify predictors for ICU admission following BAT. Methods: Prospective study of children (&lt; 16 years) who presented to 14 Level-One Pediatric Trauma Centers following BAT over a 1-year period. Patients were categorized as ICU or non-ICU patients. Data collected included vitals, physical exam findings, laboratory results, imaging, and traumatic injuries. A multivariable hierarchical logistic regression model was used to identify predictors of ICU admission. Predictive ability of the model was assessed via tenfold cross-validated area under the receiver operating characteristic curves (cvAUC). Results: Included were 2,182 children with 21% (n = 463) admitted to the ICU. On univariate analysis, ICU patients were associated with abnormal age-adjusted shock index, increased injury severity scores (ISS), lower Glasgow coma scores (GCS), traumatic brain injury (TBI), and severe solid organ injury (SOI). With multivariable logistic regression, factors associated with ICU admission were severe trauma (ISS &gt; 15), anemia (hematocrit &lt; 30), severe TBI (GCS &lt; 8), cervical spine injury, skull fracture, and severe solid organ injury. The cvAUC for the multivariable model was 0.91 (95% CI 0.88-0.92). Conclusion: Severe solid organ injury and traumatic brain injury, in association with multisystem trauma, appear to drive ICU admission in pediatric patients with BAT. These results may inform the design of a trauma bay prediction rule to assist in optimizing ICU resource utilization after BAT. Study Design: Prognosis study. Level Of Evidence: 1: abstract_id: PUBMED:29995255 A rare mechanism of delayed splenic rupture following the nonoperative management of blunt splenic injury in a child. Background: Nonoperative management (NOM) has been established as the standard treatment for isolated blunt organ injury in hemodynamically stable pediatric patients. Although delayed splenic rupture or bleeding is a rare complication in NOM, it is an issue that many pediatric surgeons are greatly concerned about. We herein report a rare pediatric case concerning the mechanisms involved in delayed splenic rupture after NOM. Case Presentation: A 9-year-old boy with severe abdominal pain was transferred to our hospital. Twenty-one hours before the admission, he had been kicked in the region of his left lateral abdomen. Contrast-enhanced abdominal computed tomography revealed a severe intra-parenchymal hematoma and multiple lacerations of the spleen with a large amount of hemoperitoneum without active bleeding. His condition was diagnosed as a grade III injury on the AAST splenic injury scale. After fluid resuscitation, his vital signs became stable. The patient was treated with NOM in our intensive care unit. However, suddenly after defecation (72 h after the injury), he started complaining of severe abdominal pain and left shoulder pain. His blood pressure dropped to 70/35 mmHg, and he started to lose consciousness. Abdominal ultrasonography (US) revealed increased ascites. Fluid resuscitation and blood transfusion were performed. His symptoms and abdominal US findings suggested that splenic re-bleeding had caused delayed splenic rupture to occur. Emergency splenectomy was performed. The resected spleen was enlarged with a large parenchymal hematoma. The posterior-lateral side of the splenic capsule was ruptured. Conclusions: The mechanism of delayed splenic rupture in our case was considered to be the result of a tear in the subcapsular hematoma caused by stretching the splenocolic ligament related to a bowel movement during defecation. Although delayed splenic rupture or bleeding is unpredictable, it is very important to understand the mechanisms and to educate the family of the children with splenic injuries of the warning signs of delayed rupture or bleeding. abstract_id: PUBMED:37117078 Updated APSA Guidelines for the Management of Blunt Liver and Spleen Injuries. Background: Non-operative management of blunt liver and spleen injuries was championed initially in children with the first management guideline published in 2000 by the American Pediatric Surgical Association (APSA). Multiple articles have expanded on the original guidelines and additional therapy has been investigated to improve care for these patients. Based on a literature review and current consensus, the management guidelines for the treatment of blunt liver and spleen injuries are presented. Methods: A recent literature review by the APSA Outcomes committee [2] was utilized as the basis for the guideline recommendations. A task force was assembled from the APSA Committee on Trauma to review the original guidelines, the literature reported by the Outcomes Committee and then to develop an easy to implement guideline. Results: The updated guidelines for the management of blunt liver and spleen injuries are divided into 4 sections: Admission, Procedures, Set Free and Aftercare. Admission to the intensive care unit is based on abnormal vital signs after resuscitation with stable patients admitted to the ward with minimal restrictions. Procedure recommendations include transfusions for low hemoglobin (&lt;7 mg/dL) or signs of ongoing bleeding. Angioembolization and operative exploration is limited to those patients with clinical signs of continued bleeding after resuscitation. Discharge is based on clinical condition and not grade of injury. Activity restrictions remain the same while follow-up imaging is only indicated for symptomatic patients. Conclusion: The updated APSA guidelines for the management of blunt liver and spleen injuries present an easy-to-follow management strategy for children. Level Of Evidence: Level 5. Answer: Yes, adult and pediatric surgeons do treat children with blunt solid organ injury (SOI) differently. A study comparing the treatment approaches of adult and pediatric surgeons to identical clinical scenarios of pediatric trauma patients found that adult trauma surgeons are less likely than pediatric surgeons to pursue nonoperative management of pediatric solid organ injuries. Adult surgeons were more likely to operate or pursue embolization than their pediatric colleagues across various scenarios, including isolated SOI, multiple SOI, and SOI with intracranial hemorrhage (ICH). Additionally, adult surgeons were more likely to consider any transfusion a failure and had a lower transfusion threshold compared to pediatric surgeons (PUBMED:18784587). This difference in treatment philosophy may be influenced by the surgeons' training and experience, as well as the typical patient populations they treat. Pediatric surgeons, who are more accustomed to treating children, may be more inclined towards nonoperative management, which aligns with the current trend in pediatric trauma care. Conversely, adult trauma surgeons may be more inclined towards operative interventions, reflecting practices more common in the adult population. The differences in management strategies underscore the importance of tailored approaches to pediatric trauma care and the potential need for guidelines that consider the unique physiological responses and recovery patterns of children with blunt solid organ injuries.
Instruction: Do skinfold measurements provide additional information to body mass index in the assessment of body fatness among children and adolescents? Abstracts: abstract_id: PUBMED:17545361 Do skinfold measurements provide additional information to body mass index in the assessment of body fatness among children and adolescents? Objectives: The purpose of this work was to validate the performance of age- and gender-specific BMI, triceps, and subscapular skinfold for the classification of excess of body fat in children and adolescents and to examine how much additional information these 2 skinfold measurements provide to BMI-for-age. Methods: The receiver operating characteristic curve was used to characterize the sensitivity and specificity of these 3 indices in classifying excess body fat. Percentage of body fat was determined by dual-energy radiograph absorptiometry. Both &gt; or = 85th and &gt; or = 95th percentile of percentage of body fat were used to define excess body fat. Data from the New York Pediatric Rosetta Body Composition Project were examined (n = 1196; aged 5-18 years). Results: For children aged 5 to 18 years, BMI-for-age, triceps skinfold-for-age, and subscapular skinfold-for-age each performed equally well alone in the receiver operating characteristic curves in the identification of excess body fat defined by either the 85th or 95th percentile of percentage of body fat by dual-energy radiograph absorptiometry. However, if BMI-for-age was already known and was &gt; 95th percentile, the additional measurement of skinfolds did not significantly increase the sensitivity or specificity in the identification of excess body fat. Conclusions: In contrast to the recommendations of expert panels, skinfold measurements do not seem to provide additional information about excess body fat beyond BMI-for-age alone if the BMI-for-age is &gt;95th percentile. abstract_id: PUBMED:17558589 The prediction of body fatness by BMI and skinfold thicknesses among children and adolescents. Background: Although the body mass index (BMI, kg m(-2)) is widely used as a measure of adiposity, it is a measure of excess weight, rather than excess body fat. It has been suggested that skinfold thicknesses be measured among overweight children to confirm the presence of excess adiposity. Objective: The present study examined the additional information provided by skinfold thicknesses on body fatness, beyond that conveyed by BMI-for-age, among healthy 5- to 18-years old (n = 1196). Methods And Procedures: Total body dual-energy X-ray absorptiometry (DXA) provided estimates of % body fat, and the sum of two skinfolds (triceps and subscapular) was used as an indicator of the overall skinfold thickness. Results: As assessed by the multiple R(2)s and the residuals of various regression models, information on the skinfold sum significantly ( p &lt; 0.001) improved the prediction of body fatness beyond that obtained with BMI-for-age. For example, the use of the skinfold sum, in addition to BMI-for-age, increased the multiple R(2)s for predicting % body fat from 0.81 to 0.90 (boys), and from 0.82 to 0.89 (girls). The use of the skinfold sum also reduced the overall prediction errors (absolute value of the residuals) for % body fat by 20-30%, but these reductions varied substantially by BMI-for-age. Among overweight children, defined by a BMI-for-age &gt;/=95th percentile, the skinfold sum reduced the predication errors for % body fat by only 7-9%. Conclusions: Although skinfold thicknesses, when used in addition to BMI-for-age, can substantially improve the estimation of body fatness, the improvement among overweight children is small. abstract_id: PUBMED:19720664 The validity of BMI as an indicator of body fatness and risk among children. Purpose Of Review: Although the prevalence of childhood obesity, as assessed by BMI (kg/m(2)), has tripled over the last 3 decades, this index is a measure of excess weight rather than excess body fatness. In this review we focus on the relation of BMI to body fatness and health risks, particularly on the ability of BMI for age &gt;or=95th Centers for Disease Control and Prevention [CDC] percentile to identify children who have excess body fatness. We also examine whether these associations differ according to race/ethnicity and whether skinfold and circumference measurements provide additional information on body fatness or health risks. Results: The accuracy of BMI varies according to the degree of body fatness. Among relatively fat children, BMI is a good indicator of excess adiposity, but differences in the BMIs of relatively thin children can be largely due to fat-free mass. Although the accuracy of BMI in identifying children with excess body fatness depends on the chosen cut points, we have found that a high BMI-for-age has a moderately high (70%-80%) sensitivity and positive predictive value, along with a high specificity (95%). Children with a high BMI are much more likely to have adverse risk factor levels and to become obese adults than are thinner children. Skinfold thicknesses and the waist circumference may be useful in identifying children with moderately elevated levels of BMI (85th to 94 th percentiles) who truly have excess body fatness or adverse risk factor levels. Conclusion: A BMI for age at &gt;or=95th percentile of the CDC reference population is a moderately sensitive and a specific indicator of excess adiposity among children. abstract_id: PUBMED:23410599 The abilities of body mass index and skinfold thicknesses to identify children with low or elevated levels of dual-energy X-ray absorptiometry-determined body fatness. Objective: To examine the accuracies of body mass index (BMI) and skinfold thicknesses in classifying the body fatness of 7365 8- to 19-year-old subjects in a national sample. Study Design: We used percent body fat determined by dual-energy x-ray absorptiometry (PBFDXA) between 1999 and 2004. Categories of PBFDXA and the skinfold sum (triceps plus subscapular) were constructed so that that numbers of children in each category were similar to the number in each of 5 BMI categories based on the Centers for Disease Control and Prevention growth charts. Results: Approximately 75% of the children and adolescents who had a BMI-for-age ≥ 95th percentile (considered obese) had elevated body fatness, but PBFDXA levels were more variable at lower BMIs. For example, only 41% of the boys who had a BMI &lt; 25th percentile, had a similarly low PBFDXA. The use of the skinfold sum, rather than BMI, slightly improved the identification of elevated levels of body fatness among boys (P = .03), but not among girls (P &gt; .10). A low sum of the triceps and subscapular skinfold thicknesses was a better indicator of low PBFDXA than was a low BMI, but differences were smaller among children with greater levels of body fatness. Among girls who had a PBFDXA above the median, for example, BMI and the skinfold sum were correlated similarly (r = 0.77-0.79) with body fatness. Conclusions: Both BMI and skinfold thicknesses are fairly accurate in identifying children who have excess body fatness. In contrast, if the goal is to identify children who have low body fatness, skinfold thicknesses would be preferred. abstract_id: PUBMED:9725657 Skinfold thickness measurements are better predictors of body fat percentage than body mass index in male Spanish children and adolescents. Objective: To develop equations, from some simple anthropometric measurements, for the prediction of body density from underwater weighing in male spanish children and adolescents. Subjects: One hundred and seventy-five males, aged 7.0-16.9 y, participated in this study, they were recruited from primary and secondary schools. Measurements: Body weight and height and skinfold thicknesses by anthropometry, body density by underwater weighing. Results: Correlations between body density and body mass index (BMI) were high until 14.0-16.9y. Correlations between body density and log sigma 4 skinfolds were higher than those with BMI at all ages. Log sigma 4 skinfolds explained between 61% (14.0-16.9 y) and 68% (11.0-13.9 y) of the body density variance. Regression equations for body density from BMI and triceps skinfold thickness explained between 51% (14.0-16.9y) and 68% (7.0-10.9 y) of the body density variance. Conclusions: The best estimators of body density in the children and adolescents studied were log sigma 4 skinfolds and a combination of BMI and triceps skinfold. abstract_id: PUBMED:14985694 Assessment of body fatness in young children using the skinfold technique and BMI vs body water dilution. Objectives: To compare body fatness estimated using a skinfold technique and body mass index (BMI) with body fatness estimated using the body water dilution method in healthy Swedish children 9 or 14 months of age. Methods: Total body fat (TBF) was calculated from total body water, estimated using the doubly labelled water method, and body weight. When expressed in per cent of body weight, these estimates (%TBF-BWD) represented reference values for body fatness. Body fatness was also calculated from skinfold thickness (%TBF-SFT) and as BMI. The children were ranked and grouped into five groups with an increasing level of body fatness using BMI, %TBF-SFT and %TBF-BWD, respectively. Subjects: A total of 30 infants 9 months of age and 29 children 14 months of age. Results: On average, the children (n=59) had a BMI=17.5+/-1.6 kg/m(2) and contained 27.8+/-3.7 %TBF-SFT and 29.1+/-4.4 %TBF-BWD. %TBF-BWD minus %TBF-SFT was=1.35+/-4.06%. By measuring %TBF-SFT or BMI, about 35% of the children could be classified in the correct group with respect to body fatness. Serious misclassification (ie two or more groups too high or too low) was, however, more common for %TBF-SFT (29%) than for BMI (17%). Conclusions: The capacity of BMI to place children in the correct body fatness group was poor although not quite as poor as the corresponding capacity of the skinfold technique. The latter method produced inaccurate and imprecise estimates of body fatness. abstract_id: PUBMED:33088457 Patterns of Body Mass Index, Percentage Body Fat, and Skinfold Thicknesses in 7- to 18-Year-Old Children and Adolescents from Indonesia. Background: Body mass index (BMI) and skinfold thickness are widely used to evaluate body composition. Information on patterns of skinfold thickness may help to understand changes in body composition during growth. The objectives of this study were to observe patterns of BMI, percentage body fat (%BF), and skinfold thicknesses of Indonesian children and adolescents aged 7-18 years. Methods: Weight, height, and four skinfold thicknesses were measured in 2104 school children (924 boys, 1,180 girls) aged 7-18 years from Yogyakarta between 2015 and 2018. BMI and ratios between central and peripheral skinfold thicknesses were determined. %BF was predicted using the equation of Slaughter et al. Data were analyzed using analysis variance (ANOVA), independent sample t-test, and partial correlation (SPSS version 20.0). Results: At 7-18 years, boys and girls showed a comparable gain in BMI. The comparable gain in %BF between boys and girls only occurred till age 10 and total skinfolds till age 11 years. While, %BF and skinfold thicknesses were higher in girls at 12-17 years, central to peripheral skinfold ratio were higher in boys. Partial correlation analyses showed that all skinfold thickness parameters and %BF were significantly correlated with BMI (P &lt; 0.001; r = 0.19-0.87). Conclusions: The gain in BMI and skinfold thickness between the ages of 7 and 18 years occurred in age- and sex-specific patterns. Instead of comparable BMI, girls showed higher means of total skinfold thickness from age 12 years, while boys had higher central to peripheral adiposity ratio. abstract_id: PUBMED:12036802 Validity of body mass index compared with other body-composition screening indexes for the assessment of body fatness in children and adolescents. Background: Validation studies of height- and weight-based indexes of body fatness in children and adolescents have examined only small samples of school-age children. Objective: The objective was to validate the performance of age- and sex-specific body mass index (BMI) compared with the Rohrer index (RI) and weight-for-height in screening for both underweight and overweight in children aged 2-19 y. Design: Data from the third National Health and Nutrition Examination Survey (n = 11096) and a pooled data set from 3 studies that used dual-energy X-ray absorptiometry (n = 920) were examined. The receiver operating characteristic curve was used to characterize the sensitivity and specificity of these 3 indexes in classifying both underweight and overweight. Percentage body fat and total fat mass were determined by dual-energy X-ray absorptiometry. Subcutaneous fat was assessed on the basis of the average of triceps and subscapular skinfold thicknesses. Results: For children aged 2-19 y, BMI-for-age was significantly better than were weight-for-height and RI-for-age in detecting overweight when average skinfold thicknesses were used as the standard, but no differences were found in detecting underweight. When percentage body fat or total fat mass was used as the standard, BMI-for-age was significantly better than was RI-for-age in detecting overweight in children aged 3-19 y. No differences were found between BMI-for-age and weight-for-height in detecting overweight or underweight. Conclusion: For children and adolescents aged 2-19 y, the performance of BMI-for-age is better than that of RI-for-age in predicting underweight and overweight but is similar to that of weight-for-height. abstract_id: PUBMED:17556689 Adolescent skinfold thickness is a better predictor of high body fatness in adults than is body mass index: the Amsterdam Growth and Health Longitudinal Study. Background: Body mass index (BMI) during adolescence is predictive of BMI at adult age. However, BMI cannot distinguish between lean and fat body mass. Skinfold thickness may be a better predictor of body fatness. Objective: The objective of this study was to evaluate the relations between BMI and skinfold thickness during adolescence and body fatness during adulthood. Design: We included 168 men and 182 women from the Amsterdam Growth and Health Longitudinal Study, a prospective study that conducted 8 measurements of BMI and skinfold thickness between 1976 and 2000. BMI and skinfold thickness during adolescence were analyzed in relation to adult body fatness measured at a mean age of 37 y with dual-energy X-ray absorptiometry. Results: None of the boys and 1.7% of the girls were overweight at baseline, whereas the prevalence of high body fatness during adulthood was 29% in men and 32% in women. At the ages of 12-16 y, skinfold thickness was more strongly associated with adult body fatness than was BMI. Age-specific relative risks for a high level of adult body fatness varied between 2.3 and 4.0 in boys and between 2.1 and 4.3 in girls in the highest versus the lowest tertile of the sum of 4 skinfold thicknesses. For the highest tertile of BMI, the relative risk varied between 0.8 and 2.1 in boys and between 1.3 and 1.8 in girls. Conclusion: Skinfold thickness during adolescence is a better predictor of high body fatness during adulthood than is BMI during adolescence. abstract_id: PUBMED:11332928 Body mass index, triceps skinfold and waist circumference in screening for adiposity in male children and adolescents. Unlabelled: Simple anthropometric measurements and indices are the most commonly used tools for assessing body composition. Only a few papers have examined the sensitivity and specificity of the body mass index (BMI) against total body fat percentage (TBF%) from underwater weighing in children and adolescents. The objective of the study was to evaluate the screening performance of BMI, triceps skinfold thickness and waist circumference for excess TBF%. A total of 175 healthy volunteer males, aged 7.0-16.9 y, participated in the study. TBF% was measured using underwater weighing as the reference method. Receiver operating characteristic (ROC) curves were constructed to assess the value of the three anthropometric measurements as a screening measure for total adiposity. Sensitivity and specificity were calculated at several percentile cut-offs for BMI, triceps skinfold and waist circumference. The areas under the ROC curves were also calculated, and were 0.86 for BMI, 0.90 for triceps skinfold and 0.88 for waist circumference. The point on the ROC curve closest to 1 corresponded to the 70th percentile for BMI, to the 75th percentile for triceps skinfold, and to the 70th percentile for waist circumference. Conclusion: BMI, triceps skinfold and waist circumference predicted total fat content well in male children and adolescents. Answer: Yes, skinfold measurements can provide additional information to body mass index (BMI) in the assessment of body fatness among children and adolescents. Studies have shown that while BMI is a widely used measure of adiposity, it is essentially a measure of excess weight rather than excess body fat. Skinfold thickness measurements, when used in addition to BMI, can significantly improve the prediction of body fatness. For example, the use of the skinfold sum, in addition to BMI-for-age, increased the multiple R(2)s for predicting percentage body fat from 0.81 to 0.90 in boys, and from 0.82 to 0.89 in girls, and reduced the overall prediction errors for percentage body fat by 20-30% (PUBMED:17558589). However, the improvement in the estimation of body fatness among overweight children (defined by a BMI-for-age ≥95th percentile) by adding skinfold measurements is relatively small, reducing prediction errors for percentage body fat by only 7-9% (PUBMED:17558589). Additionally, for children aged 5 to 18 years with a BMI-for-age >95th percentile, the additional measurement of skinfolds did not significantly increase the sensitivity or specificity in the identification of excess body fat (PUBMED:17545361). Skinfold thickness measurements have been found to be better predictors of body fat percentage than BMI in male Spanish children and adolescents (PUBMED:9725657), and adolescent skinfold thickness is a better predictor of high body fatness in adults than is BMI (PUBMED:17556689). Furthermore, skinfold thicknesses may be useful in identifying children with moderately elevated levels of BMI (85th to 94th percentiles) who truly have excess body fatness or adverse risk factor levels (PUBMED:19720664). In contrast, if the goal is to identify children who have low body fatness, skinfold thicknesses would be preferred over BMI (PUBMED:23410599). However, it is important to note that the capacity of BMI to place children in the correct body fatness group was found to be poor, although not quite as poor as the corresponding capacity of the skinfold technique, which produced inaccurate and imprecise estimates of body fatness (PUBMED:14985694). In summary, skinfold measurements can provide valuable additional information to BMI in assessing body fatness among children and adolescents, particularly in cases where BMI is not above the 95th percentile. However, the benefit of adding skinfold measurements to BMI in overweight children is limited.
Instruction: Is initial success after Monarc(®) suburethral sling durable at mid-term evaluation? Abstracts: abstract_id: PUBMED:27032312 Is initial success after Monarc(®) suburethral sling durable at mid-term evaluation? Objectives: Suburethral sling is the gold standard treatment for stress urinary incontinence (SUI). Short-term cure rates are high, but only few studies are available for longer assessment after transobturator tape procedure. The objectives of this study were to assess mid-term functional outcome for Monarc(®) transobturator tape after initial success, and to identify risk factors for recurrence. Material And Methods: We conducted a single centre retrospective study (2004-2013) on consecutive women with SUI who underwent Monarc(®) transobturator tape procedure and were initially cured at the postoperative medical consultation. Pre- and postoperative data (age, weight, height, body mass index, hormonal status, surgical history, associated organ prolapse [Baden and Walker], associated urinary symptoms, postoperative complications [Clavien-Dindo]) were extracted from the electronic medical record. Subjective cure was defined by a score of zero from the ICIQ-SF questionnaire, no second intervention for recurrent SUI and no need for pads at latest news. Statistical analysis was performed using SAS(®) v9.3 (P&lt;0.05). Results: One hundred and thirty-three consecutive women underwent TOT Monarc(®) procedure, and 125 women were cured in the short-term. Among these women, 103 (82%) were available for mid-term evaluation. Sixty-four women (62%) had pure stress urinary incontinence. The mean follow-up period was 51 months [2-119]. At last follow-up, cure rate was 61%. Seventy-eight percent of women with recurrent urinary incontinence had SUI. Other women had mixed urinary incontinence (3/40), or de novo urgency (6/40). In univariate analysis, we could not identify pejorative prognostic factors for mid-term failure. Conclusion: In our experience, mid-term functional outcome after Monarc(®) transobturator tape procedure seems to deteriorate. After 4 years of follow-up, 61% of the women who were initially cured were still free from any leakage. Level Of Evidence: 4. abstract_id: PUBMED:25363659 Randomized prospective study comparing Monarc and Miniarc suburethral slings. Aim: Recently a miniature version of the Monarc suburethral sling has been introduced. This paper aims to evaluate the postoperative recovery, effectiveness and complications of these two types of suburethral sling. Methods: This was a prospective randomized series of 50 female patients with stress incontinence who underwent either a Monarc or Miniarc suburethral sling. Results: The only significant intraoperative difference was a shorter operation time for the Miniarc (18.8 vs 22.4 min). The success rates were similar at 6 weeks and 6 months. Conclusion: There were no significant differences between the Miniarc and Monarc, except for a significantly shorter Miniarc sling operating time. abstract_id: PUBMED:23618512 Treatment of female stress urinary incontinence using suburethral slings: comparative, retrospective, observational study of two surgical techniques. Objective: The treatment of female stress urinary incontinence has undergone a revolution in recent years due the emergence on the market of suburethral slings. The aim of this study is to compare two surgical techniques for treating stress urinary incontinence: Monarc™ (transobturator suburethral sling) and MiniArc(®) (single-incision suburethral mini-sling). Material And Methods: Comparative, retrospective, observational study from January 2005 to December 2011 on 317 women diagnosed with stress urinary incontinence. Of these, 214 were treated with the Monarc™ transobturator suburethral sling, and 103 were treated with the MiniArc(®) mini-sling. The results were treated with SPSS v.15 software, and the statistical significance was P≤.005. Results: The two patients groups were homogeneous in terms of age, number of births, presence of urgency urinary incontinence and prior hysterectomy. There were significant differences in hospital stay, surgical time and early complications in favour of the MiniArc(®), technique, but overall there were no significant differences in the late complications. Some 84% of the patients treated with the Monarc™ transobturator sling were cured compared with the 72% of patients in whom we implanted a MiniArc(®), a difference that was statistically significant. Conclusion: We need to perform more high-quality, prospective and randomised studies with larger numbers of patients and longer follow-up times to confirm or disprove the difference that we found in the success rate for the Monarc™ transobturator suburethral sling. abstract_id: PUBMED:16328116 Prospective randomized comparison of transobturator suburethral sling (Monarc) vs suprapubic arc (Sparc) sling procedures for female urodynamic stress incontinence. The aims of this trial were to compare the attendant complications and postoperative voiding function of the Monarc and suprapubic arc (Sparc) suburethral sling procedures. This is the first article in English literature to compare different suburethral sling procedures using prosthesis of the same material and the same weave. Sixty women with urodynamic stress incontinence alone or combined with pelvic prolapse less than International Continence Society stage II were randomly allocated to either the Monarc or Sparc group. A comparison of the peri- and postoperative results comprising complications and symptoms of voiding function revealed no significant differences between the two groups in a follow-up ranged from 6 to 14 months (median, 9 months). However, the nonsignificant increase in tendency of inadvertent vaginal perforation and disability/pain of thigh (Monarc 12.9 vs Sparc 0.0%, p=0.112 and Monarc 12.9 vs Sparc 0.0%, p=0.112, respectively) has clinical significance. Based on the analyses of the comparison of Blaivas obstruction nomograms, postoperative voiding function was not significantly different between the two surgical groups. We concluded that Monarc and Sparc suburethral taping proved to be equally safe and posed no remarkable impact on voiding function in a short term postoperative follow up. However, intraoperative urethrocystoscopy is recommended for safety in both the Monarc and Sparc procedures. abstract_id: PUBMED:22442522 A Comparative Evaluation of Suburethral and Transobturator Sling in 209 Cases with Stress Urinary Incontinence in 8 years. Aims And Objectives: To evaluate the outcome of suburethral and transobturator sling in treatment of female stress urinary incontinence in 209 cases from 2002 to 2010. The criteria evaluated were success, failure, complications, operating time, ease of the procedure, availability and cost effectivity of the sling. Design And Setting: A retrospective comparative study was carried out at a tertiary referral centre for female urinary incontinence. Material And Methods: A total of 209 patients (females from 27 to 79 years of age) with proven stress urinary incontinence were treated by suburethral transvaginal tape (TVT) type of slings in 101 cases and transobturator Monarc type of sling in 108 cases at the National Institute of Endoscopic Surgery and Urinary Incontinence Center, Mumbai, India, from March 2002 to June 2010. The maximum follow up was for 8 years. Results: The TVT type of slings had higher complication rate like needle entering the bladder, retention of urine necessitating to cut the tape in the center and had a success rate of 94.5% compared to Monarc/Trivedi obturator tape (TrOT) type of sling with outside-in technique, which had a negligible complication (less than 1%), pain in groin or leg movement that reduced in 6 weeks and a success rate of 95%. Specially, the Indian design Trivedi's stress urinary incontinence tape (TSUIT) and TrOT with reusable needles, the cost was only 15-20% of the international brands. abstract_id: PUBMED:30484955 Bladder dysfunction does not affect long-term success rate of the retropubic suburethral sling procedure in women with stress urinary incontinence. Objective: This study investigated the long-term success rate of retropubic suburethral sling in the treatment of women with stress urinary incontinence (SUI) and different bladder function. Methods: Surgical outcomes of women with SUI undergoing a retropubic suburethral sling procedure between October 1989 and November 2014 were analyzed retrospectively. Bladder function was evaluated in every patient preoperatively using videourodynamic studies. Patients were classified as having stable bladder, detrusor overactivity (DO), or detrusor underactivity (DU). Baseline urodynamic parameters were analyzed and long-term therapeutic outcomes were compared among these three groups. Results: In all, 403 patients underwent sling procedure for SUI. Of these, 291 (72.2%) had a stable bladder, 78 (19.4%) had DO, and 34 (8.4%) had DU. Mean (± SD) patient age was 60.2 ± 11.8 years, and the median follow-up was 97 months (interquartile range 24-325 months). Postoperatively, the overall continence rate was 83.4% (336/403). After surgery, 71 patients (17.6%) complained of dysuria, 14 (3.5%) complained of urgency incontinence, 25 (6.2%) had recurrent SUI requiring a secondary sling procedure, and urethrolysis was performed in 13 (3.2%). In the stable bladder, DO, and DU groups, the 5-year continence rates were 88.6%, 84.1%, and 79.4%, respectively (P = 0.59), whereas the 10-year continence rates were 83.8%, 72.9%, and 79.4%, respectively. Kaplan-Meier survival analysis indicated that the long-term success rate was similar among the three groups (P = 0.39). Conclusions: The overall continence rate was 83.4% and the 10-year continence rate was satisfactory in all bladder function subgroups. Treatment outcomes were the same for women with SUI but different bladder function. abstract_id: PUBMED:20171657 The urethral motion profile before and after suburethral sling placement. Purpose: We examined the effect of the Monarc suburethral sling on urethral mobility. Materials And Methods: We retrospectively studied the records of 54 consecutive women who received a Monarc suburethral sling between July 2005 and November 2008. All patients were examined by volume ultrasound preoperatively and at followup (average 0.7 years). Volume data sets were analyzed using post-processing software. Urethral mobility was described by vectors of movement from rest to a maximum Valsalva maneuver of 6 equidistant points marked evenly along the urethra from bladder neck (point 1) to external urethral meatus (point 6), as identified in the mid sagittal view. Measurements were made of point coordinates relative to the pubic symphysis dorsocaudal margin at rest and during maximal Valsalva maneuver. To determine the urethral motion profile we calculated mobility vectors of the 6 points using the formula, square root [(x(val) - x(rest))(2) + (y(val) - y(rest))(2)], where val represents the value during the Valsalva maneuver and rest represents the value at rest. We compared values before and after sling placement. Results: The subjective cure rate for stress urinary incontinence was 78% (42 cases). There was a statistically significantly decreased mobility at points 2 to 4, corresponding to the urethral central aspect (p = 0.002 to 0.018). No significant change in mobility was noted at the bladder neck and distal urethra (p = 0.39 to 0.89). Conclusions: Monarc suburethral sling placement decreases mid urethral mobility but does not seem to affect the bladder neck. abstract_id: PUBMED:37312329 Cost Analysis of Vaginal Mid Urethral Sling Suburethral Removal. Introduction: For cost analysis of mid urethral sling surgery there is a lack of data on costs associated with long-term complications. Therefore, we studied the costs related to a vaginal suburethral synthetic sling removal procedure over 2 consecutive years. Methods: From a prospectively maintained database of consecutive women who underwent 1 vaginal suburethral synthetic sling removal only, we analyzed the cost of suburethral synthetic sling removal for 2013 and 2014. Costing data were obtained for operating room expenses, medical and surgical supplies, pharmacy, anesthesia supplies, and room and bed. Professional fees for the suburethral synthetic sling removal procedure were obtained from the Medicare fee-for-service schedule. Costs for 2013 were adjusted by 3% to match 2014 costs. Results: From 2013 to 2014 a total of 46 women underwent suburethral synthetic sling removal. Mean ± SD length of surgery was 62 ± 22 minutes and median length of stay was 1 day (range 0 to 2). Costs for medical and surgical supplies decreased significantly from 2013 to 2014 while operating room and total cost increased during that time. No significant differences were found among payer types. With the 3% inflation adjustment for 2013 the mean total cost based on these factors was $3,714 ± $941, with a median cost of $3,556. Of the 46 women 13 were treated on an outpatient basis and the median cost was reduced at $3,030. Conclusions: At our tertiary care center the mean total cost of suburethral synthetic sling removal was $3,714, with a slight total cost increase from 2013 to 2014 but a cost saving for those treated as outpatients. This information will be useful for inclusion in the overall cost of mid urethral sling procedures. abstract_id: PUBMED:32564122 Short-, mid-, and long-term incontinence outcomes in women undergoing mid-urethral sling procedures: a retrospective cohort study. Introduction And Hypothesis: The primary aim was to compare subjective treatment success among women in short-, mid-, and long-term follow-up after mid-urethral sling (MUS). Symptom severity, condition-specific quality of life (QOL), and patient satisfaction were also examined. Methods: Women undergoing a primary MUS between 2001 and 2010 were identified by CPT code. Eligible subjects were mailed the Urogenital Distress Inventory (UDI-6), Pelvic Floor Impact Questionnaire (PFIQ-7), Patient Global Impression of Improvement (PGI-I), and Patient Satisfaction Questionnaire (PSQ). Follow-up intervals were short term (≤ 36 months), mid term (37-70 months), and long term (119-200 months). The primary outcome of treatment success was defined as responses of "not at all" or "somewhat" to both stress urinary incontinence (SUI) subscales on the UDI-6. Results: Of 896 respondents, 361 were assessed in the short-term (23.3 ± 7.2 months), 251 in the mid-term (49.8 ± 9.1 months), and 284 in the long-term group (147.9 ± 20.6 months). Treatment success was 75.4% in the short-, 62.3% in the mid-, and 67.0% in the long-term groups (p &lt; 0.01). Logistic regression showed women with mid- and long-term follow-up were nearly half as likely as their short-term counterparts to report treatment success (adjusted odds ratio [aOR] 0.51, 95% confidence interval [CI] 0.36, 0.74). UDI-6 and PFIQ-7 scores differed significantly among the short-, mid- and long-term groups (p &lt; 0.01). Patient satisfaction was similar, 83.3% in the short-, 76.6% in the mid-, and 78.2% in the long-term follow-up (p = 0.31). Conclusion: Women with short-term follow-up had the highest subjective treatment success rates; mid- and long-term follow-up was lower, but sustained after 3 years. Symptom severity and impact on QOL were lowest in the short-term group. However, high satisfaction was noted across all groups. abstract_id: PUBMED:29701328 Repeat retropubic suburethral sling procedure is effective for recurrent female stress urinary incontinence. Objective: The aim of the present study was to investigate the effectiveness of repeat retropubic suburethral sling for recurrent stress urinary incontinence (SUI) in women. Methods: A retrospective review was performed of 35 women with SUI who underwent repeat retropubic suburethral sling procedures between May 1994 and November 2014. Treatment outcome was assessed directly or by telephone interview. Sling position and urethral incompetence during straining and coughing after the procedure were evaluated. Results: The overall continence rate after repeat suburethral sling procedures was 60% (21/35). Among the 35 women, 19 underwent a simple retropubic suburethra sling procedure and 16 underwent combined suburethral sling and other pelvic floor surgery; the success rate for these 2 groups was 84.2% (16/19) and 31.3% (5/16; P = .001), respectively. The success rate was 60% in patients with normal detrusor function (n = 18/30) and in those with detrusor underactivity (n = 3/5; P = 1.0). For patients with intrinsic sphincter deficiency and bladder base hypermobility, the success rate was 63.6% (n = 14/22) and 53.8% (n = 7/13), respectively (P = .36). Among the 25 patients with transrectal ultrasound follow-up data, the success rate following placement of the second sling at the bladder neck, proximal urethra, middle urethra, and distal urethra was 50% (2/4), 87.5% (7/8), 36.4% (4/11), and 0% (0/2), respectively (P = .122). Conclusion: Repeat suburethral sling procedures for recurrent SUI are safe and effective. The position of the second sling at the proximal urethra resulted in a relatively higher continence rate relative to other sites. Answer: The initial success of the Monarc® suburethral sling appears to be moderately durable at mid-term evaluation. A study assessing the mid-term functional outcome for Monarc® transobturator tape after initial success found that after a mean follow-up period of 51 months, the cure rate was 61%. This indicates that 61% of the women who were initially cured were still free from any leakage at the time of the last follow-up. However, this also suggests that there is a significant proportion of patients (39%) who may experience a recurrence of symptoms over time. The study did not identify any pejorative prognostic factors for mid-term failure in their univariate analysis (PUBMED:27032312). This finding is consistent with another study that reported a subjective cure rate for stress urinary incontinence of 78% after Monarc suburethral sling placement, with a statistically significant decrease in mobility at points corresponding to the urethral central aspect, which may contribute to the sustained effectiveness of the procedure (PUBMED:20171657). Moreover, another study comparing short-, mid-, and long-term outcomes after mid-urethral sling procedures, including Monarc, found that while treatment success was highest in the short-term group (75.4%), mid- (62.3%) and long-term (67.0%) follow-up groups showed lower success rates. However, the success was sustained after 3 years, indicating that the initial success can be durable to some extent (PUBMED:32564122). In summary, the initial success of the Monarc® suburethral sling for treating stress urinary incontinence does show some level of durability at mid-term follow-up, with around 61% of patients maintaining continence at approximately 4 years post-procedure. However, there is a notable decline from the initial success rates, and some patients may experience a recurrence of symptoms over time.
Instruction: Induction of cytokines and adhesion molecules in stable hemodialysis patients: is there an effect of membrane material? Abstracts: abstract_id: PUBMED:14583663 Induction of cytokines and adhesion molecules in stable hemodialysis patients: is there an effect of membrane material? Background/aims: The aim of this study was to examine the effects of a biocompatible and a nonbiocompatible hemodialysis membrane on the secretion of cytokines and their specific antagonists as well as on the expression of adhesion molecules. Methods: A crossover study using cuprophane (CU) and polysulfone (PS) dialyzers and measurements of serum tumor necrosis factor (TNF)-alpha, soluble TNF receptors, interleukin (IL)-6, IL-6 receptor, IL-2 receptor, IL-1 receptor antagonist and the adhesion molecules intercellular adhesion molecule 1, endothelial leukocyte adhesion molecule and vascular cell adhesion molecule was performed in 19 hemodialysis patients. 44 healthy volunteers served as controls. Results: All of the measured cytokines and cytokine antagonists as well as all adhesion molecules were significantly elevated in hemodialysis patients compared to controls. There was a significant increase in TNF-alpha during a dialysis session with a CU dialyzer, but only a moderate increase using a PS dialyzer. None of the other cytokines and adhesion molecules were changed during a dialysis session. We were also able to show an upregulation of adhesion molecules in dialysis patients. Conclusion: Our study clearly demonstrates that levels of inflammatory cytokines as well as their antagonists and adhesion molecules are elevated in patients on hemodialysis therapy. Baseline values before the start of a dialysis session did not show any differences with regard to the usage of CU or PS dialysis membranes. However, CU dialyzers led to a significantly greater stimulation of TNF-alpha during the dialysis session in comparison to PS membranes, suggesting a higher degree of bioincompatibility. abstract_id: PUBMED:10584995 Is dialysis membrane type responsible for increased circulating adhesion molecules during chronic hemodialysis? Background: Patients with chronic renal failure under maintenance hemodialysis (HD) present with numerous adverse effects including immunologic alterations. Serious abnormalities of neutrophil function have been reported to be associated with disturbed cell adhesiveness. These adhesion processes are mediated by cytokines and different adhesion molecules. Patients And Methods: In this study, serum concentrations of the intercellular adhesion molecule ICAM-1, vascular cell adhesion molecule VCAM-1 and endothelial leukocyte adhesion molecule E-selectin were investigated during employment of different dialysis membranes (cuprophane: n = 23, cellulose: 8, polysulfone: 26, acrylonitrile: 7). These adhesion parameters from 64 patients before and after a hemodialysis session were investigated parallel to the serum levels of circulating cytokines and their inhibitors. Results: Circulating ICAM-1 levels were not elevated in low-flux membranes and most of the high-flux HD membranes, except for one high-flux polysulfone membrane. cVCAM-1 levels were significantly elevated both in low- and high-flux dialysis membranes, whereas cE-selectin was not increased. cICAM-1 levels were not different before and after hemodialysis in the entire study group. In contrast, cVCAM-1 and cE-selectin levels increased significantly during HD in the entire study group (both p &lt; 0.001). Serum levels did not correlate with the duration of end-stage renal failure and hemodialysis. Levels of circulating cytokine antagonists/inhibitors (Il-lra, Il-2R, TNFsRp55/75) were significantly increased in all patients before and after HD, whereas the serum concentrations of the corresponding circulating cytokines (I1-1beta, Il-1, TNF-alpha) were within normal ranges. Conclusion: Increased levels of cVCAM-1 which suggest an important role for immunological alterations in HD and cytokine-independent changes during HD sessions in all membranes without alterations of cICAM-1 in most membranes and unchanged cE-selectin indicate that processes such as uremia are responsible for these effects rather than membrane characteristics. The level of circulating adhesion molecules does not serve as an appropriate marker of membrane biocompatibility. abstract_id: PUBMED:7802927 Adhesion molecules and inflammatory dermatoses In inflammatory dermatoses, adhesion molecules are involved in the interaction of leukocytes with endothelial cells, extra-cellular matrix and epidermal cells. In dermatosis where epidermal cells are the main targets of inflammation, the keratinocytes are activated and participate to the local immune reaction through the secretion of cytokines and the expression of the adhesion molecule ICAM-1 and the HLA-DR antigens. Induction of ICAM-1 by IFN gamma or TNF alpha on keratinocytes that do not express this molecule in normal skin may account for the recruitment of T cells into the epidermis. In the dermis, an up-regulation of ICAM-1 expression occurs on endothelial cells activated by cytokines (IL-1, TNF alpha...) and is usually correlated with an induction of ELAM-1 and less frequently VCAM-1. These adhesion molecules are involved in the recruitment of inflammatory cells but also in the control of their retention and migration through the skin. abstract_id: PUBMED:7587640 Adhesion molecules in cutaneous inflammation. As in other organs, leukocyte adhesion molecules and their ligands play a major role in cutaneous inflammatory events both by directing leukocyte trafficking and by their effects on antigen presentation. Skin biopsies of inflamed skin from patients with diseases such as as psoriasis or atopic dermatitis reveal up-regulation of endothelial cell expression of P- and E-selectin, vascular cell adhesion molecule 1 and intercellular adhesion molecule 1. Studies of evolving lesions following UVB irradiation, Mantoux reaction or application of contact allergen, demonstrate that expression of these adhesion molecules parallels leukocyte infiltration into skin. When cutaneous inflammation is widespread (e.g. in erythroderma), soluble forms of these molecules are detectable in serum. In vitro studies predict that peptide mediators are important regulatory factors for endothelial adhesion molecules. Intradermal injection of the cytokines interleukin 1, tumour necrosis factor alpha and interferon gamma into normal human skin leads to induction of endothelial adhesion molecules with concomitant infiltration of leukocytes. In addition, neuropeptides rapidly induce P-selectin translocation to the cell membrane and expression of E-selectin. Adhesion molecules also play a crucial role as accessory molecules in the presentation of antigen to T lymphocytes by Langerhans' cells. Expression of selectin ligands by Langerhans' cells is up-regulated by various inflammatory stimuli, suggesting that adhesion molecules may be important in Langerhans' cell migration. The skin, because of its accessibility, is an ideal organ in which to study expression of adhesion molecules and their relationship to inflammatory events. Inflammatory skin diseases are common and inhibition of lymphocyte accumulation in skin is likely to prove of great therapeutic benefit. abstract_id: PUBMED:7772244 Cell adhesion molecules and the immune system Cellular interactions are controlled by complex mechanisms which come into play at the receptors on the cell surface (adhesion molecules: selectins, integrins, superfamily of immunoglobulins), the soluble cell mediators (cytokines) and the components of the tissue matrix (fibronectin, collagen, etc.). Disturbance of one of these systems may induce a pathological condition. The physiological state of the individual therefore depends on the balance of all these components. In the development of inflammation, adhesion molecules play an essential role in the localisation of the inflammatory response. At this level, the vascular endothelium, a governing barrier for the exchanges between blood and the tissues, plays an active part in regulation of the transcapillary permeability, control of proliferation of haematopoietic cells and the phases of the inflammatory response. After they have marginated, the active cells migrate by diapedesis towards the site of inflammation by creation of chemotactic signals as the adhesion between the cells is insufficient to induce their migration. The adherence phenomena depend on a process that is strictly controlled by the cytokines and enable intervention of cell-cell reactions and cell-protein recognition of the extra-cellular matrix. Cytokines play a key role in control of the expression and/or avidity of membrane receptors for ligand(s). An appropriate and rapid response of the circulating cells depends on coordination of the train of events that regulate the functional expression of the adhesion molecules. Use of specific antibodies that prevent cell adherence opens very important therapeutic possibilities because a single blockage of cell adhesion can have an immediate direct impact on development of the inflammatory response. abstract_id: PUBMED:8908538 Adhesion molecules in liver transplantation. Adhesion molecules play a central role in regulation of the immune response after liver allografting. This review gives an overview of the current knowledge of expression and induction of cell-cell, and cell-matrix adhesion molecules after liver transplantation. The current knowledge on the stepwise generation of tissue inflammation, the differential expression of adhesion molecules on liver endothelia, infiltrating leukocytes, and their induction by cytokines is summarized. New diagnostic options and possible therapeutic strategies by using monoclonal antibodies to adhesion molecules are described. abstract_id: PUBMED:10420385 Immunohistochemical detection of cytokines and cell adhesion molecules in the synovial membrane. This paper describes the immunohistochemical techniques which can be used to detect cytokines and cell adhesion molecules in synovial membrane tissue, including a list of reagents and possible problems in each technique. It also describes three methods of quantitation of the resultant immunohistochemical detection, including the recent innovation computer-assisted digital video image analysis, and lists the advantages and disadvantages of each quantitation technique. This information will be a useful summary for any scientist interested in applying such techniques to the detection of cytokines and cell adhesion molecules in human tissue sections. abstract_id: PUBMED:10867380 Adhesion molecules and asthma The inflammation markers in asthma can help the prognosis, diagnosis and monitoring of the respiratory illness. The adhesion molecules (AM) are membrane glycoproteins that intervene in the contact between the two cells or between the cell and the extracellular matrix. Their aim is to maintain the contact between two cells for the time that is necessary in order to establish a communication between them and so that the function indicated takes place. The most important functions of the AM's are to facilitate the adhesion of the circulating leukocytes to the vascular endothelium with the posterior transendothelial migration that contribute, by doing so, to the perpetuation of the inflammatory reaction in bronchial asthma. The AM's can act as receptors or as specific ligands for these receptors and are expressed both in leukocytes and endothelial cells as well as in epithelial cells. There are several AM families that allow for these interactions and that collaborate in the specificity of the cellular recognition in the immunologic reaction. The most known are the selectines, the super family of the immunoglobulins and the integrins, that act in a cascading manner. In bronchial asthma, the inflammatory reaction consists of three sequential processes: the recognition-activation phase, the inflammation phase and the solution phase. The AM's act in the second or the inflammation phase. The positive control of the E-selectine, followed by that of the ICAM-1 on the surface of the endothelium cell, leads in sequence to the adhesion, initially of the neutrophils and then the lymphocytes and monocytes, as a response to the antigenic stimulus. The initial loose adhesion of the leukocytes to the vascular endothelium, activated by contact or by cytokines, takes place by means of the selectines. In the microvasculature the strength of the flow pushes the adhered leukocytes, causing the disappearance of the selectine-ligand interactions which form againg quickly in descending direction as the leukocyte moves. The result is the rolling of the spherical leukocyte along the surface of the endothelium. This leukocyte becomes activated and its cytoskeleton is rearranged, and now has a flat shape and the affinity of its integrins is increased by the endothelial ligands. This leukocyte can die within a few days, can activate itself or exit via the lymphatic vessels. The activated monocytes that are located in the extravascular tissues are differentiated from the histiocytes, which are the final cellular effectors of the inflammatory reaction. The importance of the knowledge of the AM structure and the functions, is due to the fact that there is a possible pharmacological action on them. The data obtained from clinical studies, as well as the in vitro results, confirm the importance of the AM modulation as well as therapeutic approach to bronchial asthma. abstract_id: PUBMED:7802928 Role of adhesion molecules in bronchial inflammation and bronchial hyperreactivity Adhesion molecules are glycoproteins situated in the cell membrane. These molecules permit cells to integrate specifically with the specific cellular receptors or ligands and with the extracellular matrix during intra-tissular migration. Structurally, 4 large families can be distinguished: the immunoglobulin family, the integrins, the selectines, and finally the cadherines. The results of preliminary studies in man seem to confirm those obtained in vivo and in vitro in animals. The in vitro studies have shown that overall the inflammatory cells, such as the eosinophils as well as the endothelial and epithelial cells have many adhesion molecules, the regulation of which is dependent on many cytokinetic mediators, such as interleukin 1 beta (IA-1 beta), tumour necrosis factor (TNF alpha) and interleukin 4 (IL4). In man, it has been observed that there is a significant increase in TNF alpha in the bronchial mucosae of asthmatics. Together, the experimental studies have shown the crucial role of adhesion molecules in specific recruitment and their regulation by cytokines in the physiopathology of bronchial inflammation and hyper-reactivity that are characteristic of asthmatic disease. abstract_id: PUBMED:7676683 The significance of adhesion molecules in oncology Adhesion molecules are membrane proteins responsible for the complex functions of cell adhesion and cellular recognition and are thus of importance in inflammatory as well as neoplastic diseases. Adhesion molecules seem to play a significant role at each level of the metastatic cascade, including the destruction of normal cell-cell as well as cell-substrate cohesion, the penetration of tumor cells into the vascular system and the further spread into distant organs. In this summary an overview of subtypes, structure and function of the major groups of adhesion molecules is given and their possible role in the development, propagation and metastatic spread of malignancies discussed. Cell adhesion and its defects may be of importance in the behaviour of tumor cells and their spread. A better understanding of their function and possible manipulation of their expression, e.g., by cytokines could provide new therapeutic approaches in clinical oncology. Answer: The induction of cytokines and adhesion molecules in stable hemodialysis patients appears to be influenced by the type of membrane material used during the dialysis process. A study examining the effects of biocompatible and nonbiocompatible hemodialysis membranes on the secretion of cytokines and expression of adhesion molecules found that all measured cytokines, cytokine antagonists, and adhesion molecules were significantly elevated in hemodialysis patients compared to controls. Specifically, there was a significant increase in tumor necrosis factor-alpha (TNF-alpha) during a dialysis session with a cuprophane (CU) dialyzer, but only a moderate increase when using a polysulfone (PS) dialyzer. This suggests that CU dialyzers may lead to a greater stimulation of TNF-alpha during the dialysis session, indicating a higher degree of bioincompatibility compared to PS membranes (PUBMED:14583663). Another study investigating serum concentrations of intercellular adhesion molecule 1 (ICAM-1), vascular cell adhesion molecule 1 (VCAM-1), and endothelial leukocyte adhesion molecule (E-selectin) during the use of different dialysis membranes found that cVCAM-1 levels were significantly elevated in both low- and high-flux dialysis membranes. However, cICAM-1 levels were not different before and after hemodialysis in the entire study group, and cE-selectin levels increased significantly during hemodialysis in the entire study group. The study concluded that increased levels of cVCAM-1 suggest an important role for immunological alterations in hemodialysis, and that uremia rather than membrane characteristics might be responsible for these effects. It was also noted that the level of circulating adhesion molecules does not serve as an appropriate marker of membrane biocompatibility (PUBMED:10584995). These findings indicate that the type of membrane material used in hemodialysis can have an effect on the induction of cytokines and adhesion molecules, with some materials potentially being more bioincompatible and leading to greater inflammatory responses.
Instruction: Lack of motivation: a new modifiable risk factor for peritonitis in patients undergoing peritoneal dialysis? Abstracts: abstract_id: PUBMED:25410720 Lack of motivation: a new modifiable risk factor for peritonitis in patients undergoing peritoneal dialysis? Aim: Can we identify modifiable risk factors for peritonitis in patients undergoing peritoneal dialysis (PD)? We aimed to determine whether housing standard, PD exchange technique or patient motivation might be modifiable risks for peritonitis. We also explored the relationship between lack of motivation and depression. Methods: Nurse home visits assessed PD exchange technique, environment and patient motivation. Motivation scores were correlated separately with an Apathy Evaluation Score and a depression score using PHQ-9 questionnaires. Results: Home hygiene, exchange technique and motivation were above average in 53%, 56% and 60%, respectively in 104 patients undergoing PD. After 15 months, 25.9% patients developed peritonitis but nurses' ratings of homes and exchange techniques were not predictive. Low patient motivation was predictive. Patients rated to have above or below median motivation had significantly different Apathy Scores (p = 0.0002). Unmotivated depressed patients were significantly more likely to develop peritonitis compared to motivated depressed patients. Conclusion: Lack of motivation predicted peritonitis particularly if associated with depression. Further studies are required focusing on specific motivation scoring schemes and the psychosocial support that might lead to better outcomes. abstract_id: PUBMED:34279003 Overhydration, A New Risk Factor for Peritonitis in Peritoneal Dialysis. Introduction: Overhydration (OH) remains a recurrent problem in peritoneal dialysis (PD), with deleterious effect in outcomes. Recent evidence suggests a direct relation between OH and increased peritonitis risk. The mechanisms of this connection are not well defined, but gut wall edema and malnutrition are probably involved. Methods: Our aim was to assess OH as a risk factor for peritonitis in patients on PD. Retrospective study was done in a PD program with a bio impedance analysis. The investigator reviewed patient charts and documents. The Fresenius® Body Composition Monitor was used to obtain hydration parameters. OH was considered when Overhydration/Extracellular Water (OH/ECW) parameter was over 15% of the dry weight. The diagnosis of peritonitis was made according to the International Society of Peritoneal Dialysis guidelines. Associations between peritonitis rate and the collected variables were assessed using Chi-square test and Pearson's correlation. Results: An association between OH and the risk of peritonitis was established. Conclusion: OH is prevalent in our patients undergoing PD and it is a modifiable risk factor for peritonitis. The bio impedance analysis is economical and should be used in association with a physical exam and treatment results to achieve the normo-hydrated status in those patients. abstract_id: PUBMED:33692295 Smoking is a Risk Factor for Endogenous Peritonitis in Patients Undergoing Peritoneal Dialysis. Background: Peritonitis is one of the most common complications in patients undergoing peritoneal dialysis, (PD) but it is difficult to predict or prevent. In this study, we analyzed the risk of endogenous peritonitis in patients receiving PD. Methods: We included all patients who underwent PD at our hospital from April 2015 to March 2020. There were 22 cases of peritonitis, including 18 cases of endogenous peritonitis without evidence of exit-site infection or technical failure. We evaluated older age, female sex, obesity, diabetes, diverticulosis, and constipation as potential important risk factors for endogenous peritonitis and included these as confounding factors, along with a current or previous history of smoking, in univariate logistic regression models. Results: A previous or current history of smoking (p = 0.0065) was the most significant risk factor for endogenous peritonitis in the univariate logistic regression model. In addition, smoking was the most significant independent risk factor for endogenous peritonitis (p = 0.0034) in multivariate logistic regression models. Diabetes was also significant in univariate and multivariate logistic regression analysis. Conclusions: Smoking is a significant independent risk factor for endogenous peritonitis in patients undergoing PD. Cessation of smoking may lower the risk of endogenous peritonitis in this patient group. abstract_id: PUBMED:26369571 Hydration status measured by BCM: A potential modifiable risk factor for peritonitis in patients on peritoneal dialysis. Aim: Peritoneal dialysis peritonitis and fluid overhydration (OH) are frequent problems in peritoneal dialysis. The latter can cause gut wall oedema or be associated with malnutrition. Both may lead to increased peritonitis risk. We wished to determine if OH is an independent risk factor for peritonitis (caused by enteric organisms). Methods: Retrospectively study of patients with &gt;2 bioimpedance assessments (Body Composition Monitor). We compared peritonitis rates of patients with above or below the median time-averaged hydration parameter (OH/extracellular water, OH/ECW). Multivariate analysis was performed to determine independent risk factors for peritonitis by enteric organism. Results: We studied 580 patients. Peritonitis was experienced by 28% patients (followed up for an average of 17 months). The overall peritonitis rate was 1:34 patient months. Patients with low OH/ECW values had significantly lower rates of peritonitis from enteric organisms than overhydrated patients (incident rate ratio 1.53, 95% confidence interval 1.38-1.70, P &lt; 0.001). Hydration remained an independent predictor of peritonitis from enteric organisms when multivariate model included demographic parameters (odds ratio for a 1% increment of OH/ECW was 1.05; 95% confidence interval 1.01-1.10, P &lt; 0.02). However, including biochemical parameters of malnutrition reduced the predictive power of overhydration. Conclusion: We found an association between overhydration and increased rates of peritonitis. While this may partly be due to the high co-morbidity of patients (advanced age and diabetes), on multivariate analysis, only inclusion of nutritional parameters reduced this association. It remains to be determined if overhydration will prove to be a modifiable risk factor for peritonitis or whether malnutrition will prove to be more important. abstract_id: PUBMED:33913381 Risk factors for mortality in patients undergoing peritoneal dialysis: a systematic review and meta-analysis. Aim: Inconsistent investigations of the risk factors for all-cause mortality in patients undergoing peritoneal dialysis (PD) were reported. The present meta-analysis aimed to assess the impact of some clinical characteristics on the risk of mortality in PD patients. Methods: PubMed and Embase were systematically searched for studies evaluating the risk factors for all-cause mortality in PD patients. Hazard ratio (HR) and 95% confidence interval (CI) were derived using a random-effect or fixed-effect model considering the heterogeneity across studies. Result: A total of 26 studies were included in this meta-analysis in accordance with the inclusion and exclusion criteria. Age, primary cardiovascular diseases, diabetes mellitus, and high level of alkaline phosphatase showed significant positive associations with elevated risk of all-cause and cardiovascular mortality in PD patients, while hemoglobin acted as a benefit factor. Furthermore, early onset of peritonitis, high peritoneal transport status, elevated body mass index and high-sensitivity C-reactive protein could also considerably increase the risk of all-cause mortality. The absolute serum level of magnesium, potassium, and uric acid required to improve survival in PD patients should be verified further. Conclusions: Multiple factors could affect the risk of mortality in PD patients. abstract_id: PUBMED:17918121 Peritoneal dialysis-related peritonitis: can we predict it? Peritonitis complicating peritoneal dialysis represents a major cause of technique failure, hospitalization, and increased mortality. Peritonitis tends to be recurrent and clustered within particular patients at risk. The aim of this review is to evaluate the potential predictive factors for development of peritoneal dialysis-associated peritonitis based on currently available evidence. Risk factors were divided into medical and non-medical ones, and characterized by a schema of fixed versus modifiable factors. A new direction in the landscape change of the risk factors of peritonitis appears to focus on psychosocial aspects and patient training. Identification of these factors have important clinical implications because of the hitherto lack of well-established strategies to prevent peritonitis complicating peritoneal dialysis. It is hoped that better understanding of the risk factors will allow us to take tangible steps toward minimizing the infectious burden from the Achilles' heel of peritoneal dialysis. abstract_id: PUBMED:36964507 Prevalence of chronic periodontitis in patients undergoing peritoneal dialysis and its correlation with peritoneal dialysis-related complications. Objective: The microinflammatory state can influence the occurrence of dialysis-related complications in dialysis patients. Chronic periodontitis (CP), in which plaque biofilm is considered to be the initiating factor, is a chronic infectious disease in the oral cavity. It is still uncertain whether CP affects the microinflammatory state in peritoneal dialysis (PD) and the occurrence of dialysis-related complications. The purpose of this study was to investigate the correlation between the periodontal index and clinical parameters in peritoneal dialysis patients with CP and dialysis-related complications, including peritoneal dialysis-associated peritonitis (PDAP) and cardiovascular and cerebrovascular events (CCEs). Methods: This was a retrospective cohort study, and 76 patients undergoing PD were enrolled. Clinical parameters, the occurrence of PD-related complications and periodontitis-related indicators, including the gingival index (GI), plaque index (PLI), probing depth (PPD) and clinical attachment loss (CAL), were collected. Correlation analysis was used to explore the correlation between periodontal or clinical parameters and the occurrence of PD-related complications. Results: All the patients had different degrees of periodontitis (mild 9.2%, moderate 72.4%, severe 18.4%); PPD was inversely related to serum albumin (r = - 0.235, p = 0.041); CAL has a positive correlation with serum C-reactive protein (rs = 0.242, p = 0.035); PLI was positively correlated with serum calcium (r = 0.314, p = 0.006). ANOVA, multivariate logistic regression analysis and Kaplan-Meier Survival curve suggested that CAL was a risk factor for the occurrence of PDAP. There was no correlation between periodontal parameters and CCEs or poor prognosis. Conclusion: CP is universally present in PD patients, and the presentation of periodontitis influences the systemic inflammatory state in PD patients. CP is a risk factor for PDAP. abstract_id: PUBMED:33495136 Risk factors for peritoneal dialysis withdrawal due to peritoneal dialysis-related peritonitis. Background: Peritoneal dialysis has become commonly used for renal replacement therapy; however, some patients withdraw from peritoneal dialysis due to complications, including peritoneal dialysis-related peritonitis, resulting in the low number of patients on peritoneal dialysis. Risk factors for peritoneal dialysis withdrawal due to peritoneal dialysis-related peritonitis are less certain. This retrospective study aimed to investigate these risk factors. Methods: We retrospectively analyzed clinical characteristics, laboratory data, and causative microorganisms of 204 episodes of peritoneal dialysis-related peritonitis between 2007 and 2018 at our institution. Results: Of the 204 episodes, 38 resulted in withdrawal from peritoneal dialysis due to peritoneal dialysis-related peritonitis. The number of peritonitis episodes per patient-year and the incidence of cardiovascular disease were significantly higher in the withdrawal group. Similarly, this group had low levels of serum creatinine, urea nitrogen, serum albumin, alanine aminotransferase, cholinesterase and high C-reactive protein, and second dialysate cell counts after antibiotic administration. Multivariate logistic regression analysis revealed that serum albumin (odds ratio: 0.465; 95% confidence interval: 0.249-0.868; P=0.016) and cardiovascular disease (odds ratio: 2.508; 95% confidence interval: 1.184-5.315; P=0.016) exhibited significant differences. Conclusions: The results of this study suggest that hypoalbuminemia and the presence of cardiovascular disease were independent risk factors for withdrawal from peritoneal dialysis due to peritoneal dialysis-related peritonitis. abstract_id: PUBMED:30014626 Longer-Period Effects of Bicarbonate/Lactate-Buffered Neutral Peritoneal Dialysis Fluid in Patients Undergoing Peritoneal Dialysis. High concentrations of lactate are considered to contribute to impairment of the peritoneal membrane. We investigated the longer-period effects of bicarbonate/lactate-buffered neutral peritoneal dialysis fluid (PDF) in patients undergoing PD for about 2 years. Patients undergoing PD were changed from a lactate-buffered neutral PDF to a bicarbonate/lactate-buffered neutral PDF. We then investigated the patients' clinical outcomes and peritoneal membrane functions as well as the surrogate markers in the drained dialysate. Fourteen patients undergoing PD were enrolled. Peritonitis was observed in one patient. No other adverse events were observed. Peritoneal function did not change as the ultrafiltration volume decreased. Fibrin degradation products and vascular endothelial growth factor in the drained dialysate decreased while the interleukin level increased. These results suggest that bicarbonate/lactate-buffered neutral PDF may have beneficial effects in terms of peritoneal preservation and can be safely used in patients undergoing PD. abstract_id: PUBMED:19048383 Coronary artery bypass grafting in patients on maintenance dialysis: is peritoneal dialysis a risk factor of operative mortality? Background: Increasing numbers of patients on dialysis are undergoing coronary artery bypass grafting (CABG). We undertook this retrospective study to identify risk factors of operative mortality in dialysis patients who underwent CABG. Patients And Methods: We performed retrospective analysis of 105 patients who were on dialysis for at least two months before surgery and who underwent CABG in Toronto General Hospital from 1997 to 2006. Using prospectively collected data from the Division of Cardiovascular Surgery Database of Toronto General Hospital, we collected data on comorbidities, procedures, modality change during hospitalization, and operative outcomes. Logistic regression was used to assess risk factors of operative mortality. Results: One hundred and five maintenance dialysis patients (40 PD and 65 HD) who met the inclusion criteria were studied. Overall in-hospital mortality was 7.6%. Atrial fibrillation and pneumonia occurred in 16.2 and 9.5%, respectively, of all dialysis patients. Among PD patients, rates of post-operative dialysate leak and peritonitis were 10 and 12.5%, respectively. Among HD patients, 4.6% experienced post-operative AV access thrombosis. Logistic regression showed older age (&gt;or=70 years) and peritoneal dialysis are independent risk factors of operative mortality. Conclusion: In this retrospective study, older patients on PD had higher operative mortality than HD patients. These findings suggest extra care should be taken when CABG is considered for PD patients over 70 years old. In this study we could not identify the reason(s) for the high mortality of elderly peritoneal dialysis patients undergoing CABG. Answer: Yes, lack of motivation has been identified as a new modifiable risk factor for peritonitis in patients undergoing peritoneal dialysis (PD). A study aimed at identifying modifiable risk factors for peritonitis in PD patients found that low patient motivation was predictive of peritonitis development. Specifically, patients who were rated as having below median motivation had significantly higher rates of peritonitis, particularly if they were also depressed. The study used motivation scores correlated with an Apathy Evaluation Score and a depression score using PHQ-9 questionnaires to assess the relationship between lack of motivation and depression. The conclusion drawn was that lack of motivation, especially when associated with depression, predicted peritonitis, suggesting that psychosocial support and specific motivation scoring schemes might lead to better outcomes (PUBMED:25410720).
Instruction: Should we consider an acceptable drinking level specifically for postmenopausal women? Abstracts: abstract_id: PUBMED:15955777 Should we consider an acceptable drinking level specifically for postmenopausal women? Preliminary findings from the postmenopausal health disparities study. Aims: There are substantial neuroendocrine differences between postmenopausal (PMP) women and women with cyclic ovarian function; thus there are differences in hormonal responses to 'acceptable' levels of drinking, i.e. 7 total weekly drinks (TWD). The aim is to evaluate whether alcoholic beverage consumption for PMP women should be the same for all women, as is currently the case, or different from that for women with cyclic ovarian function. Methods: Carefully collected and cross-checked current drinking data in 318 PMP women from a spectrum of racial and ethnic groups have been analysed to determine if a plateau in estradiol (E2) levels at a particular alcohol consumption level other than 7 TWD can be identified as the drinking level above which no additional benefit in E2 levels occurs. Results: Levels of E2 are significantly different among abstainers and among drinkers within all racial and ethnic groups, with the disturbing exception of black PMP women. Although there are significant differences among racial and ethnic groups in the proportions of women who drink, there are no differences in TWD consumed during the previous month. Between &lt; or =5 TWD and &lt; or =6 TWD, a plateau in the correlation of E2 with increasing drinks/week occurs, and the range of the unstandardized multiple regression coefficient +/- the standard error do not overlap between these same two drinking levels (P &lt; 0.05). Conclusions: Consideration should be given to lowering the acceptable level of drinking for PMP women 5 TWD. Additional studies in which sample sizes for each minority racial and ethnic are larger need to be performed in the future. abstract_id: PUBMED:27548609 Association Between Drinking and Obesity in Pre- and Postmenopausal Women: Korea National Health and Nutrition Examination Survey 2010-2012. Background: Women are more vulnerable to the adverse effects of alcohol than men. The present study aimed to investigate the link between drinking and obesity in pre- and postmenopausal women in Korea. Methods: We performed a cross-sectional study of 4374 premenopausal and 2927 postmenopausal women using a multistage probability cluster survey sample to produce nationally representative estimates. We assessed the subjects' alcohol drinking tendencies rates according to their drinking levels as well as Alcohol Use Disorders Identification Test (AUDIT); obesity was identified based on body mass index (BMI) ≥25 kg/m2, waist circumference (WC) ≥80 cm, and waist-to-height ratio (WHtR) ≥50%. We performed t-tests and chi-square tests to assess the association between drinking and obesity. Results: In premenopausal subjects, obesity indices increased significantly as alcohol consumption rose. Significant correlations between drinking level and obesity factors were found in premenopausal women after adjusting for age (odds ratios [ORs] for BMI, WC, and WHtR were 1.58 [1.08-2.31], 1.94 [1.11-3.00], and 1.80 [1.24-2.61], respectively). Furthermore, an AUDIT score of 20 or higher indicated a significantly higher likelihood of obesity (ORs for BMI, WC, and WHtR were 2.02 [1.18-3.46], 2.75 [1.70-4.87], and 2.86 [1.78-4.59], respectively). There was a significant correlation between AUDIT scores and obesity factors after adjusting for age, energy intake, fat intake, exercise, smoking, education, and income in premenopausal women (ORs for BMI and WHtR were 1.71 [0.85-3.47] and 1.73 [0.97-3.06], respectively). Conclusion: Our results suggest that alcohol is associated with a risk factor for obesity in premenopausal women. abstract_id: PUBMED:12394287 Carbohydrate-deficient transferrin levels reflect heavy drinking in alcohol-dependent women seeking treatment. Background: Carbohydrate-deficient transferrin (CDT) is a biochemical marker that has been shown to be sensitive in detecting heavy drinking in men, but studies examining CDT in women have been inconsistent because of small sample sizes and failure to consider hormonal status. In healthy female subjects, CDT levels are significantly higher in premenopausal women with higher estradiol (E2) levels (&gt;30 pg/ml) and those taking exogenous estrogens (oral contraceptives, hormone replacement therapy) compared with men and postmenopausal women. This study examined the relationship between drinking behavior and CDT levels in a large sample of alcohol-dependent women and contrasted findings in a comparison group of alcohol-dependent men. The study also examined the extent that E2 levels mediated the relationship between CDT levels and heavy drinking in the alcohol-dependent women. Methods: This study examined the association between CDT level at treatment entry and alcohol consumption the month before initiating treatment in 96 women with a DSM-III-R diagnosis of alcohol dependence, as compared with similar data in 123 male alcoholics. To explore the relationship between E2 and CDT, E2 was measured in women at the time of CDT sampling. Linear regression was used to examine whether patterns of alcohol consumption in the 28 days before the CDT blood sampling predicted the CDT level in women and men presenting for treatment for alcohol dependence. Results: CDT levels were higher in women than men and were related to quantitative alcohol consumption (total standard drinks, percentage of days drinking, percentage of days of heavy drinking) in the month before initiating treatment, irrespective of E2 levels in women. Conclusions: These results suggest that in a larger sample of female alcoholics, the amount of alcohol consumed predicted CDT, similar to what has been reported in male alcoholics. The E2 status did not seem to mediate these results. abstract_id: PUBMED:32665024 Changes in excessive alcohol use among older women across the menopausal transition: a longitudinal analysis of the Study of Women's Health Across the Nation. Background: Recent data suggest that excessive alcohol use is increasing among women and older adults. Such trends are concerning, as women are more vulnerable to alcohol-related health consequences, and such health problems may be exacerbated with age. Furthermore, there are sex-specific factors that may influence alcohol consumption among women, including the hormonal changes associated with the menopausal transition and negative affect. The present study sought to investigate transitions in excessive drinking among women across the menopausal transition and included exploration of sex hormones (estradiol; testosterone) and depression. Methods: The present study utilized publicly available data from the Study of Women Across the Nation (SWAN) and included 3302 women (42-52 years old at baseline), who completed 10 years of annual assessments. National Institute on Alcohol Abuse and Alcoholism (NIAAA) criteria were used as guidance when defining excessive drinking within the present dataset. At year 1, 170 women were identified as drinking excessively. Random-effect logistic regressions were used to examine transitions in excessive drinking. Results: Women identified as excessive drinkers were more likely to transition to non-excessive drinking across all menopausal transition stages (ORs range = 3.71-5.11), while women were more likely to transition from non-excessive to excessive drinking during the early peri- and postmenopausal stages (OR = 1.52 and 1.98, respectively). Higher testosterone levels were associated with a decreased likelihood of transitioning to non-excessive drinking (OR = 0.59). Depression and estradiol levels were not related to transitions in drinking. Conclusions: The present study demonstrates that the menopausal transition marks a period of instability in alcohol use among women. Further research is warranted to understand factors related to transitioning in and out of excessive drinking. abstract_id: PUBMED:27258851 Nitrate from Drinking Water and Diet and Bladder Cancer Among Postmenopausal Women in Iowa. Background: Nitrate is a drinking water contaminant arising from agricultural sources, and it is a precursor in the endogenous formation of N-nitroso compounds (NOC), which are possible bladder carcinogens. Objectives: We investigated the ingestion of nitrate and nitrite from drinking water and diet and bladder cancer risk in women. Methods: We identified incident bladder cancers among a cohort of 34,708 postmenopausal women in Iowa (1986-2010). Dietary nitrate and nitrite intakes were estimated from a baseline food frequency questionnaire. Drinking water source and duration were assessed in a 1989 follow-up. For women using public water supplies (PWS) &gt; 10 years (n = 15,577), we estimated average nitrate (NO3-N) and total trihalomethane (TTHM) levels and the number of years exceeding one-half the maximum contaminant level (NO3-N: 5 mg/L, TTHM: 40 μg/mL) from historical monitoring data. We computed hazard ratios (HRs) and 95% confidence intervals (CIs), and assessed nitrate interactions with TTHM and with modifiers of NOC formation (smoking, vitamin C). Results: We identified 258 bladder cancer cases, including 130 among women &gt; 10 years at their PWS. In multivariable-adjusted models, we observed nonsignificant associations among women in the highest versus lowest quartile of average drinking water nitrate concentration (HR = 1.48; 95% CI: 0.92, 2.40; ptrend = 0.11), and we found significant associations among those exposed ≥ 4 years to drinking water with &gt; 5 mg/L NO3-N (HR = 1.62; 95% CI: 1.06, 2.47; ptrend = 0.03) compared with women having 0 years of comparable exposure. TTHM adjustment had little influence on associations, and we observed no modification by vitamin C intake. Relative to a common reference group of never smokers with the lowest nitrate exposures, associations were strongest for current smokers with the highest nitrate exposures (HR = 3.67; 95% CI: 1.43, 9.38 for average water NO3-N and HR = 3.48; 95% CI: 1.20, 10.06 and ≥ 4 years &gt; 5 mg/L, respectively). Dietary nitrate and nitrite intakes were not associated with bladder cancer. Conclusions: Long-term ingestion of elevated nitrate in drinking water was associated with an increased risk of bladder cancer among postmenopausal women. Citation: Jones RR, Weyer PJ, DellaValle CT, Inoue-Choi M, Anderson KE, Cantor KP, Krasner S, Robien K, Beane Freeman LE, Silverman DT, Ward MH. 2016. Nitrate from drinking water and diet and bladder cancer among postmenopausal women in Iowa. Environ Health Perspect 124:1751-1758; http://dx.doi.org/10.1289/EHP191. abstract_id: PUBMED:18616686 One-per-occasion or less: are moderate-drinking postmenopausal women really healthier than their nondrinking and heavier-drinking peers? Background: While some reported benefits of moderate drinking are thought to be the direct results of physiological mechanisms associated with consumption, other effects may be attributable to mediating factors. Both explanations suggest that moderate drinkers may be healthier than nondrinking and heavier-drinking peers. The purpose of this study was to determine whether moderate-drinking postmenopausal women report healthier dietary and exercise patterns, and whether they demonstrate better physiological functioning compared with peers. This study also aimed to describe patterns of relationships between alcohol and measures of general health functioning in postmenopausal women. Methods: One-hundred and fifteen women aged 50 to 65 participated. Participants completed alcohol interviews, diet and exercise questionnaires, and bone density examination. Blood pressure, height, and weight were assessed, and blood was collected to conduct basic chemistry and complete blood count tests. Results: Postmenopausal moderate drinkers failed to demonstrate healthier dietary or exercise habits, and did not exhibit significantly better health-functioning compared with peers. They did evidence positive associations between drinking and healthy behavior, and between drinking and cardiovascular health. Relationships between alcohol and blood analytes differed between drinking groups, some of which suggested possible negative health consequences for higher-end drinkers in both drinking groups. Conclusions: These data suggested that moderate-drinking postmenopausal women are not significantly healthier than their non- or heavier-drinking peers, but may drink as part of a larger effort to take care of their health. Despite this, even 1-drink-per-drinking-day moderate drinkers may have tendencies toward unhealthy conditions. Limitations include small sample size, inability to assess specific cardiac risk and socioeconomic status, small number of correlations, and clinical relevance of analyte values. abstract_id: PUBMED:32189157 Oolong tea drinking boosts calcaneus bone mineral density in postmenopausal women: a population-based study in southern China. Introduction: Observational studies have shown that tea consumption has a potentially beneficial effect on bone health. However, few studies have assessed the effects of types of tea consumed on bone health. We aimed to investigate whether drinking oolong tea is associated with increased calcaneus bone mineral density (BMD) in postmenopausal women. Methods: From an epidemiological survey in Shantou, 476 postmenopausal women aged 40 to 88 years were enrolled in the study. All women were questioned about their demographic features, lifestyle, health status, types of tea consumed, habit of tea consumption, and habitual dietary intake by use of a structured questionnaire. Estimated areal BMD was measured by calcaneal quantitative ultrasound (QUS). Results: As compared with non-tea drinkers, oolong tea drinkers had higher calcaneus BMD (β 34.70 [95% CI 10.38, 59.03]). In addition, calcaneus BMD was significantly increased for those drinking 1-5 cups/day (β 27.43 [95% CI 3.70, 51.16]) but not &gt; 5 cups/day. We observed no linear increase in calcaneus BMD with increasing years of tea consumption and local polynomial regression fitting showed a parabola-shaped association between years of tea consumption and calcaneus BMD. However, symptoms of osteoporosis did not differ by types of tea consumed. Conclusion: Long-term moderate oolong tea consumption may have beneficial effects on bone health in postmenopausal women in Shantou of southern China. abstract_id: PUBMED:33822648 Fluoride in Drinking Water, Diet, and Urine in Relation to Bone Mineral Density and Fracture Incidence in Postmenopausal Women. Background: Although randomized controlled trials (RCTs) have demonstrated that high fluoride increases bone mineral density (BMD) and skeletal fragility, observational studies of low-dose chronic exposure through drinking water (&lt;1.5mg/L, the maximum recommended by the World Health Organization) have been inconclusive. Objective: We assessed associations of fluoride in urine, and intake via diet and drinking water, with BMD and fracture incidence in postmenopausal women exposed to drinking water fluoride ≤1mg/L. Methods: Data were from participants in the Swedish Mammography Cohort-Clinical, a population-based prospective cohort study. At baseline (2004-2009), fluoride exposure was assessed based on urine concentrations (n=4,306) and estimated dietary intake (including drinking water) (n=4,072), and BMD was measured using dual energy X-ray absorptiometry. Incident fractures were ascertained via register-linkage through 2017. Residential history was collected to identify women with long-term consistent drinking water exposures prior to baseline. Results: At baseline, mean urine fluoride was 1.2mg/g creatinine (±1.9) and mean dietary intake was 2.2mg/d (±0.9), respectively. During follow-up, 850, 529, and 187 cases of any fractures, osteoporotic fractures, and hip fractures, respectively, were ascertained. Baseline BMD was slightly higher among women in the highest vs. lowest tertiles of exposure. Fluoride exposures were positively associated with incident hip fractures, with multivariable-adjusted hazard ratios of 1.50 (95% CI: 1.04, 2.17) and 1.59 (95% CI: 1.10, 2.30), for the highest vs. lowest tertiles of urine fluoride and dietary fluoride, respectively. Associations with other fractures were less pronounced for urine fluoride, and null for dietary fluoride. Restricting the analyses to women with consistent long-term drinking water exposures prior to baseline strengthened associations between fractures and urinary fluoride. Discussion: In this cohort of postmenopausal women, the risk of fractures was increased in association with two separate indicators of fluoride exposure. Our findings are consistent with RCTs and suggest that high consumption of drinking water with a fluoride concentration of ∼1mg/L may increase both BMD and skeletal fragility in older women. https://doi.org/10.1289/EHP7404. abstract_id: PUBMED:26885032 A cross-sectional study to estimate associations between education level and osteoporosis in a Chinese postmenopausal women sample. Background: Our research aims to investigate the associations between education level and osteoporosis (OP) in Chinese postmenopausal women. Methods: A large-scale, community-based, cross-sectional study was conducted to examine the associations between education level and OP. A self-reported questionnaire was used to access the demographical information and medical history of the participants. A total of 1905 postmenopausal women were available for data analysis in this study. Multiple regression models controlling for confounding factors to include education level were performed to investigate the relationship with OP. Results: The prevalence of OP was 28.29% in our study sample. Multivariate linear regression analyses adjusted for relevant potential confounding factors detected significant associations between education level and T-score (β = 0.025, P-value = 0.095, 95% CI: -0.004-0.055 for model 1; and β = 0.092, P-value = 0.032, 95% CI: 0.008-0.175 for model 2). Multivariate logistic regression analyses detected significant associations between education level and OP in model 1 (P-value = 0.070 for model 1, Table 5), while no significant associations was reported in model 2 (P value = 0.131). In participants with high education levels, the OR for OP was 0.914 (95% CI: 0.830-1.007). Conclusion: The findings indicated that education level was independently and significantly associated with OP. The prevalence of OP was more frequent in Chinese postmenopausal women with low educational status. abstract_id: PUBMED:10710273 Importance of bioavailable calcium drinking water for the maintenance of bone mass in post-menopausal women. The aim of this research was to establish the importance of calcium intake through mineral water on vertebral bone density in women. To this purpose, we examined 255 women divided into two groups: those regularly drinking a high calcium content mineral water (group A; no.=175) and those using different type of water with a lower calcium content (group B; no.=80). Their dietary daily calcium intake was determined by means of a validated questionnaire (N.I.H. Consensus statement) and vertebral bone density was measured by Dual-Energy X-ray absorptiometry (Unigamma-plus ACN densitometer). Women in group A ingested a significantly higher quantity of calcium in water than women in group B (mean difference 258 mg; 95% confidence limits: 147-370 mg). The average bone density values were slightly but significantly higher in group A as compared to group B (mean+/-SD: 1.044+0,15 vs 1.002+0,14; p=0.03). In addition to age, BMI and menopausal status, calcium intake was a significant predictor of spinal BMD. These 4 variables explained about 35% of the spinal BMD variance. When the analysis was repeated separately for pre- and post-menopausal subjects, calcium remained a significant predictor in post-menopausal women (t=2.28; p=0.02), but not in premenopausal women. These results underline the importance of a lifelong daily calcium intake, resulting by the regular drinking of high bioavailable calcium water, in order to maintain bone mass after the menopause, in comparison to the use of a lower content calcium water. Answer: Based on the findings from the provided abstracts, it appears that there is a case for considering an acceptable drinking level specifically for postmenopausal women. The Postmenopausal Health Disparities Study suggests that postmenopausal (PMP) women may have different hormonal responses to alcohol compared to women with cyclic ovarian function, and that a plateau in estradiol (E2) levels occurs at a lower level of alcohol consumption than the currently accepted 'acceptable' level of 7 total weekly drinks (TWD). The study found that between ≤5 TWD and ≤6 TWD, a plateau in the correlation of E2 with increasing drinks/week occurs, suggesting that the acceptable level of drinking for PMP women might be lowered to 5 TWD (PUBMED:15955777). Furthermore, research on the association between drinking and obesity in pre- and postmenopausal women in Korea found that alcohol is associated with a risk factor for obesity in premenopausal women (PUBMED:27548609). Although this study does not provide specific recommendations for postmenopausal women, it does highlight the potential adverse effects of alcohol on women's health. Additionally, a study on excessive alcohol use among older women across the menopausal transition found that the menopausal transition marks a period of instability in alcohol use among women, with some transitioning from non-excessive to excessive drinking during the early peri- and postmenopausal stages (PUBMED:32665024). This suggests that postmenopausal women may be at a higher risk of developing unhealthy drinking habits. Moreover, a study on nitrate from drinking water and diet and bladder cancer among postmenopausal women in Iowa found that long-term ingestion of elevated nitrate in drinking water was associated with an increased risk of bladder cancer among postmenopausal women (PUBMED:27258851). While this study focuses on nitrate rather than alcohol, it underscores the importance of considering the specific health risks that postmenopausal women face from various substances in their diet and environment. In conclusion, the evidence suggests that postmenopausal women may have different health risks and responses to alcohol consumption compared to other populations, and therefore, it may be beneficial to consider an acceptable drinking level specifically tailored to their needs. Further research is needed to establish precise guidelines.
Instruction: Is it necessary to cure prostate cancer when it is possible? Abstracts: abstract_id: PUBMED:36340999 Cancer Cure and Consequences on Survivorship Care: Position Paper from the Italian Alliance Against Cancer (ACC) Survivorship Care Working Group. A multidisciplinary panel of experts and cancer patients developed a position paper to highlight recent evidence on "cancer cure" (ie, the possibility of achieving the same life expectancy as the general population) and discuss the consequences of this concept on follow-up and rehabilitation strategies. The aim is to inform clinicians, patients, and health-care policy makers about strategies of survivorship care for cured cancer patients and consequences impacting patient lives, spurring public health authorities and research organizations to implement resources to the purpose. Two identifiable, measurable, and reproducible indicators of cancer cure are presented. Cure fraction (CF) is &gt;60% for breast and prostate cancer patients, &gt;50% for colorectal cancer patients, and &gt;70% for patients with melanoma, Hodgkin lymphoma, and cancers of corpus uteri, testis (&gt;90%), and thyroid. CF was &gt;65% for patients diagnosed at ages 15-44 years and 30% for those aged 65-74 years. Time-to-cure was consistently &lt;1 year for thyroid and testicular cancer patients and &lt;10 years for patients with colorectal and cervical cancers, melanoma, and Hodgkin lymphoma. The working group agrees that the evidence allows risk stratification of cancer patients and implementation of personalized care models for timely diagnosis, as well as treatment of possible cancer relapses or related long-term complications, and preventive measures aimed at maintaining health status of cured patients. These aspects should be integrated to produce an appropriate follow-up program and survivorship care plan(s), avoiding stigma and supporting return to work, to a reproductive life, and full rehabilitation. The "right to be forgotten" law, adopted to date only in a few European countries, may contribute to these efforts for cured patients. abstract_id: PUBMED:32984907 Cancer cure for 32 cancer types: results from the EUROCARE-5 study. Background: Few studies have estimated the probability of being cured for cancer patients. This study aims to estimate population-based indicators of cancer cure in Europe by type, sex, age and period. Methods: 7.2 million cancer patients (42 population-based cancer registries in 17 European countries) diagnosed at ages 15-74 years in 1990-2007 with follow-up to 2008 were selected from the EUROCARE-5 dataset. Mixture-cure models were used to estimate: (i) life expectancy of fatal cases (LEF); (ii) cure fraction (CF) as proportion of patients with same death rates as the general population; (iii) time to cure (TTC) as time to reach 5-year conditional relative survival (CRS) &gt;95%. Results: LEF ranged from 10 years for chronic lymphocytic leukaemia patients to &lt;6 months for those with liver, pancreas, brain, gallbladder and lung cancers. It was 7.7 years for patients with prostate cancer at age 65-74 years and &gt;5 years for women with breast cancer. The CF was 94% for testis, 87% for thyroid cancer in women and 70% in men, 86% for skin melanoma in women and 76% in men, 66% for breast, 63% for prostate and &lt;10% for liver, lung and pancreatic cancers. TTC was &lt;5 years for testis and thyroid cancer patients diagnosed below age 55 years, and &lt;10 years for stomach, colorectal, corpus uteri and melanoma patients of all ages. For breast and prostate cancers, a small excess (CRS &lt; 95%) remained for at least 15 years. Conclusions: Estimates from this analysis should help to reduce unneeded medicalization and costs. They represent an opportunity to improve patients' quality of life. abstract_id: PUBMED:36104936 Measuring population-level cure patterns for cancer patients in the United States. While the life expectancy of cancer survivors has substantially improved over time in the United States, the extent to which cancer patients are cured is not known. Population-level cure patterns are important indicators to quantify cancer survivorships. This population-based cohort study included 8978,721 cancer patients registered in the Surveillance, Epidemiology and End Results (SEER) databases between 1975 and 2018. The primary outcome was cure fractions. Five-year cure probability, time to cure and median survival time of uncured cases were also assessed. All four measures were calculated using flexible parametric models, according to 46 cancer sites, three summary stages, individual age and calendar year at diagnosis. In 2018, cure fractions ranged from 2.7% for distant liver cancer to 100.0% for localized/regional prostate cancer. Localized cancer had the highest cure fraction, followed by regional cancer and distant cancer. Except for localized breast cancer, older patients generally had lower cure fractions. There were 38 cancer site and stage combinations (31.2%) that achieved 95% of cure within 5 years. Median survival time of the uncured cases ranged from 0.3 years for distant liver cancer to 10.9 years for localized urinary bladder cancer. A total of 117 cancer site and stage combinations (93.6%) had increased cure fraction over time. A considerable proportion of cancer patients were cured at the population-level, and the cure patterns varied substantially across cancer site, stage and age at diagnosis. Increases in cure fractions over time likely reflected advances in cancer treatment and early detection. abstract_id: PUBMED:31207165 Prognosis and cure of long-term cancer survivors: A population-based estimation. Background: Increasing evidence of cure for some neoplasms has emerged in recent years. The study aimed to estimate population-based indicators of cancer cure. Methods: Information on more than half a million cancer patients aged 15-74 years collected by population-based Italian cancer registries and mixture cure models were used to estimate the life expectancy of fatal tumors (LEFT), proportions of patients with similar death rates of the general population (cure fraction), and time to reach 5-year conditional relative survival (CRS) &gt;90% or 95% (time to cure). Results: Between 1990 and 2000, the median LEFT increased &gt;1 year for breast (from 8.1 to 9.4 years) and prostate cancers (from 5.2 to 7.4 years). Median LEFT in 1990 was &gt;5 years for testicular cancers (5.8) and Hodgkin lymphoma (6.3) below 45 years of age. In both sexes, it was ≤0.5 years for pancreatic cancers and NHL in 1990 and in 2000. The cure fraction showed a 10% increase between 1990 and 2000. It was 95% for thyroid cancer in women, 94% for testis, 75% for prostate, 67% for breast cancers, and &lt;20% for liver, lung, and pancreatic cancers. Time to 5-year CRS &gt;95% was &lt;10 years for testis, thyroid, colon cancers, and melanoma. For breast and prostate cancers, the 5-year CRS &gt;90% was reached in &lt;10 years but a small excess remained for &gt;15 years. Conclusions: The study findings confirmed that several cancer types are curable. Became aware of the possibility of cancer cure has relevant clinical and social impacts. abstract_id: PUBMED:32855903 Bayesian cure-rate survival model with spatially structured censoring. We propose a Bayesian spatial model for time-to-event data in which we allow the censoring mechanism to depend on covariates and have a spatial structure. The survival model incorporates a cure rate fraction and assumes that the time-to-event follows a Weibull distribution, with covariates such as race, stage, grade, marital status and age at diagnosis being linked to its scale parameter. With right censoring being a primary concern, we consider a joint logistic regression model for the death versus censoring indicator, allowing dependence on covariates and including a spatial structure via the use of random effects. We apply the models to examine prostate cancer data from the Surveillance, Epidemiology, and End Results (SEER) registry, which displays marked spatial variation. abstract_id: PUBMED:25149707 Long-term survival, prevalence, and cure of cancer: a population-based estimation for 818 902 Italian patients and 26 cancer types. Background: Persons living after a cancer diagnosis represent 4% of the whole population in high-income countries. The aim of the study was to provide estimates of indicators of long-term survival and cure for 26 cancer types, presently lacking. Patients And Methods: Data on 818 902 Italian cancer patients diagnosed at age 15-74 years in 1985-2005 were included. Proportions of patients with the same death rates of the general population (cure fractions) and those of prevalent patients who were not at risk of dying as a result of cancer (cure prevalence) were calculated, using validated mixture cure models, by cancer type, sex, and age group. We also estimated complete prevalence, conditional relative survival (CRS), time to reach 5- and 10-year CRS &gt;95%, and proportion of patients living longer than those thresholds. Results: The cure fractions ranged from &gt;90% for patients aged &lt;45 years with thyroid and testis cancers to &lt;10% for liver and pancreatic cancers of all ages. Five- or 10-year CRS &gt;95% were both reached in &lt;10 years by patients with cancers of the stomach, colon-rectum, pancreas, corpus and cervix uteri, brain, and Hodgkin lymphoma. For breast cancer patients, 5- and 10-year CRSs reached &gt;95% after 19 and 25 years, respectively, and in 15 and 18 years for prostate cancer patients. Five-year CRS remained &lt;95% for &gt;25 years after cancer diagnosis in patients with liver and larynx cancers, non-Hodgkin lymphoma, myeloma, and leukaemia. Overall, the cure prevalence was 67% for men and 77% for women. Therefore, 21% of male and 31% of female patients had already reached 5-year CRS &gt;95%, whereas 18% and 25% had reached 10-year CRS &gt;95%. Conclusions: A quarter of Italian cancer patients can be considered cured. This observation has a high potential impact on health planning, clinical practice, and patients' perspective. abstract_id: PUBMED:32442822 A biochemical definition of cure after brachytherapy for prostate cancer. Background And Purpose: To identify a PSA threshold value at an intermediate follow-up time after low dose rate (LDR) prostate brachytherapy associated with cure, defined as long-term (10-15 year) freedom from prostate cancer. Materials And Methods: Data from 7 institutions for 14,220 patients with localized prostate cancer treated with LDR brachytherapy, either alone (8552) or with external beam radiotherapy (n = 1175), androgen deprivation (n = 3165), or both (n = 1328), were analyzed. Risk distribution was 42.4% favorable, 49.2% intermediate, and 8.4% high-risk. Patients with clinical failure before 3.5 years were excluded. Kaplan-Meier analysis was used with clinical failure (local, distant, regional or biochemical triggering salvage) as an endpoint for each of four PSA categories: PSA ≤ 0.2, &gt;0.2 to ≤0.5, &gt;0.5 to ≤1.0, and &gt;1.0 ng/mL. PSA levels at 4 years (±6 months) in 8746 patients without clinical failure were correlated with disease status at 10-15 years. Results: For the 77.1% of patients with 4-year PSA ≤ 0.2, the freedom-from-recurrence (FFR) rates were 98.7% (95% CI 98.3-99.0) at 10 years and 96.1% (95% CI 94.8-97.2) at 15 years. Three independent validation cohorts confirmed 97-99% 10-year FFR rates with 4-year PSA ≤ 0.2. Successive PSA categories were associated with diminished disease-free rates at 10 and 15 years. PSA category was strongly associated with treatment success (p &lt; 0.0005). Conclusions: Since 98.7% of patients with PSA ≤ 0.2 ng/mL at 4 years after LDR prostate brachytherapy were disease-free beyond 10 years, we suggest adopting this biochemical definition of cure for patients with ≥4 years' follow-up after LDR brachytherapy. abstract_id: PUBMED:11022760 Is cure of T1c prostate cancer necessary in those for whom it is possible? N/A abstract_id: PUBMED:31032999 Variable selection in semiparametric nonmixture cure model with interval-censored failure time data: An application to the prostate cancer screening study. Censored failure time data with a cured subgroup is frequently encountered in many scientific areas including the cancer screening research, tumorigenicity studies, and sociological surveys. Meanwhile, one may also encounter an extraordinary large number of risk factors in practice, such as patient's demographic characteristics, clinical measurements, and medical history, which makes variable selection an emerging need in the data analysis. Motivated by a medical study on prostate cancer screening, we develop a variable selection method in the semiparametric nonmixture or promotion time cure model when interval-censored data with a cured subgroup are present. Specifically, we propose a penalized likelihood approach with the use of the least absolute shrinkage and selection operator, adaptive least absolute shrinkage and selection operator, or smoothly clipped absolute deviation penalties, which can be easily accomplished via a novel penalized expectation-maximization algorithm. We assess the finite-sample performance of the proposed methodology through extensive simulations and analyze the prostate cancer screening data for illustration. abstract_id: PUBMED:1749526 Prostatic cancer: the case for conservatism. Carcinoma of the prostate is the most common malignancy in men. It is the second leading cause of cancer deaths in men, with a five-year survival rate of 72 percent. One asks, "Is the cure necessary in those in whom it may be possible? Is cure possible in those whom it may be necessary?" Answer: The question of whether it is necessary to cure prostate cancer when it is possible is a complex one that involves considering the individual patient's situation, including the stage and aggressiveness of the cancer, the patient's age, overall health, and personal preferences. Some studies suggest that prostate cancer can often be a slow-growing disease, and not all cases require immediate treatment. For instance, the concept of "the case for conservatism" in prostate cancer treatment is mentioned, suggesting that not all detected prostate cancers may warrant aggressive treatment, especially if they are not causing symptoms or progressing rapidly (PUBMED:1749526). This is particularly relevant for older patients or those with low-risk, localized prostate cancer, where active surveillance or watchful waiting might be considered. On the other hand, when cure is possible, especially in cases of localized/regional prostate cancer, the cure fractions can be very high, reaching up to 100% (PUBMED:36104936). This indicates that with appropriate treatment, many patients with prostate cancer can achieve long-term disease-free survival. Additionally, a biochemical definition of cure after brachytherapy for prostate cancer has been proposed, suggesting that a PSA level ≤ 0.2 ng/mL at 4 years post-treatment is associated with a high likelihood of being disease-free at 10-15 years (PUBMED:32855903). Ultimately, the decision to pursue a cure for prostate cancer when it is possible should be made on a case-by-case basis, taking into account the potential benefits and risks of treatment, the likelihood of cure, the patient's life expectancy, quality of life considerations, and the patient's values and preferences. It is important for patients to discuss their options with their healthcare providers to make an informed decision that aligns with their goals of care.
Instruction: Can delayed time to referral to a tertiary level urologist with an abnormal PSA level affect subsequent Gleason grade in the opportunistically screened population? Abstracts: abstract_id: PUBMED:23657938 Can delayed time to referral to a tertiary level urologist with an abnormal PSA level affect subsequent Gleason grade in the opportunistically screened population? Purpose: There is growing conflict in the literature describing the effect of delayed treatment on outcomes following radical prostatectomy. There is also evidence to suggest progression of low-risk prostate cancer to develop higher grades and volumes of prostate cancer during active surveillance. It is unknown as to what affect a delay in referral of those men with abnormal screened-PSA levels have on subsequent Gleason grade. Methods: We identified 350 men through our rapid access prostate clinic who underwent TRUS biopsy for abnormal age-related PSA and/or abnormal clinical examination. Clinicopathological findings were compared for those with positive versus negative TRUS biopsies, and for those with initial delays in referral (&lt;12 months, 12-18 months, and &gt;18 months). We used ANOVA and Student's t-tests amongst other statistical tools to examine significance of clinical findings. Results: Of the 350 men who underwent TRUS biopsy, those with a delay in referral of 12 months or more were significantly associated with higher PSA titers, clinically palpable disease and likelihood of diagnosis with prostate cancer. A delay of 18 months or more led to a significantly higher risk of being diagnosed with a leading grade 4 prostate cancer, which was further supported using PSA velocity as a diagnostic tool (change &gt;0.4 ng/ml/year). Conclusion: We recommend that repeated asymptomatic abnormal age-related PSA readings and/or abnormal clinical examination in the screened population be referred without delay to a urologist for further assessment, enrolment into an active surveillance program or definitive subsequent treatment. abstract_id: PUBMED:37787299 Outlook of Gleason score in prostate carcinoma and correlation with PSA levels: A study in a tertiary care hospital. Introduction: In India, prostate carcinoma is the fifth most common cause of carcinoma in men, and the fourth in cancer mortality. Incidence of prostate malignancy is increasing in India by one percent every year. Prostate carcinoma is graded using the Gleason scoring system. The Gleason score is very useful for predicting the prognosis of prostate carcinoma. The collaborative use of clinical features, combined with the size of the prostate, prostate-specific antigen levels, and histopathological features helps in making an accurate and early diagnosis of the patients. Material And Method: This present prospective study included all histologically proven cases of prostate carcinoma in the Department of Pathology, Guru Gobind Singh Medical Hospital over a period of one year. A detailed histopathological examination of the H and E stained sections is carried out under light microscope including histological typing, Gleason scoring and grading. The Gleason grade was correlated with serum PSA levels. Results: Out of 80 cases, 28 (35%) were reported as Gleason grade group 3 with a Gleason score of 7 (4 + 3). 12 cases (15%) showed a Gleason grade group 2 with a Gleason score of 7 (3 + 4). Grade group 4 (Gleason score 8) and grade group 5 were observed in 22 cases (27.5%) and 18 cases (22.5%), respectively. Whereas, no case of Gleason grade group 1 with Gleason score of 6 (3 + 3) was seen. Conclusion: Gleason's Group Grade 3 outnumbered all the group grades, contributing 35% (28 cases) of the total cases of prostatic adenocarcinoma. Serum PSA levels were raised (&gt;10 ng/ml) in 100% of cases. Henceforth, it signifies the importance of serum PSA levels in prostate carcinoma. Lymphovascular invasion was associated with higher group grade of prostatic adenocarcinoma. abstract_id: PUBMED:33580599 Prognostic significance of tertiary Gleason pattern in the contemporary era of Gleason grade grouping: A narrative review. Since the Gleason score was developed in 1966 as a histological classification for prostate cancer, it has been widely used in clinical practice and has evolved over time. The concept of a "tertiary Gleason pattern" (also known as a minor Gleason pattern) was first proposed in 2000, and has been used in clinical practice since the 2005 International Society of Urological Pathology conference. The prognostic significance of a tertiary Gleason pattern has been widely validated in various settings of prostate cancer, whereas its definition has yet to be fully established. Currently, a provisional definition of tertiary Gleason pattern is "&lt;5% Gleason pattern 4 or 5 in radical prostatectomy specimens." In contrast, "Gleason grade grouping" was proposed in 2013 and came into use in clinical practice in 2016 according to the 2014 International Society of Urological Pathology conference. Although the prognostic significance of Gleason grade grouping has already been widely confirmed, it does not incorporate the concept of tertiary Gleason pattern. Recently, the 2019 International Society of Urological Pathology conference discussed how to handle tertiary Gleason pattern in the current Gleason scoring system, but no consensus was reached on the issue. This review summarizes the evidence on the prognostic significance of tertiary Gleason pattern and discusses how to deal with it in the context of the contemporary Gleason grade grouping. It also refers to reporting of the percentage of Gleason patterns 4 and 5, as well as quantitative Gleason score models incorporating tertiary Gleason pattern. abstract_id: PUBMED:23417241 Emerging evidence for Gleason grade migration and distance impact in prostate cancer? An analysis of the rapid access prostate clinic in a tertiary referral center: St. Vincent's University Hospital, Dublin (2009-2011). Background: Recent evidence has suggested that the introduction of rapid access prostate cancer programs has led to a more streamlined pathway for patients, and was designed to ultimately reduce referral delays. Aims: To identify the initial impact of the introduction of the rapid access prostate clinic on Gleason grading within the prostate cancer cohort, as well as the impact of distance from a tertiary referral center on subsequent Gleason grading. Methods: A prospective database was maintained from those men attended the rapid access prostate clinic in St. Vincent's University Hospital. Data relating to demographics, biopsy results, retrospective PSA readings, and subsequent treatment pathways were all recorded and analyzed. Statistical significance was taken at p&lt;0.05. Results: Prospective data from the rapid access prostate clinic illustrated similar results in patient demographics, Gleason grade and choice of treatment outcomes to other published institutions, however, for the first time demonstrate emerging evidence of the effect of the rapid access prostate clinic leading to a downward shift in Gleason grade over a 2-year period, as well as data showing an inverse correlation between leading Gleason grade and distance from our tertiary referral center. Conclusion: These results suggest that the introduction of the rapid access prostate clinic has initially begun to demonstrate an initial downgrading in Gleason scoring patterns. Our data also reflects a poorer Gleason score in those patients living further away from the rapid access prostate clinic. This may be in part attributed to a surge in referrals of those patients previously managed outside a tertiary institution, and suggests that patients should undergo prompt referral following suspicion for prostate cancer. abstract_id: PUBMED:32832424 High mortality risk of prostate cancer patients in Asia and West Africa: A systematic review. Globally, prostate cancer (PCa) is the second most preponderant cancer in men. It contributes to the high mortality-to-incidence ratio reported in West Africa and Asia largely due to low screening. The mortality risk is determined or predicted based on the prevalence of high-risk or aggressive PCa using a scoring or grading system such as Gleason score (GS), Gleason grade (GG), and prostate-specific antigen (PSA) level. In this review, peer-reviewed articles found on databases such as Google Scholar, Scopus, Web of Science, PubMed Central and, EMBASE were selected based on adherence to clinical guidelines for the classification of PCa. In West Africa and Asia, the result revealed that the frequency of high-risk PCa was 42% and 51.2% based on GS, 48.8% and 25.3% based on GG pattern, and 87.5% and 44.3% based on PSA level &gt;10 ng/mL, respectively. Data revealed a high prevalence of high-risk PCa both in West Africa and Asia when compared with developed countries. However, the prevalence of high-risk PCa is higher in West Africa than in Asia. Studies have shown that high-risk PCas are associated with germline mutations and such mutations are prevalent in blacks and Asians than in whites. Thus, testing for germline mutations in patients with GS of ≥ 7, GG ≥ 3, high prostate density, low prostate volume, and PSA levels of &gt;4.0 ng/mL may identify those at risk of developing lethal PCa and could reduce the mortality rates in Asia and West Africa. abstract_id: PUBMED:27993581 The effect of limited (tertiary) Gleason pattern 5 on the new prostate cancer grade groups. The risk of recurrence for prostatic adenocarcinoma after prostatectomy, as detected by prostate-specific antigen or other modalities, is based primarily on Gleason score along with pathologic tumor stage and surgical margin status. Recent large multi-institutional data spanning the last decade have supported modification of risk of recurrence stratification based on grade groups: grade group 1 (3+3=6), grade group 2 (3+4=7), grade group 3 (4+3=7), grade group 4 (4+4=8), and grade group 5 (Gleason scores 9 and 10). Using currently accepted grading definitions of grade patterns and grading rules, this study examines how the introduction of a limited, less than 5%, Gleason pattern 5 component at prostatectomy affects prognosis and fits into the grade group schema and reporting. The aggregate data from 2 independent major academic medical centers comprising 7606 patient records were analyzed with respect to biochemical recurrence-free survival. The presence of a limited (tertiary) Gleason pattern 5 component in the context of Gleason scores 3+4=7 (grade group 2) and 4+3=7 (grade group 3) imparts an intermediate prognosis relative to the next highest grade group. As such, we suggest that an additional comment and designation to the grade groups be provided reflecting the increased risk of recurrence in such cases (such as grade group 2+ or 3+). In contrast, the presence of limited (&lt;5%) Gleason pattern 5 in the context of Gleason score 4+4=8 imparts a poor prognosis equivalent to grade group 5 and therefore should be reported as grade group 5. abstract_id: PUBMED:17911498 PSA failure following definitive treatment of prostate cancer having biopsy Gleason score 7 with tertiary grade 5. Context: In 2005, the International Society of Urologic Pathology consensus conference recommended that men with biopsy Gleason score 3 + 4 or 4 + 3 prostate cancer and tertiary pattern 5 should have their cancer classified as Gleason score 8 or 9, respectively. Yet, the management of men with Gleason score 7 vs 8 or 9 prostate cancer differs. Objective: To compare the prognostic significance of Gleason score 7 with tertiary grade 5 vs other Gleason scores with respect to time to prostate-specific antigen (PSA) failure in men with prostate cancer. Design, Setting, And Patients: From 1989 to 2005, 2370 men with clinical tumor category 1c to 3b, node-negative, and nonmetastatic prostate cancer underwent definitive therapy with surgery or radiation therapy with or without hormonal therapy. A pathologist with expertise in genitourinary cancers assigned Gleason scores to the prostate needle biopsy specimens. Cox regression was used to assess whether a significant association existed between the presence of tertiary grade 5 in men with Gleason score 7 disease and time to recurrence compared with men with Gleason score 7 without tertiary grade 5, Gleason score 5 to 6, or 8 to 10 disease, adjusting for known prognostic factors and treatment. Main Outcome Measure: Time to PSA failure. Results: Men with Gleason score 7 and tertiary grade 5 disease had a significantly shorter time to PSA failure than men with 7 without tertiary grade 5 (median time, 5.0 vs 6.7 years, respectively; adjusted hazard ratio (HR), 0.56; 95% confidence interval [CI], 0.32-0.97; P = .04) or score of 6 or less (median time, 15.4 years; adjusted HR, 0.24; 95% CI, 0.13-0.43; P &lt; .001). However, a significant difference was not observed when these men were compared with men with Gleason score 8 to 10 disease (median time, 5.1 years; adjusted HR, 0.96; 95% CI, 0.54-1.71; P = .90). Conclusion: In this study population, men with prostate cancer having biopsy Gleason score 7 and tertiary grade 5 had a higher risk of PSA-failure when compared with men with Gleason score 7 without tertiary grade 5 and had a comparable risk with men with Gleason score 8 to 10. abstract_id: PUBMED:18239761 Abnormal PSA tests--delays in referral. Background: The main benefit of prostate specific antigen (PSA) testing is to help detect prostate cancer at an early, curable stage. Delays between the first abnormal PSA test and biopsy can undermine that benefit, but have not yet been studied. We investigated delays before biopsy together with associated PSA increases as an indicator of disease progression. Methods: We identified 241 patients with a primary care referral because of an elevated PSA result (&gt;4 ng/mL) and no previous prostate biopsy. Prostate specific antigen results and intervals between PSA testing, specialist clinic referral, appointment and biopsy were stratified by age. Results: Median times between first abnormal PSA, referral, consultation and biopsy were modest but associated with increases in PSA. Extended delays (&gt;20 months) between first abnormal PSA and referral occurred in 25% of younger men. A PSA result less than 10 ng/mL was the best predictor of a delay to refer. Discussion: Rising PSA and possible cancer progression during investigation for prostate cancer suggest that prompt care is advisable. abstract_id: PUBMED:12765060 PSA: the difficult position of general practitioners between patients and urologists Objective: To determine the reasons for ordering PSA assay and delayed referral to an urologist following detection of elevated PSA in general practice. Material And Method: Retrospective study of 200 patients referred to the department by general practitioners for elevated PSA from September 2000 to April 2001. Results: The median age was 67 years (range: 52 to 87). The median PSA was 7.9 ng/ml (range: 4.1 to 897). Patients were referred after 1 to 5 PSA assays (mean: 1.6) with a median interval after discovery of elevated PSA of 1.7 months. The referral time was greater than 6 months for 52 patients (26%) and 31 patients (15.6%) were referred after more than two PSA assays were found to be elevated. PSA assay was ordered in the absence of symptoms or abnormal digital rectal examination in 43% of patients over the age of 70 years. Conclusion: Prostate cancer screening is regularly performed in general practice. The information given to general practitioners, faced with a well informed population, appears to be insufficiently clear, as PSA assay is sometimes incorrectly ordered or interpreted. abstract_id: PUBMED:18355149 Biopsies performed at tertiary care centers are superior to referral biopsies in predicting pathologic Gleason sum. Objective: Biopsy grading at tertiary care centers may or may not be superior to biopsies performed at referral institutions. Methods: Referral biopsy and tertiary care center biopsy Gleason sums were studied in 758 men treated with radical prostatectomy (RP) at a tertiary care center between 1992 and 2004. Grade agreement was calculated using the Cohen kappa (ê). Logistic regression models predicting high-grade prostate cancer at RP were fitted using either referral or tertiary care center biopsies. Comparison of bootstrap-corrected predictive accuracy estimates were performed using the Mantel-Haenszel test. Results: Grade agreement between biopsy and RP Gleason sum was higher (P = 0.003) for tertiary care center biopsies v referral biopsies (55.5% v 47.9%; P = 0.003). Upgrading occurred in 39.8% of referral biopsies v 32.6% of tertiary care center biopsies (P = 0.03). Tertiary care center biopsies were more accurate in determining RP Gleason sum than referral biopsies (71.5% v 65.6%, P = 0.04). Conclusion: More accurate prediction of RP Gleason grade may be achieved if biopsy is performed and graded at tertiary care centers. Answer: Yes, delayed time to referral to a tertiary level urologist with an abnormal PSA level can affect subsequent Gleason grade in the opportunistically screened population. A study identified that men with a delay in referral of 12 months or more were significantly associated with higher PSA titers, clinically palpable disease, and likelihood of being diagnosed with prostate cancer. Furthermore, a delay of 18 months or more led to a significantly higher risk of being diagnosed with leading grade 4 prostate cancer (PUBMED:23657938). This suggests that prompt referral following the detection of an abnormal PSA is crucial for early assessment and management, which could potentially prevent the progression to higher Gleason grades.
Instruction: Does preconception care work? Abstracts: abstract_id: PUBMED:25132235 Does preconception care work? Background: To date, there is a lack of evidence to suggest that a systematic and coordinated approach to prepregnancy care might make a difference. Aims: To evaluate whether women who receive preconception care through a structured approach will be more likely to be healthy around the time of conception compared with women who plan their pregnancy but have not been exposed to preconception care. Methods: A case control study was undertaken of women who attended the preconception care service and subsequently conceived, received maternity care and gave birth at Mater Health Services Brisbane between January 2010 and January 2013. Pregnancy information and birth outcomes for each woman who attended the service were matched with those of three women who reported that they had planned their pregnancy but did not attend the service. Records were matched for prepregnancy BMI, age, parity, prepregnancy smoking status and number of health conditions. Results: Pregnant women who attended preconception care were more likely to have received adequate peri-conceptual folate, to report being vaccinated against influenza and hepatitis B, to have consulted with a specialist with the specific aim of optimising a pre-existing health condition and to report less weight gain up until booking. Preterm birth and hypertensive disorders of pregnancy were less common amongst women who had attended preconception care, and there were trends towards a decreased incidence of gestational diabetes, LGA and fetal anomalies. Conclusion: These preliminary data provide some optimism that a comprehensive preconception care service may positively influence maternal and neonatal outcomes. abstract_id: PUBMED:19081435 The clinical content of preconception care: preconception care for men. Little attention has been given to men's preconception health and health care. This paper reviews the key elements of an approach to optimizing the preconception health status of men. Preconception care for men is important for improving family planning and pregnancy outcomes, enhancing the reproductive health and health behaviors of their female partners, and preparing men for fatherhood. Most importantly, preconception care offers an opportunity, similar to the opportunity it presents for women, for disease prevention and health promotion in men. Currently, no consensus exists on service delivery of preconception care for men--who should provide preconception care to whom, where, when, and how, and there are significant barriers to this care including the organization, financing, training, and demand. Finally, much more research on the content and how to effectively market and implement preconception care for men is needed. abstract_id: PUBMED:16617292 Recommendations to improve preconception health and health care--United States. A report of the CDC/ATSDR Preconception Care Work Group and the Select Panel on Preconception Care. This report provides recommendations to improve both preconception health and care. The goal of these recommendations is to improve the health of women and couples, before conception of a first or subsequent pregnancy. Since the early 1990s, guidelines have recommended preconception care, and reviews of previous studies have assessed the evidence for interventions and documented the evidence for specific interventions. CDC has developed these recommendations based on a review of published research and the opinions of specialists from the CDC/ATSDR Preconception Care Work Group and the Select Panel on Preconception Care. The 10 recommendations in this report are based on preconception health care for the U.S. population and are aimed at achieving four goals to 1) improve the knowledge and attitudes and behaviors of men and women related to preconception health; 2) assure that all women of childbearing age in the United States receive preconception care services (i.e., evidence-based risk screening, health promotion, and interventions) that will enable them to enter pregnancy in optimal health; 3) reduce risks indicated by a previous adverse pregnancy outcome through interventions during the interconception period, which can prevent or minimize health problems for a mother and her future children; and 4) reduce the disparities in adverse pregnancy outcomes. The recommendations focus on changes in consumer knowledge, clinical practice, public health programs, health-care financing, and data and research activities. Each recommendation is accompanied by a series of specific action steps and, when implemented, can yield results within 2-5 years. Based on implementation of the recommendations, improvements in access to care, continuity of care, risk screening, appropriate delivery of interventions, and changes in health behaviors of men and women of childbearing age are expected to occur. The implementation of these recommendations will help achieve Healthy People 2010 objectives. The recommendations and action steps are a strategic plan that can be used by persons, communities, public health and clinical providers, and governments to improve the health of women, their children, and their families. Improving preconception health among the approximately 62 million women of childbearing age will require multistrategic, action-oriented initiatives. abstract_id: PUBMED:19051168 Preconception care Healthy mothers and children are the hope and dream of every country. Evidence shows that promoting health in women and couples before pregnancy can promote the health of their offspring. Hence, comprehensive healthcare for women should start before pregnancy (i.e., preconception care). Such represents an effective strategy to lessen adverse maternal and infant health outcomes. The Centers for Disease Control and Prevention (CDC) in the United States established in 2004 a working group to convene conferences addressing preconception care issues, which has since held two summit conferences (in 2005 and 2007) to develop recommendations and guidelines for improving preconception health and care. A new consensus on reproductive health promotion and protection as emerged. Preconception care, the primary preventive approach for women of reproductive age, works to identify and modify biomedical, behavioral, and social risks through preventive and management interventions. No national policy for preconception care currently exists in Taiwan. Hence, it is important to coordinate different resources related to preconception care available through schools, workplaces, outpatient service centers, families and the community in order to establish a uniform national policy and intervention programs that will further enhance the health of women and children in Taiwan. abstract_id: PUBMED:37890235 Current preconception care practice in the Netherlands - An evaluation study among birth care professionals. Objective: To evaluate the current practice of preconception care in the Netherlands and the perceptions of birth care professionals concerning preconception care. Methods: We have developed a digital questionnaire and conducted a cross-sectional study by distributing the questionnaire among 102 organisations: 90 primary care midwifery practices and obstetric departments of 12 hospitals in the Southwest region of the Netherlands between December 2020 and March 2021. One birth care professional per organization was asked to complete the questionnaire. Descriptive statistics were used to present the results. Findings: Respondents of eighty-three organisations (81.4 %) filled in the questionnaire, of whom 74 respondents were independent primary care midwives and 9 respondents were obstetricians. Preconception care mostly consisted of an individual consultation in which personalized health and lifestyle advice was given. Among the respondents, 44.4 % reported that the organization had a preconception care protocol. The way in which the consultation was carried out, as well as the health and lifestyle related questions asked, differed between respondents. More than 85 % of the respondents inquire about the following possible risk factors for complications: maternal illnesses, obstetric history, folic acid supplement intake, alcohol intake, smoking, substance abuse, hereditary disease, prescription medication, dietary habits, overweight, and birth defects in the family. The respondents acknowledged that preconception care should be offered to all couples who wish to become pregnant, as opposed to offering preconception care only to those with an increased risk of complications. Still, respondents do not receive many questions regarding the preconception period or requests for preconception care consultations. Key Conclusion: Birth care professionals acknowledge the need for preconception care for all couples. In the Netherlands, preconception care consists mostly of an individual consultation with recommendations for health and lifestyle advice. However, the identification of risk factors varies between birth care professionals and less than half of the respondents indicate that they have a protocol available in their practice. Furthermore, the demand of parents-to-be for preconception care is low. More research, that includes more obstetricians, is necessary to investigate if there is a difference between the care provided by primary care midwives and obstetricians. Implications For Practice: To increase the awareness and uptake of preconception care, it would be prudent to emphasize its importance to parents-to-be and professionals, and actively promote the use of widespread, standardized protocols for birth care professionals. abstract_id: PUBMED:15921968 Preconception care in diabetes. Preconception care is key to improving the outcome of diabetic pregnancy. Despite evidence showing that preconception care reduces congenital malformation in the offspring of diabetic mothers, most women do not plan their pregnancies with their diabetes team. Issues around managing this complex behaviour include the quality of the relationship with health care providers and the woman's health care beliefs. The elements of good preconception care have recently been defined, but there are problems around access to preconception services. There is a small number of preconception services within England, Wales and Northern Ireland; provision of these services needs to be increased if the goal set by the NSF for diabetes is to be achieved. abstract_id: PUBMED:33148313 Preconception care practices in Nigeria: a descriptive qualitative study. Background: Preconception care is a specialized care targeted at women of reproductive age before pregnancy to detect, treat or counsel them about pre-existing medical and social conditions that may militate against safe motherhood and positive pregnancy outcome. In spite of the known need for preconception care in Nigeria, routine preconception care services are not available in the country. This study explores existing preconception care practices in the country in order to encourage building on it and formalising it for inclusion in routine maternal and child health services in the country. Methods: Forty-one in-depth interviews and 10 focus group discussions were conducted in this descriptive qualitative study to explore the existing preconception care services from the perspectives of community members (women and men in the reproductive age group), community and religious leaders, health care professionals as well as policy makers. Thematic analysis was carried out using MAXQDA 2018. Results: Participants stated that there are no defined preconception care services in the health care system nor are there any structures or guidelines for preconception care in the country. Preconception care services are however provided when health workers perceive a need or when clients demand for it. The services provided include health information, education and counselling, treatment modification, medical check-up and screening. Outside of the health system, there are some traditional, religious and other practices with similar bearing to preconception care which the participants believed could be included as preconception care services. These include premarital counselling services by religious bodies, family life and HIV education within the secondary school system and some screening and outreach services provided by non-governmental and some governmental agencies. Conclusion: There is a need to provide structure and guidelines for preconception care services in the country so that the services can be properly streamlined. This structure can also involve practices that are currently not within the health system. abstract_id: PUBMED:24552853 Preconception care in France. Objective: Preconception care is advocated throughout the world as a tool for improving perinatal outcomes. However, the proportion of women in France who attend a dedicated medical preconception visit is unknown. Methods: We undertook a retrospective study among 401 women who delivered at a maternity clinic or hospital in France to determine how many of them had attended a preconception visit. We also collected information on various preconception care behaviors such as folate supplementation and alcohol or tobacco cessation. Results: In total, 21.6% of the women took a folate prescription, and 91.3% and 68.6% of women stopped alcohol and tobacco consumption, respectively. Of the 80.2% of women who visited a doctor within the 6 months before conception, only 13.8% discussed their planned pregnancy at this visit. Conclusion: Although the rate of vitamin supplementation remains low, there has been an increase in adequate preconception care behavior. This indicates that improvement is possible and should be implemented. abstract_id: PUBMED:27423236 Barriers in the Uptake and Delivery of Preconception Care: Exploring the Views of Care Providers. Objectives To examine health care professionals' views of their role and responsibilities in providing preconception care and identify barriers that affect the delivery and uptake of preconception care. Methods Twenty health care professionals who provide preconception care on a regular basis were interviewed using semi-structured interviews. Results We interviewed twelve community midwives, three General Practitioners, three obstetricians, one cardiologist specialized in congenital heart diseases and one gastroenterologist.We identified four barriers affecting the uptake and delivery of preconception care (PCC): (1) lack of a comprehensive preconception care program; (2) limited awareness of most future parents about the benefits of preconception care, hesitance of GP's about the necessity and effectiveness of PCC; (3) poor coordination and organization of preconception care; (4) conflicting views of health care professionals on pregnancy, reproductive autonomy of patients and professional responsibility. Conclusion We have identified four barriers in the uptake and delivery of preconception care. Our findings support the timely implementation of a comprehensive program of PCC (already advocated by the Health Council of the Netherlands) and increasing awareness and knowledge of PCC from care providers and future parents. We emphasize the need for further research on how organizational barriers lead to suboptimal PCC and how interdisciplinary collaboration and referral can lead to optimally tailored intervention approaches. abstract_id: PUBMED:36339191 Nationwide-free preconception care strategy: Experience from China. Preconception care has emerged as a developing field in maternal and child healthcare worldwide. This care type provides couples of reproductive age with the opportunity for early detection and management of biomedical, behavioral, and social health problems. In 2010, the Chinese government launched a nationwide preconception care program as a welfare project. During the past decade, this project has received international attention, and experiences from the project have been published in the literature. In this review, we summarize the history, implementation, and evaluation of preconception care services in China, and its related maternal and children's health service initiatives, to thereby provide knowledge for policymakers and clinicians in other countries. Answer: Preconception care is a preventive approach aimed at optimizing the health status of individuals planning to conceive, with the goal of improving pregnancy outcomes and the health of offspring. The evidence suggests that preconception care can have positive effects on maternal and neonatal outcomes. A case control study showed that pregnant women who attended preconception care were more likely to have received adequate peri-conceptual folate, be vaccinated against influenza and hepatitis B, consult with a specialist to optimize pre-existing health conditions, and report less weight gain up until booking. Additionally, preterm birth and hypertensive disorders of pregnancy were less common among women who had attended preconception care, with trends towards decreased incidence of gestational diabetes, large for gestational age (LGA) infants, and fetal anomalies (PUBMED:25132235). Preconception care is also important for men, as it can improve family planning and pregnancy outcomes, enhance the reproductive health and health behaviors of their female partners, and prepare men for fatherhood. However, there is currently no consensus on how preconception care should be delivered to men, and more research is needed on the content and effective implementation of such care (PUBMED:19081435). The CDC/ATSDR Preconception Care Work Group and the Select Panel on Preconception Care have provided recommendations to improve preconception health and care in the United States, with the aim of improving the knowledge, attitudes, and behaviors of men and women related to preconception health, ensuring that all women of childbearing age receive preconception care services, reducing risks indicated by a previous adverse pregnancy outcome, and reducing disparities in adverse pregnancy outcomes (PUBMED:16617292). In the Netherlands, preconception care mostly consists of individual consultations with personalized health and lifestyle advice, but the identification of risk factors and the availability of protocols vary between birth care professionals. There is also a low demand for preconception care consultations among parents-to-be, indicating a need for increased awareness and standardized protocols (PUBMED:37890235). In summary, the available evidence suggests that preconception care can be effective in improving maternal and neonatal health outcomes. However, there is a need for more structured and coordinated approaches, increased awareness among potential parents, and standardized protocols to ensure consistent delivery of preconception care services.
Instruction: The solitary pulmonary nodule on chest radiography: can we really tell if the nodule is calcified? Abstracts: abstract_id: PUBMED:11133566 The solitary pulmonary nodule on chest radiography: can we really tell if the nodule is calcified? Objective: This study was designed to assess the ability of radiologists to accurately detect calcification within a solitary pulmonary nodule with chest radiography. Materials And Methods: Thirty-five solitary pulmonary nodules that were examined by both posteroanterior and lateral chest radiography and on thin-section CT were retrospectively identified. Fourteen radiologists blinded to the results of CT assessed the nodules for the presence or absence of calcification using chest radiographs alone. The radiologists then assigned one of six values on the basis of their confidence in that assessment. The accuracy and confidence values for each nodule were analyzed on the basis of the presence or absence of calcification as seen on CT. Receiver operating characteristic (ROC) curves were generated. Results: The positive predictive value of a "definitely calcified" assessment was 0.93. Combining all levels of radiologists' confidence, the sensitivity of the chest radiograph in the detection of calcium was 0.50 and the specificity was 0.87. There was no difference in the confidence levels reported between the calcified and noncalcified nodules, and there was no correlation of nodule size with accuracy or confidence level. Conclusion: The ability of radiologists to detect calcium in a solitary pulmonary nodule by chest radiography was low, as defined by the ROC data. Of the "definitely calcified" nodules, up to 7% may not be calcified and may be potentially malignant. Without documentation of long-term stability, a low threshold for recommending CT may be appropriate. abstract_id: PUBMED:37780816 An incidental finding of lung nodule in patient presenting with posterior chest wall trauma: A case report. Lung nodule is considered a common incidental finding in chest CT evaluation. Most cases are benign and derived from a previous infection. However, here we present a case where a patient presenting to the hospital with posterior chest wall trauma had a solitary pulmonary nodule that is consistent with non-small cell lung cancer (NSCLC). We would like to emphasize the importance of evaluation of incidental findings in chest CT evaluation, in this case, trauma cases by presenting this case. An 81-year-old male was referred to the emergency department with altered consciousness and a history of posterior chest trauma. He complained of a productive cough, but no other symptoms were reported. He had a medical history of Alzheimer's dementia, COVID-19, and hypertension. Physical examination revealed a hematoma on the right side of his back and reduced breath sound. Anteroposterior radiograph showed multiple right posterior rib fractures with right pleural effusion. Contrast-enhanced chest CT revealed a low-density solitary pulmonary nodule with multiple lymphadenopathies, with the presumptive diagnosis of incidentaloma. It was confirmed by pleural analysis with the increase of Cyfra 21-1 tumor marker and consistent with non-small cell lung cancer. In CT studies, incidental solitary pulmonary nodules are common, occurring in 8%-51%. Nodules are often discovered incidentally in trauma patients, but they are not always reported or followed up. It is important to note that each may indicate a clinical significance and represents the potential for malignancy which requires additional workup and further evaluation. abstract_id: PUBMED:36516554 Image synthesis with disentangled attributes for chest X-ray nodule augmentation and detection. Lung nodule detection in chest X-ray (CXR) images is common to early screening of lung cancers. Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists for nodule screening in CXR images. However, it requires large-scale and diverse medical data with high-quality annotations to train such robust and accurate CADs. To alleviate the limited availability of such datasets, lung nodule synthesis methods are proposed for the sake of data augmentation. Nevertheless, previous methods lack the ability to generate nodules that are realistic with the shape/size attributes desired by the detector. To address this issue, we introduce a novel lung nodule synthesis framework in this paper, which decomposes nodule attributes into three main aspects including the shape, the size, and the texture, respectively. A GAN-based Shape Generator firstly models nodule shapes by generating diverse shape masks. The following Size Modulation then enables quantitative control on the diameters of the generated nodule shapes in pixel-level granularity. A coarse-to-fine gated convolutional Texture Generator finally synthesizes visually plausible nodule textures conditioned on the modulated shape masks. Moreover, we propose to synthesize nodule CXR images by controlling the disentangled nodule attributes for data augmentation, in order to better compensate for the nodules that are easily missed in the detection task. Our experiments demonstrate the enhanced image quality, diversity, and controllability of the proposed lung nodule synthesis framework. We also validate the effectiveness of our data augmentation strategy on greatly improving nodule detection performance. abstract_id: PUBMED:3980184 Nodule detection with and without a chest image. The detectability of nodules displayed on uniform backgrounds was compared with their detectability on chest images by using receiver operating characteristic (ROC) curves. The images were displayed using a digital television system. Two conditions, chest image and uniform background, were compared at three different levels of added random noise. The viewing conditions were made as similar as possible by setting the luminance of the uniform background equal to the luminance of the lung in the chest image. Nodule detectability was significantly lower for the chest image than for the uniform background at each level of added random noise. The anatomic structure of the chest image interferes with the structural integrity of the nodule image resulting in lower detection performance. abstract_id: PUBMED:29903641 Detection of Pulmonary Nodule Growth with Chest Tomosynthesis: A Human Observer Study Using Simulated Nodules. Rationale And Objectives: Chest tomosynthesis has been suggested as a suitable alternative to CT for follow-up of pulmonary nodules. The aim of the present study was to investigate the possibility of detecting pulmonary nodule growth using chest tomosynthesis. Materials And Methods: Simulated nodules with volumes of approximately 100 mm3 and 300 mm3 as well as additional versions with increasing volumes were created. The nodules were inserted into images from pairs of chest tomosynthesis examinations, simulating cases where the nodule had either remained stable in size or increased in size between the two imaging occasions. Nodule volume growths ranging from 11% to 252% were included. A simulated dose reduction was applied to a subset of the cases. Cases differing in terms of nodule size, dose level, and nodule position relative to the plane of image reconstruction were included. Observers rated their confidence that the nodules were stable in size or not. The rating data for the nodules that were stable in size was compared to the rating data for the nodules simulated to have increased in size using ROC analysis. Results: Area under the curve values ranging from 0.65 to 1 were found. The lowest area under the curve values were found when there was a mismatch in nodule position relative to the reconstructed image plane between the two examinations. Nodule size and dose level affected the results. Conclusion: The study indicates that chest tomosynthesis can be used to detect pulmonary nodule growth. Nodule size, dose level, and mismatch in position relative to the image reconstruction plane in the baseline and follow-up examination may affect the precision. abstract_id: PUBMED:16258894 Diagnostic approach to solitary pulmonary nodule The solitary pulmonary nodule is a common radiologic abnormality that is often detected incidentally. Many malignant and benign diseases can present as a solitary pulmonary nodule on a chest roentgenogram. It is important to differentiate malignant nodules from benign nodules in the least invasive way and to make as specific and an accurate diagnosis as possible. In this paper, a diagnostic approach to the solitary pulmonary nodule was discussed. abstract_id: PUBMED:19127894 The solitary pulmonary nodule The incidental discovery of a solitary pulmonary nodule while performing a CT scan of the chest is a very common clinical problem. The differential diagnosis is large but the main clinical challenge is to exclude or ascertain a neoplasia. The evaluation of preexisting risk factors and the analysis of morphological characteristics of the nodule allow the clinician to solve this challenge in a significant number of cases. When the nature of the lesion remains indeterminate a careful follow-up with volumetric determination is necessary for decision making. abstract_id: PUBMED:32828440 Pulmonary nodule detection on chest radiographs using balanced convolutional neural network and classic candidate detection. Computer-aided detection (CADe) systems play a crucial role in pulmonary nodule detection via chest radiographs (CXRs). A two-stage CADe scheme usually includes nodule candidate detection and false positive reduction. A pure deep learning model, such as faster region convolutional neural network (faster R-CNN), has been successfully applied for nodule candidate detection via computed tomography (CT). The model is yet to achieve a satisfactory performance in CXR, because the size of the CXR is relatively large and the nodule in CXR has been obscured by structures such as ribs. In contrast, the CNN has proved effective for false positive reduction compared to the shallow method. In this paper, we developed a CADe scheme using the balanced CNN with classic candidate detection. First, the scheme applied a multi-segment active shape model to accurately segment pulmonary parenchyma. The grayscale morphological enhancement technique was then used to improve the conspicuity of the nodule structure. Based on the nodule enhancement image, 200 nodule candidates were selected and a region of interest (ROI) was cropped for each. Nodules in CXR exhibit a large variation in density, and rib crossing and vessel tissue usually present similar features to the nodule. Compared to the original ROI image, the nodule enhancement ROI image has potential discriminative features from false positive reduction. In this study, the nodule enhancement ROI image, corresponding segmentation result, and original ROI image were encoded into a red-green-blue (RGB) color image instead of the duplicated original ROI image as input of the CNN (GoogLeNet) for false positive reduction. With the Japanese Society of Radiological Technology database, the CADe scheme achieved high performance of the published literatures (a sensitivity of 91.4 % and 97.1 %, with 2.0 false positives per image (FPs/image) and 5.0 FPs/image, respectively) for nodule cases. abstract_id: PUBMED:25794062 Lung nodule detection by microdose CT versus chest radiography (standard and dual-energy subtracted). Objective: The purpose of this study was to investigate the feasibility of microdose CT using a comparable dose as for conventional chest radiographs in two planes including dual-energy subtraction for lung nodule assessment. Materials And Methods: We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. Results: A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1%±2.2% versus 85.6%±5.6% (p=0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7%±8.1% (with bone suppression, 46.1%±8%; p=0.94); for microdose CT, nodule sensitivity was 83.6%±9% without MIP (with additional MIP, 92.5%±6%; p&lt;10(-3)). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p&lt;10(-6)). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. Conclusion: Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities. These preliminary results indicate that microdose CT has the potential to replace conventional chest radiography for lung nodule detection. abstract_id: PUBMED:3487221 Evaluation of a chest phantom for CT nodule densitometry. The characteristics of a chest phantom used for CT nodule densitometry were determined by use of a GE CT 9800 scanner (General Electric, Milwaukee, WI). The supplied reference rods were scanned in different positions within the lung fields of the phantom and with varied chest wall thicknesses. The liver/spleen inserts were added. The CT attenuation values of different-size rods and their mineral content were also tested. The size of the standard rod is the major determinant of its CT number, which varied from 1 to 83 H. The standard rods contained no measurable calcium or other mineral. Position-dependent variability in CT numbers was relatively small with the GE CT 9800 scanner. The simulated chest wall additions and liver/spleen inserts produced only small increases in CT density. We conclude that the GE CT 9800 scanner, as an example of one of the newer CT scanners, shows improved operating characteristics for pulmonary nodule densitometry. Nodule densitometry should be further evaluated with simplified phantoms. Answer: The ability of radiologists to accurately detect calcification within a solitary pulmonary nodule using chest radiography is limited. A study designed to assess this ability found that when radiologists assessed nodules for the presence or absence of calcification using chest radiographs alone, the sensitivity of the chest radiograph in the detection of calcium was only 0.50, and the specificity was 0.87. This indicates that chest radiographs are not highly reliable for detecting calcification within solitary pulmonary nodules. The positive predictive value of a "definitely calcified" assessment was 0.93, but even among these "definitely calcified" nodules, up to 7% may not actually be calcified and could potentially be malignant. The study concluded that there was no significant difference in the confidence levels reported between calcified and noncalcified nodules, and there was no correlation of nodule size with accuracy or confidence level. Therefore, without documentation of long-term stability, it may be appropriate to have a low threshold for recommending CT to further evaluate solitary pulmonary nodules (PUBMED:11133566).
Instruction: Is hypertension a major risk factor in aneurysmal subarachnoid hemorrhage? Abstracts: abstract_id: PUBMED:12212365 Is hypertension a major risk factor in aneurysmal subarachnoid hemorrhage? Objective: The role of hypertension as a major risk factor for the development and rupture of cerebral aneurysms is a subject of considerable debate. Methods: In order to substantiate or weaken the hypothesis of the atherogenic, degenerative origin of this disease, in addition to hypertension the influence of smoking and the covariates body weight, hematocrit, fibrinogen and leukocrit were examined in a hospital-based case-control and cohort study. 141 consecutive patients with subarachnoid hemorrhage (SAH) and verified cerebral aneurysms, admitted between September 1994 and August 1998, were investigated. Results: An association was found between hypertension and SAH of cerebral aneurysm in the entire patient group (P &lt; 0.0001). In the stepwise logistic regression, the exposure odds ratio (OR) for hypertension was 6.8 (CI (95%): 3.53-13.14). Smokers have a twofold higher risk, with an OR of 2.2 (CI (95%): 1.19-4.06). Age was found to have a prognostic impact on the disease (P = 0.0089). Age obviously was a confounding factor for hypertension, which was associated with the outcome (P = 0.048). Conclusion: Patients with hypertension had a nearly seven-fold higher risk of aneurysmal subarchnoid hemorrhage. This seems to substantiate the hypothesis that aneurysm is an acquired and hemodynamically induced chronic disease. abstract_id: PUBMED:9010402 Is family history an independent risk factor for stroke? Objective: To estimate the influence of family history on the occurrence of stroke. Methods: A case-control study was carried out from August 1992 to January 1994. The study population comprised 502 patients with a first stroke, aged between 20 and 70 years, who were treated at 48 affiliated hospitals. The same number of age and sex matched controls were selected from outpatients. Diagnoses were based on CT findings and clinical signs. There were 155 case-control pairs for subarachnoid haemorrhage, 158 for intracerebral haematoma, and 159 for cerebral infarction. Information about the patients and their families was obtained from a questionnaire which included the family histories of each subtype of stroke and other potential risk factors for stroke. The data were analysed focusing on the role of the family histories in the occurrence of stroke. Results: In univariate analysis, the family histories of subarachnoid haemorrhage and intracerebral haematoma were positively associated with each of the subtypes of stroke (odds ratios 11.24 for subarachnoid haemorrhage, 2.39 for intracerebral haematoma), whereas family history of cerebral infarction was not a significant risk factor for its occurrence (odds ratio 1.41). Family history of intracerebral haematoma was correlated with a personal history of hypertension and habitual alcohol consumption. After adjustment for potential risk factors (hypertension, diabetes, hyperlipidaemia, obesity, alcohol consumption, and regular smoking), family history of subarachnoid haemorrhage still remained the most powerful risk factor for subarachnoid haemorrhage, whereas family history of intracerebral haematoma no longer showed a significant association with haematoma. Conclusion: Genetic factors play a major part in the pathogenesis of subarachnoid haemorrhage, and family history of subarachnoid haemorrhage is the strongest independent risk factor for the disease. On the other hand, family history of intracerebral haematoma was not an independent risk factor for haematoma, but it might be a good predictor, which indirectly influences the pathogenesis of intracerebral haematoma via certain hereditary components such as hypertension, and even lifestyle factors such as alcohol consumption. In cerebral infarction, genetic factors play a minor part in its pathogenesis. abstract_id: PUBMED:15096722 Obesity as a risk factor for cerebrovascular disease. Obesity is widely recognized as a risk factor for coronary artery disease, but opinion is divided regarding whether it is an independent risk factor for cerebrovascular disease; even now there is no common view. In this study, the review sought to focus on a prospective study, but since obesity and non-obesity basically cannot be randomly assigned, randomized controlled trials (RCT) are nonexistent. Accordingly, a cohort study (a method of clinical study in which the obesity group is actively followed up for comparison with the non-obesity group in regard to cerebrovascular disease) was mainly conducted. For reference, retrospective case-control studies are also shown. As a result, most epidemiological surveys on the relation between simple obesity and cerebrovascular disease denied any relation. That is, obesity alone, determined only on the basis of height and weight as shown by BMI (body mass index), etc., cannot be an independent risk factor for cerebrovascular disease; obesity can become a risk factor only when accompanied by hypertension, hyperlipidemia, impaired glucose tolerance, etc. Recently, however, most papers conclude that abdominal obesity is a risk factor for cerebral infarction, provided that there are no data confirming that obesity is a risk factor for hemorrhagic cerebrovascular disease (cerebral hemorrhage and subarachnoid hemorrhage). abstract_id: PUBMED:28292035 Dyslipidaemia as a risk factor in the occurrence of stroke in Nigeria: prevalence and patterns. Introduction: Stroke is a major public health problem worldwide. Hypertension, diabetes mellitus, dyslipidaemia and smoking are some of the common modifiable risk factors in the occurrence of stroke. Therefore, this study was designed to assess the prevalence and patterns of dyslipidaemia among individuals with acute stroke. Methods: This is a retrospective descriptive cross-sectional study, carried out in the Departments of Medicine at the LAUTECH Teaching hospital, Ogbomoso and General Hospital, Orile-Agege, Lagos, South-West, Nigeria, over a 18-month period between September 2012 and February 2014. One hundred and six (106) patients with acute stroke confirmed with computed tomography (CT) brain scan were recruited. Clinical features, risk factors, lipid profiles and stroke patterns were identified. Results: Mean age was significantly higher in ischaemic stroke compared to haemorrhagic (64.08±10.87 Vs, 56.21±12.38years, p=0.001). There was slight male preponderance in both stroke types (1.3:1). Out of 106 patients, 65 (61.3%) had ischaemic stroke, 38 (35.8%) intracerebral haemorrhage and 3 (2.9%) with subarachnoid haemorrhage. Dyslipidaemia is the most frequent risk factor (85.9%), followed by hypertension (66.0%) and diabetes mellitus (15.1%). Dyslipidaemia was significantly higher in the ischaemic stroke compared to haemorrhagic. Reduced HDL-cholesterol is the most prevalent fraction of lipid abnormalities (74.5%). Conclusion: Dyslipidaemia, particularly low HDL-C, was the most frequent risk factor in our patients with stroke. Hence, prevention of dyslipidaemia as well as other risk factors is key to reducing the burden of stroke in our country. abstract_id: PUBMED:14615610 Prospective study of major and minor ST-T abnormalities and risk of stroke among Japanese. Background And Purpose: The association between minor ST-T abnormalities and stroke incidence has not been well elucidated. We sought to examine the relationship between nonspecific minor or major ST-T abnormalities and the incidence of stroke among Japanese men and women. Methods: A 15.4-year prospective study was conducted with 10 741 men and women aged 40 to 69 years in 4 Japanese communities. Electrocardiograms were taken at baseline and were read according to the Minnesota Code. The incidence of stroke was ascertained using systematic surveillance. Results: During the 15.4-year follow-up, 602 strokes (339 ischemic strokes, 129 intracerebral hemorrhages, 80 subarachnoid hemorrhages, and 54 unclassified strokes) occurred. Both men and women with major ST-T abnormalities had approximately 3-fold higher age-adjusted relative risk and 2-fold higher multivariate-adjusted relative risk of total stroke than did those without such abnormalities. Men with minor ST-T abnormalities had a 2.3-fold higher age-adjusted relative risk of total stroke, both ischemic and hemorrhagic, than did those without such abnormalities. After we adjusted further for hypertension category, the relative risk for minor ST-T abnormalities was reduced substantially but remained statistically significant: 1.8 (95% CI, 1.3 to 2.4) for total stroke, 1.9 (95%CI, 1.3 to 2.8) for ischemic stroke, and 1.7 (95% CI, 1.0 to 3.0) for hemorrhagic stroke. For women, however, there was no relation between minor ST-T abnormalities and stroke incidence. Conclusions: Minor ST-T abnormalities have predictive value for the risk of total stroke, both ischemic and hemorrhagic, among middle-aged Japanese men, as do major ST-T abnormalities for both sexes. abstract_id: PUBMED:10971020 Recent infection as a risk factor for intracerebral and subarachnoid hemorrhages. Previous infection has been shown to be a risk factor for acute cerebral ischemia. We tested the hypothesis that recent infection is also a risk factor for intracerebral hemorrhage (ICH) and subarachnoid hemorrhage (SAH). We performed a case-control study with 56 consecutive patients with ICH, 44 consecutive patients with SAH, and 56 and 44 neurological control patients, respectively. Infection within 4 weeks was associated with SAH independently of hypertension and smoking (p = 0.049). There was no significant association between infection and ICH. Recent infection, primarily upper respiratory tract infection, may be a risk factor for SAH by contributing to the formation and rupture of aneurysms. abstract_id: PUBMED:24629055 Risk factors in patients with perimesencephalic hemorrhage. Background And Purpose: Smoking and hypertension are risk factors for aneurysmal subarachnoid hemorrhage (aSAH), whilst excessive alcohol consumption is less consistently linked with aSAH. Perimesencephalic hemorrhage (PMH) is a benign subset of non-aneurysmal subarachnoid hemorrhage. The exact cause of PMH is unknown, and its risk factor profile may help to elucidate the pathogenesis. The influence of smoking, hypertension and excessive alcohol consumption on the occurrence of PMH was studied. Methods: Seventy-nine patients admitted with a PMH to the University Medical Center Utrecht were studied. As controls 574 persons were selected from five different general practices in the referral region of the University Medical Center Utrecht. All participants filled in a questionnaire about smoking habits, the presence of hypertension and alcohol consumption before their hemorrhage. Odds ratios (ORs) with corresponding 95% confidence intervals (CIs) were calculated to assess the association of risk factors and PMH, and multivariable logistic regression was used to adjust for possible confounding by age and sex. Results: Adjusted ORs for the occurrence of PMH were 1.7 (95% CI 1.0-2.8) for smoking cigarettes, cigars, pipes or any combination of these, 1.1 (95% CI 0.6-2.0) for hypertension and 1.1 (95% CI 0.5-2.1) for excessive alcohol consumption. Conclusions: Similar to aSAH, smoking is a risk factor for PMH and excessive alcohol consumption is not. In contrast to aSAH, hypertension is not a risk factor for PMH. This implies that the pathophysiological mechanisms causing PMH might be slightly different from those causing aSAH. abstract_id: PUBMED:25658785 Aneurysm diameter as a risk factor for pretreatment rebleeding: a meta-analysis. Object: Aneurysmal rerupture prior to treatment is a major cause of death and morbidity in aneurysmal subarachnoid hemorrhage. Recognizing risk factors for aneurysmal rebleeding is particularly relevant and might help to identify the aneurysms that benefit from acute treatment. It is uncertain if the size of the aneurysm is related to rebleeding. This meta-analysis was performed to evaluate whether an association could be determined between aneurysm diameter and the rebleeding rate before treatment. Potentially confounding factors such age, aneurysm location, and the presence of hypertension were also evaluated. Methods: The authors systematically searched the PubMed, Embase, and Cochrane databases up to April 3, 2013, for studies of patients with aneurysmal subarachnoid hemorrhage that reported the association between aneurysm diameter and pretreatment aneurysmal rebleeding. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria were used to evaluate study quality. Results: Seven studies, representing 2121 patients, were included in the quantitative analysis. The quality of the studies was low in 2 and very low in 5. Almost all of the studies used 10 mm as the cutoff point for size among other classes, and only one used 7 mm. An analysis was performed with this best unifiable cutoff point. Overall rebleeding occurred in 360 (17.0%) of 2121 patients (incidence range, from study to study, 8.7%-28.4%). The rate of rebleeding in small and large aneurysms was 14.0% and 23.6%, respectively. The meta-analysis of the 7 studies revealed that larger size aneurysms were at a higher risk for rebleeding (OR 2.56 [95% CI 1.62-4.06]; p = 0.00; I (2) = 60%). The sensitivity analysis did not alter the results. Five of the 7 studies reported data regarding age; 4 studies provided age-adjusted results and identified a persistent relationship between lesion size and the risk of rebleeding. The presence of hypertension was reported in two studies and was more prevalent in patients with rebleeding in one of these. Location (anterior vs posterior circulation) was reported in 5 studies, while in 4 there was no difference in the rebleeding rate. One study identified a lower risk of rebleeding associated with posterior location aneurysms. Conclusions: This meta-analysis showed that aneurysm size is an important risk factor for aneurysmal rebleeding and should be used in the clinical risk assessment of individual patients. The authors' results confirmed the current guidelines and underscored the importance of acute treatment for large ruptured aneurysms. abstract_id: PUBMED:27077917 Cholesterol as a Risk Factor for Subarachnoid Hemorrhage: A Systematic Review. Background: The role played by total cholesterol (TC) in risk for subarachnoid hemorrhage (SAH) is unclear because studies report both high and low TC each as a risk factor. We performed a systematic review to clarify associations between lipid profile and SAH. Methods: Our literature search comprised Pubmed, Scopus, and Cochrane Library databases with no language, publication year, or study type limitations. The Preferred Reporting Items for Systematic reviews and Meta-analyses (PRISMA) checklist guided our reporting. Data forms adapted from the Critical Appraisal Skills Program (CASP), and Cochrane Collaboration guidelines provided a platform for risk-of-bias evaluation. We used a random effects model to calculate pooled estimates and assessed heterogeneity with I2-statistics. Results: Of the final 21 studies reviewed, 12 were prospective and 9 retrospective. All studies assessed TC, four assessed HDL, and none LDL in risk for SAH. Heterogeneity among all, retrospective, and Asian studies was high (I2 = 79.5%, I2 = 89.0%, and I2 = 84.3%) and considerable in prospective (I2 = 46.0%). We therefore focused on qualitative analysis and found that only two studies had a low risk of bias. According to these studies high TC increases risk for SAH in men, whereas the role of HDL remained unclear. Conclusion: The low-risk-of-bias studies suggest that elevated TC levels elevate risk for SAH in men. Due to the high prevalence of hypercholesterolemia, population attributable risk (PAR) of hypercholesterolemia may exceed the PARs of smoking and hypertension in men. Apart from diabetes and obesity, the risk-factor profile of SAH seems to resemble that of other cerebrovascular diseases, at least in men. abstract_id: PUBMED:28483775 Age-Specific Vascular Risk Factor Profiles According to Stroke Subtype. Background: Ischemic and hemorrhagic stroke are increasingly recognized as heterogeneous diseases with distinct subtypes and etiologies. Information on variation in distribution of vascular risk factors according to age in stroke subtypes is limited. We investigated the prevalence of vascular risk factors in stroke subtypes in relation to age. Methods And Results: We studied a prospective multicenter university hospital-based cohort of 4033 patients. For patients with ischemic stroke caused by large artery atherosclerosis, small vessel disease, or cardioembolism and for patients with spontaneous intracerebral hemorrhage or aneurysmal subarachnoid hemorrhage, we calculated prevalences of vascular risk factors in 4 age groups: &lt;55, 55 to 65, 65 to 75, and ≥75 years, and mean differences with 95% CIs in relation to the reference age group. Patients aged &lt;55 years were significantly more often of non-white origin (in particular in spontaneous intracerebral hemorrhage and aneurysmal subarachnoid hemorrhage patients) and most often smoked (most prominent for aneurysmal subarachnoid hemorrhage patients). Patients aged &lt;55 years with ischemic stroke caused by large artery atherosclerosis or small vessel disease more often had hypertension, hyperlipidemia, and diabetes mellitus than patients with ischemic stroke of cardiac origin. Overall, the frequency of hypertension, hyperlipidemia, and diabetes mellitus increased with age among all stroke subtypes, whereas smoking decreased with age. Regardless of age, accumulation of potentially modifiable risk factors was most pronounced in patients with ischemic stroke caused by large artery atherosclerosis or small vessel disease. Conclusions: The prevalence of common cardiovascular risk factors shows different age-specific patterns among various stroke subtypes. Recognition of these patterns may guide tailored stroke prevention efforts aimed at specific risk groups. Answer: Yes, hypertension is considered a major risk factor in aneurysmal subarachnoid hemorrhage (SAH). A study found a significant association between hypertension and SAH of cerebral aneurysm, with patients with hypertension having a nearly seven-fold higher risk of aneurysmal subarachnoid hemorrhage (PUBMED:12212365). This supports the hypothesis that aneurysms are acquired and hemodynamically induced chronic diseases. Additionally, another study indicated that recent infection, particularly upper respiratory tract infection, may be a risk factor for SAH by contributing to the formation and rupture of aneurysms, independently of hypertension and smoking (PUBMED:10971020). However, it is important to note that while hypertension is a significant risk factor, other factors such as genetic predispositions, lifestyle factors, and recent infections also play a role in the pathogenesis of SAH.
Instruction: Are results of bariatric surgery different in the Middle East? Abstracts: abstract_id: PUBMED:28000089 The Peri-operative Bariatric Surgery Care in the Middle East Region. Background: Bariatric surgery is common in the Middle East region. However, regional accreditation bodies and guidelines are lacking. We present the current peri-operative practice of bariatric surgery in the Middle East region. Setting: Public and private practice in the Middle East region. Methods: A questionnaire was designed to study trends of peri-operative care in bariatric surgery. It was sent to members of the Pan Arab Society for Metabolic and Bariatric Surgery (PASMBS). Results: Ninety-three surgeons (88.6%) responded, 63.4% were in private practice, 68.5% have been in practice for more than 5 years, and 61.1% performed more than 125 cases per year. Laparoscopic sleeve gastrectomy (LSG) was the commonest procedure performed, then laparoscopic Roux-en-Y gastric bypass (LRYGB), one anastomosis gastric bypass/mini gastric bypass (OAGB/MGB), and laparoscopic adjustable gastric banding (LAGB). Pre-operatively as a routine, 65% referred patients for dietitian and (78.3%) for smoking cessation. In contrast as a routine, 22.6% referred patients to a psychologist, 30% screened for obstructive sleep apnea (OSA), and when they did, they did not use a questionnaire. For patients 50 years of age, 22% performed a screening colonoscopy and 33.7% referred patients to a cardiologist. Intra-operatively as a routine, 25.3% placed a drain and 42.2% placed urinary catheters. In contrast, 77.1% performed a leak test (82.7% as a methylene blue leak test). Post-operatively, 79.5% used chemoprophylaxis for venous thromboembolism and 89% required patients to take vitamins. In contrast, 25% prescribed ursodeoxycholic acid. Conclusion: The wide variation in the peri-operative care of bariatric surgery in the Middle East region highlights the need for regional guidelines based on international guidelines. abstract_id: PUBMED:34332910 Bariatric surgery in the Middle East and North Africa: narrative review with focus on culture-specific considerations. There is an increasing volume of bariatric surgeries in the Middle East and North Africa (MENA), but the context of bariatric surgery in the region is not fully understood. Incorporating culture-specific considerations in the provision of care to patients who undergo bariatric surgery may help to optimize outcomes after surgery. We conducted a narrative review of published research studies on bariatric surgery in the MENA region, highlighting cultural and contextual aspects relevant to the care of bariatric surgery patients who undergo surgery in this geographic area. The authors searched the following online databases: PubMed, CINAHL, Embase, and Academic Search Elite from 2010-2020 for studies conducted in 18 countries in the MENA region. This narrative review identifies cultural-specific considerations that may affect bariatric care and outcomes in 6 domains: knowledge of bariatric surgery; mental health, body image, and quality of life; influence of family; religion and lifestyle; preoperative practices; and healthcare access. Provision of culturally congruent care may help patients to achieve the best possible outcomes after bariatric surgery. Results may inform efforts to provide safe and culture-specific care in the MENA region, as well as those who migrate or seek care in other countries. More research is warranted on this heterogeneous population to optimize postsurgery weight trajectory and psychosocial adjustment. abstract_id: PUBMED:30937877 First Successful Large-Scale Introduction of an Enhanced Recovery after Bariatric Surgery (ERABS) Program in the Middle East: The Results and Lessons Learned of Tawam Hospital/Johns Hopkins, a Tertiary Governmental Center in the UAE. Background: Although enhanced recovery after bariatric surgery (ERABS) has proven to be safe and cost-effective, this concept is relatively new in the Middle East. Methods: A retrospective analysis of consecutive registered cohorts of patients who underwent primary and purely laparoscopic sleeve gastrectomy (LSG) or laparoscopic Roux-en-Y gastric bypass (LRYGB) were compared before introduction of ERABS (2010-2014) and after ERABS (2015-2017) at Tawam Hospital/Johns Hopkins, the UAE. Results: A total of 462 eligible bariatric patients (LSG 414 and LRYGB 48) were operated on before and 1602 (LSG 1436 and LRYGB 166) after introduction of the ERABS. Significant improvements of mean patient time of the patient being within the OR for LSG (from 2:27 to 1:23 min, p = 0.000) and LRYGB (from 3:17 to 1:59 min, p = 0.000) were achieved when comparing pre-ERABS with after introduction of ERABS. Furthermore, there was a significant decrease in LOS in both LSG (from 3.2 to 1.5 days, p = 0.000) and in LRYGB (from 3.5 to 1.7 days, p = 0.000). Major (CD classification III-IV) complications decreased significantly in LSG (from 13.8 to 0.8%, p = 0.000) and were similar in LRYGB (from 4.2% to 3.0%, p = NS). The readmission rate for LSG (from 2.9 to 2.6%, p = NS) or LRYGB (from 0 to 4.8%, p = NS) and the reoperation rates after LSG (from 0.7 to 0.5%, p = NS) and LRYGB (from 0 to 2.4%, p = NS) did not differ between both groups following introduction of ERABS. Conclusions: Implementation of a standardized ERABS program in the Middle East is feasible and safe and leads to reduced LOS and OR times. abstract_id: PUBMED:23619317 Are results of bariatric surgery different in the Middle East? Early experience of an international bariatric surgery program and an ACS NSQIP outcomes comparison. Background: Bariatric operations performed at the Bariatric and Metabolic Institute Abu Dhabi are submitted randomly from the entire surgery volume at Sheikh Khalifa Medical City to the American College of Surgeons (ACS) NSQIP. Our aim is to report our early experience and compare our bariatric surgery outcomes with ACS NSQIP hospitals of similar size. Study Design: We queried the ACS NSQIP database for bariatric surgery codes between August 2009 and August 2012 for hospitals with &gt;500 beds. Statistical analysis was performed (p &lt; 0.05). Results: We performed 275 bariatric operations compared with a total of 29,715 at other NSQIP hospitals. The ACS NSQIP bariatric surgery cohort at the Bariatric and Metabolic Institute Abu Dhabi represents 275 of 312 (89.3%) of our entire bariatric surgery volume. Our patients were statistically significantly younger (mean age 36 vs 44.8 years), healthier (American Society of Anesthesiologists scores 1 to 2 in 78.6% vs 35.7%), and heavier (body mass index 47.4 vs 45.5). In addition, we had fewer diabetic (18.5% vs 27.3%) and hypertensive (21.1% vs 52.2%) patients. We performed more Roux-en-Y gastric bypass (69.8% vs 54.5%) and sleeve gastrectomy (24.8% vs 17.2%) and fewer laparoscopic adjustable gastric banding (0.8% vs 22.7%). Outcomes were similar with regard to rates of reoperation, wounds, urinary tract infection, bleeding, thromboembolic, respiratory, and overall complications. We had lower septic, cardiac, and renal failure complications; lower mortality, and longer hospital stay by 0.4 days. We achieved 94.9% 30-day follow-up compared with 90.7% at other ACS NSQIP hospitals. Conclusions: This is the first report comparing outcomes of an international bariatric surgery program (Bariatric and Metabolic Institute Abu Dhabi) with ACS NSQIP bariatric surgery programs. Our outcomes are equivalent to ACS NSQIP bariatric surgery programs. abstract_id: PUBMED:32661957 The Effect of Fasting during Ramadan on Outcomes after Bariatric Surgery at an Academic Medical Center in the Middle East. Introduction: Ramadan is characterized by a month of fasting between dawn and sunset. Cleveland Clinic Abu Dhabi is located in the United Arab Emirates, where the majority of our patients observe Ramadan. In many practices, bariatric surgery has been delayed until after Ramadan in order to avoid postoperative complications. However, there are no studies that evaluate this theory. Objectives: Our study aimed to evaluate perioperative outcomes between bariatric surgeries performed before/during Ramadan and at times distant to Ramadan. Methods: All patients who underwent primary bariatric surgery at Cleveland Clinic Abu Dhabi between September 2015 and July 2019 were retrospectively reviewed. The Pre-Ramadan (PR) cohort included patients that underwent surgery during the month of Ramadan and up to 3 months before Ramadan. The Non-Ramadan (NR) cohort included patients from the remainder 8 months. Results: Our study included 542 patients: 185 patients in the PR cohort and 357 patients in the NR cohort. Baseline characteristics were similar. There were no statistically significant differences in perioperative outcomes, including emergency department (ED) visits (23.2% PR vs. 24.4% NR, p = .77), readmission rates (2.2% PR vs. 4.5% NR, p = .17), reoperation (1.1% PR vs. 3.6% NR, p = .09), and complications within 30 days (3.8% PR vs. 7.0% NR, p = .13). Conclusions: There is no difference among patients who undergo surgery before/during Ramadan and at times distant to Ramadan. Based on this data, there does not appear to be an increased risk of performing bariatric surgery prior to or during Ramadan in a comprehensive, multi-disciplinary program. abstract_id: PUBMED:28070477 The longitudinal trajectory of post-surgical % total weight loss among middle-aged women who had undergone bariatric surgery. Middle-aged women are at a higher risk of being obese. We examined the trajectory of post-surgical % total weight loss (%TWL) among middle-aged female bariatric patients. We fitted sequential generalized estimating equations models to analyze a sample of women who received bariatric surgery in 1995-2012, aged 40-65 years at the time of surgery (N = 158,292) whose pre-operative body mass index (BMI) was ≥ 30 kg/m2 in the Bariatric Outcomes Longitudinal Database. The %TWL computed by 100% × (pre-surgery BMI - post-surgery BMI) / pre-surgery BMI showed different trajectories depending on type of surgery. For gastric banding, %TWL increased rapidly right after bariatric surgery and started to decrease around 1 year after surgery. For Roux-en-Y gastric bypass (RYGB) and sleeve gastrectomy, %TWL overall did not show remarkable changes from around 1 year after surgery. The highest increase in %TWL was observed in patients whose pre-operative BMI was 40 or higher and those who had undergone RYGB (ps &lt; 0.001). Whereas the trajectories of %TWL among patients with sleeve gastrectomy and gastric banding did not differ much between different pre-operative BMI groups, the trajectories for RYGB were notably different between different pre-operative BMI groups (ps &lt; 0.001). Middle-aged female bariatric patients are likely to achieve the highest %TWL if they receive RYGB and if their pre-operative BMI is 40 or higher. Further research is warranted to corroborate the present study's finding on the long-term effect of different types of bariatric surgery on %TWL among middle-aged women. abstract_id: PUBMED:35765304 Resuming Bariatric Surgery Procedures During COVID-19 Pandemic: Early Surgical Outcomes at a Tertiary Referral Center in the Middle East. Background: In response to the COVID-19 pandemic, elective surgical procedures have been delayed. Even with the implementation of surgical societies' recommendations, patient safety remains a concern. This study evaluates the postoperative outcomes in patients undergoing bariatric surgery after reopening (RO) elective surgery during the COVID-19 pandemic. Methods: All patients who underwent bariatric surgery from September 2015 to July 2020 were included. Patients were divided into two cohorts: the pre-COVID-19 (PC) cohort and the RO cohort. Propensity score weighting was used to evaluate postoperative outcomes. Results: Our study included 1076 patients, 1015 patients were in the PC and 61 patients in the RO. Sixty-four percent were female with a mean age of 37 years and median body mass index of 41 kg/m2. There were no statistically significant differences in 30 days perioperative outcomes, including emergency department visits 24.8% PC versus 19.7% RO (p = 0.492), readmission 4.2% PC versus 8.2% RO (p = 0.361), reoperation 2.6% PC versus 0% RO (p = 0.996), and major complications 4.0% PC versus 4.9% RO (p = 0.812). No patients in the RO contracted COVID-19. Conclusions: With the appropriate policies and precautionary measures, there appear to be no differences in the 30-day postoperative outcomes before and during the COVID-19 pandemic. abstract_id: PUBMED:35029061 East Asian perspectives in metabolic and bariatric surgery. The prevalence of diabetes and obesity continues to rise in East Asia. As the risk of diabetes increases at a lower body mass index (BMI) in East Asians than in Europeans, the threshold of BMI values for metabolic and bariatric surgery (MBS) is lower in East Asians. MBS is considered upon reaching a BMI of 27.5 kg/m2 and is recommended at a BMI of ≥ 32.5 kg/m2 , depending on the status of glucose homeostasis. The most commonly performed MBS in East Asia is sleeve gastrectomy, followed by Roux-en-Y gastric bypass (RYGB). Because the incidence of gastric cancer is higher in East Asia than in other regions, concerns regarding surveillance for gastric cancer might be related to a preference for sleeve gastrectomy over RYGB in this region. Even though there is a paucity of data on direct comparisons of the efficacy of MBS among different ethnic groups, the degree of weight reduction in East Asians is not inferior to other ethnic groups. Moreover, studies suggest that the diabetes remission rate in East Asians seemed to be higher than in other ethnic groups. Future studies involving multiethnic groups are necessary to identify possible ethnic differences in diabetes remission and to determine the appropriate BMI threshold for MBS according to ethnicity. abstract_id: PUBMED:25906817 History and current status of bariatric and metabolic surgeries in East Asia. Morbid obesity and obesity-related comorbidities are major public health problems around the world, including in East Asia. Although Asian populations have been shown to have an elevated risk of type 2 diabetes mellitus, hypertension, and hyperlipidemia at a relatively lower BMI than Western populations, the Asian perspective on bariatric and metabolic surgeries has not been actively discussed until recently. Bariatric surgery in East Asia has undergone a paradigm shift since the first laparoscopic vertical bariatric surgery was performed in Taiwan in 1998, which was subsequently followed by the introduction of laparoscopic adjustable gastric banding, Roux-en-Y gastric bypass, and mini-gastric bypass. Since 2006, use of laparoscopic sleeve gastrectomy has remarkably increased because of its ease, speed, and safety. With regard to metabolic surgery, a randomized controlled trial in Taiwan demonstrated that laparoscopic Roux-en-Y gastric bypass is more effective than laparoscopic sleeve gastrectomy for surgical treatment of poorly controlled type 2 diabetes mellitus and metabolic syndrome. In this article, we describe the history of bariatric procedures, revision surgeries, weight-loss effect, and metabolic surgeries in Asian populations, and we also discuss minimally invasive bariatric surgery with some important citations. abstract_id: PUBMED:33686579 Preoperative Nutritional Deficiencies in Bariatric Surgery Candidates in Korea. Background: All bariatric surgical procedures may compromise the nutritional status of patients, but nutritional deficiencies vary by region and culture. However, there are no preoperative nutritional guidelines for bariatric patients in East Asia. Here, we aimed to evaluate the preoperative nutritional status of East Asian bariatric surgical patients. Methods: We performed a retrospective cohort study of 215 consecutive patients who underwent bariatric surgery between January and December 2019 at a single tertiary institution in Korea. Medical background evaluation, anthropometric measurements, and laboratory tests were performed before surgery. Results: Vitamin D deficiency was identified in 80.0% of participants and 13.8% had insufficiency. The prevalence of vitamin D deficiency or insufficiency and the mean vitamin D concentration did not significantly differ between the sexes. Vitamin B1 (thiamine) deficiency was the second most common deficiency (18.3%), followed by folate (14.2%), iron (11.8%), and zinc (7.6%) deficiencies. The prevalence of anemia did not significantly differ between the sexes (1.3% in men vs. 7.4% in women, p=0.10), but the prevalence of iron deficiency was significantly higher in women (1.3% vs. 17.9%, p&lt;0.001). The prevalences of copper (2.3%) and selenium (3.2%) deficiencies were low, and none of the participants had vitamin B12 or magnesium deficiency. Conclusion: There were high prevalences of vitamin D, folate, vitamin B1, and iron deficiencies in bariatric patients in Korea. Nutritional deficiencies should be corrected prior to surgery to prevent subsequent further depletion. Routine analysis of vitamin B12, magnesium, copper, and selenium before surgery should be considered and studied in more detail. Answer: The results of bariatric surgery in the Middle East do not appear to be significantly different from those in other regions, according to the available literature. A study comparing the outcomes of an international bariatric surgery program in Abu Dhabi with the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) found that the outcomes were equivalent to those of ACS NSQIP bariatric surgery programs. The patients in Abu Dhabi were younger, healthier, and heavier, with fewer diabetic and hypertensive patients, but the rates of reoperation, wounds, urinary tract infection, bleeding, thromboembolic, respiratory, and overall complications were similar. The study also reported lower septic, cardiac, and renal failure complications, lower mortality, and a slightly longer hospital stay by 0.4 days (PUBMED:23619317). Additionally, the implementation of an Enhanced Recovery after Bariatric Surgery (ERABS) program in the Middle East, specifically at Tawam Hospital/Johns Hopkins in the UAE, showed that it is feasible and safe, leading to reduced length of stay (LOS) and operating room (OR) times. Major complications decreased significantly for laparoscopic sleeve gastrectomy (LSG) and were similar for laparoscopic Roux-en-Y gastric bypass (LRYGB) after the introduction of ERABS (PUBMED:30937877). Furthermore, a study conducted at Cleveland Clinic Abu Dhabi found no difference in perioperative outcomes between bariatric surgeries performed before/during Ramadan and at times distant to Ramadan, suggesting that the timing of surgery in relation to the fasting month does not increase the risk of complications (PUBMED:32661957). However, it is important to note that there is a wide variation in peri-operative care of bariatric surgery in the Middle East region, which highlights the need for regional guidelines based on international guidelines (PUBMED:28000089). Cultural-specific considerations may also affect bariatric care and outcomes in the Middle East and North Africa (MENA) region, and incorporating these considerations may help optimize outcomes after surgery (PUBMED:34332910). In summary, while there may be some regional variations in practice and patient demographics, the results of bariatric surgery in the Middle East are not significantly different from those reported in other regions, and efforts are being made to standardize and optimize care in accordance with international best practices.
Instruction: Do lipids and apolipoproteins predict coronary heart disease under statin and fibrate therapy in the primary prevention setting in community-dwelling elderly subjects? Abstracts: abstract_id: PUBMED:21129746 Do lipids and apolipoproteins predict coronary heart disease under statin and fibrate therapy in the primary prevention setting in community-dwelling elderly subjects? The 3C Study. Purpose: To evaluate associations of standard lipids and apolipoproteins with incident coronary heart disease (CHD) in older adults according to lipid-lowering treatment (LLT) in the primary prevention setting. Methods: Within the 3C Study of men and women aged ≥ 65 years, standard lipids, apolipoproteins A-1 and B100 and hs-CRP were measured in baseline blood samples from 199 participants who developed a first CHD event over 4 years of follow-up and from 1081 subjects randomly selected from the initial cohort (case cohort study). Standardized hazard ratios (HRs) were estimated by the Cox proportional hazard model. Results: In the random sample, 75.3% were free of LLT (non-users), 11.5% received statins and 13.4% fibrates. Among the non-users, all lipid parameters were significantly associated with future CHD (n = 145) after adjustment for age, gender, study center and educational level, and their HRs were comparable. For instance, the HR for LDL-cholesterol was 1.38 (95% CI: 1.13-1.69). These associations also existed and were stronger among statin users (n = 27 CHD), as shown by an HR for LDL-cholesterol of 2.20 (95% CI: 1.27-3.81). Additional adjustment for traditional risk factors and hs-CRP marginally modified HR estimates in those receiving or not receiving statins. Among fibrate users (n = 27 CHD), significant associations were observed for triglycerides only (1.68; 95% CI = 1.04-2.72) in fully adjusted analyses. Conclusion: In older adults, standard lipids and apolipoproteins are stronger predictors of CHD in those receiving statins than in those who are not in the primary prevention setting. Under fibrate treatment, only triglycerides were independent predictors of CHD. abstract_id: PUBMED:21930719 Cardiovascular risk management in community-dwelling elderly: opportunities for prevention. Background: There is little information on the prevalence of hypertension and other modifiable cardiovascular risk factors in community-dwelling elderly in the Netherlands and the potential impact of improving antihypertensive treatment on major cardiovascular events. Design: Cross-sectional analysis of Dutch community-dwelling subjects aged 70-78 years without dementia who were included in the cluster randomized preDIVA trial (Prevention of Dementia by Intensive Vascular care). Methods: The prevalence of hypertension and other cardiovascular risk factors are described for participants with and without a history of cardiovascular disease (CVD). Projected benefits of blood pressure decrease are calculated using data from a meta-analysis and a large national registry. Results: Of 3534 subjects, more than one-third (n = 1230, 35.2%) have a history of cardiovascular disease. Overall, 63% of subjects have two or more cardiovascular risk factors amenable to treatment. Systolic blood pressure (SBP) is ≥160 mmHg in 37% of patients with CVD, of which 28% is untreated. In subjects without a history of CVD, 41% have a SBP ≥ 160 mmHg of which 52% is untreated. A 5-15 mmHg decrease in SBP is projected to prevent 12-32% of coronary heart disease and 16-41% of strokes, respectively. This corresponds with 14-38 prevented cases within 2 years in our intervention group (n = 1895). Conclusions: Hypertension and other cardiovascular risk factors are very common in elderly subjects. Current (primary and secondary) prevention programmes appear insufficient. Improved antihypertensive treatment has the potential to prevent a substantial proportion of strokes and coronary heart disease in this population. abstract_id: PUBMED:25591028 Statins for the primary prevention of cardiovascular disease in the elderly. Objective: The objective is to review the evidence evaluating the efficacy of statin therapy for primary prevention of cardiovascular (CV) disease in the elderly. Data Sources: A literature search of MEDLINE and PubMed (1966-January 2013) using the terms HMG-CoA reductase inhibitor, statin, primary prevention, elderly, and geriatrics was performed. The search was limited to clinical trials, meta-analyses, and subanalyses, including primary prevention patients. Bibliographies of selected articles were examined to identify additional clinical trials. Study Selection: Fourteen clinical trials, subanalyses, and meta-analyses were reviewed. A total of seven clinical trials and subanalyses evaluating statin therapy versus placebo in the elderly primary prevention patients with a primary endpoint of hard coronary heart disease were included. Data Extraction: Data collected from the clinical trials and subanalyses included number of elderly patients randomized, therapy, duration of follow-up, and the incidence of coronary events. Data Synthesis: The average annual rates of first CV event increases as patients age. There is strong evidence that supports the use of statins for secondary prevention; although primary prevention, specifically in the elderly, is less defined. This paper reviews the literature specifically for primary prevention, for which the results have shown a trend toward decreased first occurrence of coronary heart disease with statin therapy in elderly patients. Conclusion: Statin therapy should be considered as a primary prevention therapy against coronary disease for elderly patients. Evidence-based clinical benefits are seen in this patient population. However, clinical judgment and consideration of comorbidities that may impact life expectancy should be assessed to determine appropriateness for individual patients. abstract_id: PUBMED:16892920 Cost effectiveness of statin therapy for the primary prevention of coronary heart disease. Cholesterol lowering statin therapy accounted for over 10% of total drug acquisition costs (87.5 million euros) under the Community Drugs Schemes in Ireland in 2004. Not surprisingly, the issue of value for money and cost effectiveness arises. Whilst statin therapy has proven cost effective in patients with established cardiovascular disease this is the first economic evaluation of statin therapy for primary prevention in the Irish healthcare setting. Using economic modelling techniques the incremental cost effectiveness of statin therapy in high risk individuals i.e. 10 year risk exceeding 15% ranged from 17,900 euros per life year gained (LYG) to 33,800 euros/LYG under the GMS scheme. Corresponding figures for the Drug Payment scheme were 24,500 euros/ LYG to 48,500 euros/LYG. All the statins could be considered cost effective i.e. threshold below 50,000 euros/LYG however atorvastatin proved the most cost effective statin in this pharmacoeconomic study. abstract_id: PUBMED:8619505 Acute exposure to carbon monoxide does not affect plasma lipids, lipoproteins, and apolipoproteins. Study Objectives: To examine the effects of acute exposure to carbon monoxide and hypoxia on plasma lipids, lipoproteins, and apolipoproteins. Design: Random-order assignment to blinded, inhaled exposures of carbon monoxide and hypoxia. Setting: Research laboratory of ambulatory subjects. Subjects: 10 elderly, male nonsmokers with chronic stable angina. Intervention: Random-order two-hour inhaled exposure to clean air at sea level, carbon monoxide at sea level, carbon monoxide at high altitude, and clean air at high altitude. Measurements: Fasting plasma lipids, lipoproteins, and apolipoproteins before and after exposures. Results: No differences were noted between fasting plasma lipid, lipoprotein, or apolipoprotein levels before and after exposures. Conclusion: Acute exposure to carbon monoxide and high altitude does not affect fasting plasma lipid, lipoprotein, or apolipoprotein levels. abstract_id: PUBMED:11256844 Statin therapy: where are we? Where do we go next? Statin therapy reduces coronary artery disease morbidity and mortality in primary and secondary prevention trials including patients with elevated and average cholesterol levels. The association between reduction of total or low-density lipoprotein cholesterol and preventive benefit is well established. However, additional risk factors for coronary artery disease need to be incorporated into risk assessment to provide an accurate measure of global risk for use in lifestyle intervention and drug therapy guidelines. Assessment of outcomes in the Air Force/Texas Coronary Atherosclerosis Prevention Study primary prevention trial, which involved patients with average cholesterol levels and reduced high-density lipoprotein cholesterol (HDL-C), suggests the importance of on-treatment values of apolipoproteins B and A-I in predicting first major events in such a population. Other data, including trials of fibrate therapy showing reduction in coronary artery disease events, support the importance of triglycerides and HDL-C in coronary artery disease risk. Challenges for future treatment guidelines include incorporation of emerging and novel risk factors into risk assessment, refinement of global risk measurement, and simplification for application to clinical practice. abstract_id: PUBMED:25752015 Statin treatment in primary and secondary prevention--a statement Although lipid-lowering therapy in patients with established coronary heart disease (secondary prevention) is generally accepted, its benefit is often questioned in asympto- matic patients. The ongoing debate about the usefulness of statin therapy has disturbed many patients, especially in the French- and Italian-speaking parts of Switzerland, which lead too often to treatment discontinuation, even in patients who would benefit the most from it. In the primary prevention, the reduction in LDL cholesterol levels with statins decreases the risk for cardiovascular events. The higher the baseline risk, the greater the benefits in terms of absolute risk reduction; hence, using a scoring tool to evaluate the cardiovascular risk is needed. For patients at low risk, lifestyle interventions are preferable. abstract_id: PUBMED:28223407 Polygenic Risk Score Identifies Subgroup With Higher Burden of Atherosclerosis and Greater Relative Benefit From Statin Therapy in the Primary Prevention Setting. Background: Relative risk reduction with statin therapy has been consistent across nearly all subgroups studied to date. However, in analyses of 2 randomized controlled primary prevention trials (ASCOT [Anglo-Scandinavian Cardiac Outcomes Trial-Lipid-Lowering Arm] and JUPITER [Justification for the Use of Statins in Prevention: An Intervention Trial Evaluating Rosuvastatin]), statin therapy led to a greater relative risk reduction among a subgroup at high genetic risk. Here, we aimed to confirm this observation in a third primary prevention randomized controlled trial. In addition, we assessed whether those at high genetic risk had a greater burden of subclinical coronary atherosclerosis. Methods: We studied participants from a randomized controlled trial of primary prevention with statin therapy (WOSCOPS [West of Scotland Coronary Prevention Study]; n=4910) and 2 observational cohort studies (CARDIA [Coronary Artery Risk Development in Young Adults] and BioImage; n=1154 and 4392, respectively). For each participant, we calculated a polygenic risk score derived from up to 57 common DNA sequence variants previously associated with coronary heart disease. We compared the relative efficacy of statin therapy in those at high genetic risk (top quintile of polygenic risk score) versus all others (WOSCOPS), as well as the association between the polygenic risk score and coronary artery calcification (CARDIA) and carotid artery plaque burden (BioImage). Results: Among WOSCOPS trial participants at high genetic risk, statin therapy was associated with a relative risk reduction of 44% (95% confidence interval [CI], 22-60; P&lt;0.001), whereas in all others, the relative risk reduction was 24% (95% CI, 8-37; P=0.004) despite similar low-density lipoprotein cholesterol lowering. In a study-level meta-analysis across the WOSCOPS, ASCOT, and JUPITER primary prevention, relative risk reduction in those at high genetic risk was 46% versus 26% in all others (P for heterogeneity=0.05). Across all 3 studies, the absolute risk reduction with statin therapy was 3.6% (95% CI, 2.0-5.1) among those in the high genetic risk group and 1.3% (95% CI, 0.6-1.9) in all others. Each 1-SD increase in the polygenic risk score was associated with 1.32-fold (95% CI, 1.04-1.68) greater likelihood of having coronary artery calcification and 9.7% higher (95% CI, 2.2-17.8) burden of carotid plaque. Conclusions: Those at high genetic risk have a greater burden of subclinical atherosclerosis and derive greater relative and absolute benefit from statin therapy to prevent a first coronary heart disease event. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00738725 (BioImage) and NCT00005130 (CARDIA). WOSCOPS was carried out and completed before the requirement for clinical trial registration. abstract_id: PUBMED:12641482 Statin therapy in the elderly: does it make good clinical and economic sense? HMG-CoA reductase inhibitors (statins) have been established as the dominant treatment for coronary heart disease (CHD). This dominance is based on an impressive body of clinical trial evidence showing significant benefits in primary prevention of cardiovascular events in individuals at risk for CHD and in secondary prevention of such events in patients with CHD and high or normal plasma cholesterol levels. There is, however, significant room for improvement in the treatment of CHD with respect both to drug efficacy and to the disparity between evidence-based medicine and actual clinical practice particularly in relation to treatment strategies for the elderly. Current statins fall short of requirements for 'ideal' lipid-lowering treatment in several respects; 'super' statins and other agents currently in development may satisfy more of these requirements. Moreover, available therapies are not applied optimally, because of physician nonacceptance and/or patient noncompliance; thus, the majority of patients with CHD or its risk factors still have cholesterol levels that exceed guideline targets. There is also evidence that older patients with CHD, or at high risk of CHD, are undertreated - possibly because of concerns regarding the increased likelihood of adverse events or drug interactions or doubts regarding the cost effectiveness of statin therapy in this population. This group is of particular clinical relevance, since it is showing a proportionate rapid expansion in most national populations. To address their potential healthcare needs, the ongoing Pravastatin in the Elderly at Risk (PROSPER) study is assessing the effects of pravastatin in elderly patients (5804 men and women aged 70-82 years) who either have pre-existing vascular disease or are at significant risk for developing it, with the central hypothesis that statin therapy (pravastatin 40 mg/day) will diminish the risk of subsequent major vascular events compared with placebo. After a 3.2-year treatment period, a primary assessment will be made of the influence of statin treatment on major cardiovascular events (a combination of CHD death, nonfatal myocardial infarction, and fatal or nonfatal stroke). Optimal deployment of the currently available agents and of newer agents (no matter how well they satisfy requirements for ideal treatment) ultimately depends on the establishment of an evidence base and may require far-reaching educational programmes that change the way risk factor management is viewed by caregivers and patients alike. abstract_id: PUBMED:3100468 Sex hormones, lipids, lipoprotein cholesterols, and apolipoproteins in normal and obese subjects: atherogenic relationships. Our aim in the current study of 20 normal controls, 28 overweight, and 26 severely overweight (obese) subjects was to assess interrelationships of obesity, endogenous estradiol (E2) and testosterone (T), and the E2/T ratio with major independent explanatory variables for coronary heart disease (CHD), including lipids, lipoproteins, and apolipoproteins. Most of the lipid and lipoprotein variables (total, high-, low-, and very-low-density lipoprotein cholesterols) as well as apolipoproteins A1, A2, and B did not vary significantly with the presence of obesity. With increasing relative ponderosity, there were, however, increasing levels of total triglycerides and VLDL triglyceride. Levels of FSH, LH, prolactin, and testosterone did not differ significantly with obesity. The obese subjects had the highest E2 and E2/T levels; overweight subjects had intermediate levels which were also significantly higher than in the controls. Using multiple regression analyses, in obese subjects increasing T was associated with increasing apo B, and increasing E2 was correlated with decreasing apo A1. Opposite relationships were found in the normal controls where increasing T and increasing Quetelet indices were associated with diminished apo B and increasing E2 was associated with increasing A1. Obesity's association with increased CHD risk may be mediated through increasing E2 and apo B and reducing apo A1. Since obese subjects have higher E2 levels and often have lower T, they are likely to have a pattern of endogenous sex hormones (higher E2, lower T, higher E2/T ratios) similar to those observed in young men with premature myocardial infarction. Answer: Yes, lipids and apolipoproteins do predict coronary heart disease (CHD) under statin and fibrate therapy in the primary prevention setting in community-dwelling elderly subjects. According to the 3C Study, standard lipids and apolipoproteins were found to be stronger predictors of CHD in those receiving statins than in those who were not receiving any lipid-lowering treatment (LLT). Specifically, among statin users, the hazard ratio (HR) for LDL-cholesterol predicting future CHD was 2.20, indicating a strong association. In contrast, among fibrate users, significant associations were observed for triglycerides only, with an HR of 1.68 in fully adjusted analyses (PUBMED:21129746). This suggests that while both statins and fibrates are associated with a reduction in the risk of CHD, the predictive value of specific lipids and apolipoproteins may differ depending on the type of LLT being used.
Instruction: Terrible triad injuries of the elbow: does the coronoid always need to be fixed? Abstracts: abstract_id: PUBMED:33138199 Does the Coronoid Always Need to Be Fixed in Terrible Triad Injuries of the Elbow? Mid-Term Postoperative Outcomes Following a Standardized Protocol. The purpose of this study was to investigate mid-term outcomes and complications after operative treatment according to a standardized protocol for terrible triad injuries. Twenty-four patients that were treated by a single surgeon with a standardized surgical protocol were retrospectively reviewed. After the complete reconstruction of radial head and/or lateral collateral ligament (LCL) complex through a lateral approach, coronoid process, and/or medial collateral ligament (MCL) complex through a medial approach were fixed if the elbow is unstable. For coronoid fractures, only type III were fixed in four cases (16.7%). Twenty-two LCL (91.7%) and five MCL (20.8%) complexes were repaired. At the final follow-up, the mean MEPS and Quick-DASH score were 91.5 and 17.3, respectively. There was no recurrent instability after operation in all cases. This study revealed that operative treatment that was based on our standardized protocol for terrible triad injuries yielded satisfactory mid-term clinical and radiographic outcomes without any recurrent instability. These results suggest that Type I and II coronoid fractures in terrible triad injuries do not need to be fixed if the radial head and ligamentous complex are completely reconstructed. abstract_id: PUBMED:26550399 Severe injury of bilateral elbow joints with unilateral terrible triad of the elbow and unilateral suspected terrible triad of the elbow complicated with olecranon fracture: one case report. Terrible triad of the elbow is characterized as posterior dislocation of the elbow joint accompanied by the fractures of the radial head and coronoid process of the ulna, which is rarely seen in clinical practice, especially because the mild fracture is barely detected by imaging method In this study, we reported one case of serious complex bilateral elbow injury, presenting with unilateral typical terrible triad of the elbow and suspected terrible triad of the elbow complicated with olecranon fracture on the other side. Clinical experience was obtained during the diagnosis and treatment procedures. abstract_id: PUBMED:37587961 Terrible triad injuries are no longer terrible! Functional outcomes of terrible triad injuries: a scoping review. Background: The terrible triad injury (TTI) of the elbow is a combination of a posterolateral dislocation of the elbow joint combined with fractures of the radial head and coronoid process most often caused by a fall on an outstretched hand. The injury pattern was named for its poor outcomes and high complication rates following surgical repair, but increased understanding of elbow anatomy and biomechanics has led to the development of standardized surgical protocols in an attempt to improve outcomes. Most existing literature on terrible triad injuries is from small retrospective cohort studies and surgical techniques to improve outcomes. Therefore, the purpose of this scoping review is to provide an overview of the functional outcomes, prognosis, and complications following current surgical treatment of TTIs. Methods: A scoping review was performed to evaluate the literature. In total, 617 studies were identified and screened by 2 reviewers, with 43 studies included for qualitative analysis. These 43 studies underwent data extraction for functional outcomes using the Mayo Elbow Performance Score (MEPS) and Disabilities of the Arm, Shoulder, and Hand score (DASH) and were stratified accordingly. Secondary outcome measures assessed in the study were a range of motion (ROM) and complication rate. Results: The average MEPS was 90 (excellent) from a total of 37 studies with 1609 patients, and the average DASH score was 16 from 16 studies with 441 patients. Another 6 studies with a total of 127 patients reported a mean Q-DASH score of 13. A total of 39 studies consisting of 1637 patients had a mean forearm rotation of 135 degrees, and 36 studies consisting of 1606 patients had a mean flexion-extension arc of 113 degrees. Among the studies, there was a 30% complication rate with a need for revision surgery in 7.8% of cases. The most common complications were radiographic evidence of heterotopic ossification (11%) and ulnar nerve neuropathy (2.6%). Discussion/conclusions: This study shows that current surgical treatment for terrible triad injuries has resulted in improved outcomes. Based on primary outcome measures using MEPS and DASH scores, almost all of the studies have highlighted good or excellent functional outcomes. This highlighted the marked improvement in outcome scores since the term was coined, suggesting that terrible triad injuries may no longer be so terrible. abstract_id: PUBMED:29290879 Complex Elbow Dislocations and the "Terrible Triad" Injury. Background: The elbow is the second most commonly dislocated joint in adults and up to 20% of dislocations are associated with a fracture. These injuries can be categorised into groups according to their mechanism and the structures injured. Methods: This review includes a literature search of the current evidence and personal experiences of the authors in managing these injuries. Results: All injuries are initially managed with closed reduction of the ulno-humeral joint and splinting before clinical examination and radiological evaluation. Dislocations with radial head fractures should be treated by restoring stability, with treatment choice depending on the type and size of radial head fracture. Terrible triad injuries necessitate operative treatment in almost all cases. Traditionally the LCL, MCL, coronoid and radial head were reconstructed, but there is recent evidence to support repairing of the coronoid and MCL only if the elbow is unstable after reconstruction of lateral structures. Surgical treatment of terrible triad injuries carries a high risk of complications with an average reoperation rate of 22%. Varus posteromedial rotational instability fracture-dislocations have only recently been described as having the potential to cause severe long-term problems. Cadaveric studies have reinforced the need to obtain post-reduction CT scans as the size of the coronoid fragment influences the long-term stability of the elbow. Anterior dislocation with olecranon fracture has the same treatment aims as other complex dislocations with the added need to restore the extensor mechanism. Conclusion: Complex elbow dislocations are injuries with significant risk of long-term disability. There are several case-series in the literature but few studies with sufficient patient numbers to provide evidence over level IV. abstract_id: PUBMED:37969512 Terrible triad injury of the elbow: a spectrum of theories. For more than one century, understanding the injury mechanism leading to the terrible triad of the elbow (TTE) was a significant challenge for surgeons. We aimed to summarize: (1) the history of the treatment of TTE and (2) the increasing scientific knowledge that supported its evolution. Five electronic databases were searched between 1920 and 2022. Results were reported as a comprehensive review of the relevant literature. Between 1940 and 1980, surgical exploration allowed observation of complex elbow instability involving both radial head, coronoid process, and ligament(s) injuries. In 1966, Osborne introduced the concept of posterolateral rotatory instability as the first mechanism injury to explain the complex elbow instability. From 1980 to 1995, a biomechanical revolution by American pioneers critically improved our understanding of elbow instability. After 1992, a few unifying theories and surgical protocols were provided, but those have divided the surgeons' population. The formalization of the TTE treatment allowed avoiding of terrible short-term outcomes. However, post-traumatic osteoarthritis (PTOA) at long-term follow-up is still an issue. No consensual surgical protocol for the treatment of TTE has been widely accepted. While the outcomes of the TTE have been improved, the rate of PTOA at long-term follow-up is still high regardless of the treatments. The terrible triad has given way to the subtle triad with persistent microinstability of the elbow. The next challenge for elbow surgeons is to diagnose and fix this persistent subclinical instability after surgery in order to prevent the onset of PTOA. abstract_id: PUBMED:33194099 Radial Head Fixation vs Replacement in Terrible Triad: Preliminary Results of a Prospective Cohort Study with Patient Reported Outcome. Introduction: The combination of posterior elbow dislocation, radial head fracture, and coronoid fracture has been named "terrible triad" as this injury is difficult to treat and yield poor outcomes. Some studies compared the results of radial head fixation to replacement in isolated radial head fractures, very few reports have tried to compare both treatment modalities in the setting of a terrible triad injury.The aim of this study was to compare the functional outcome of radial head fixation vs replacement in terrible triad injuries of the elbow. Methods: A single-center, prospective cohort study was conducted at an academic Level 1 Trauma Center from April 2016 to April 2019.A total of 30 patients with terrible triad injury of the elbow were enrolled. The primary outcome was the Quick Disability of Arm, Shoulder and Hand (Quick-DASH) at 1 year. The secondary outcomes were to report the Mayo Elbow Performance Score (MEPS) at 1 year and elbow range of motion. Results: At final follow up for each patient (13.4 ± 1.2 months) The Quick-DASH score at 1 year for fixation group was 5 ± 2.3, for the replacement group it was 7.1 ± 7.1. The MEPS at 1 year for fixation group was 93.6 ± 8.4, for the replacement group it was 90.9 ± 9.4.Loss of elbow extension at 1 year for fixation group was 17.1° ± 10.7°, for replacement group it was 18.75° ± 11.5°. Conclusion: A systematic approach to address the bony and soft tissue components of terrible triad elbow injury is crucial to obtain satisfactory outcome. Our study shows that radial head repair and replacement, in the setting of terrible triad injury, yields comparable results. Level Of Evidence: Level II, Therapeutic study. abstract_id: PUBMED:27526068 Surgical reconstruction of comminuted coronoid fracture in terrible triad injury of the elbow. The terrible triad injury of the elbow is the combination of an elbow dislocation, a radial head fracture and a coronoid process fracture. In this study, we explored the outcome of a modified protocol for terrible triad injury of the elbow in a consecutive series of 14 patients, with a focus on reconstruction of comminuted coronoid fractures. Fourteen patients with terrible triad injuries of the elbow were retrospectively reviewed at a mean follow-up of 23 months (range, 15-30 months) and were clinically and radiographically evaluated. For comminuted coronoid fractures, autografting with resected radial head fragment or ilium fragment with cartilage surface and transosseous suture with non-absorbable suture were performed. Internal fixation of the radial head was performed in six cases and arthroplasty in five. The collateral ligaments were repaired. Mean flexion at last follow-up was 125°, ranging from 100° to 135°. Mean extension loss was 13°, ranging from 0° to 38°. Mean pronation was 70° and mean supination was 66°. No patient experienced dislocation of the radial head prosthesis. The mean Mayo Elbow Performance Score (MEPS) was 87 (range, 75-100), with six excellent cases and eight good cases. According to our intraoperative examination, no patient demonstrated unacceptable residual instability in extension following restoration of all of the osseous and ligamentous lesions. In conclusion, our protocol can achieve stable reconstruction of the coronoid process, which promotes the functional outcome of surgical treatment on terrible triad injuries of the elbow. abstract_id: PUBMED:37323987 Wide-Awake Approach for Terrible Triad Injury Fixation: A Report of Two Cases. The wide-awake local anesthesia no tourniquet (WALANT) technique has been widely used for various upper extremity surgeries, but its use for surgical fixation of terrible triad injuries has yet to be reported in the literature. Two cases of terrible triad injuries surgically treated under WALANT are presented. Coronoid screw fixation and radial head replacement were performed for the first case, and radial head fixation and coronoid suture lasso were performed for the second. Intraoperative stability during the active range of motion of both the elbows was tested after fixation. Difficulties encountered included pain near the coronoid due to its deep location causing difficulty injecting local anesthetic, and shoulder pain during the surgery due to prolonged preoperative immobilization. WALANT for terrible triad fixation is a viable alternative to general and regional anesthesia in a select number of patients with the added benefit of intraoperative elbow stability testing during active range of motion. abstract_id: PUBMED:28314869 Operative treatment of terrible triad injury of the elbow : Open reduction and internal fixation Aim Of Surgery: Reconstruction of the most important ligamentous and osseus structures of the elbow after terrible triad injury via the radial head to the lateral collateral ligament complex (LCL) and if necessary beginning at the coronoid process. The aim is a stable concentrically guided elbow with early functional follow-up treatment. The approach depends on the intraoperatively tested stability. Indications: Osteoligamentous terrible triad injury pattern with or without subluxation position following reduction and temporary immobilization. Contraindications: Inoperable due to comorbidities. Concentric elbow with radial head fracture without impairment of pronation/supination, coronoid fragment &lt;50% and stable range of motion up to 30°. Operative Technique: Lateral access according to Kaplan or Kocher in order to address the anterior capsule/coronoid tip. Stabilization of the radial head with mini fragment screws and plates or radial head prosthesis. Osseous reinsertion of the LCL at its origin with transosseous sutures/bone anchors on the radial epicondyle of the humerus. In cases of persisting instability (hanging arm test) treatment with lateral movement fixation and/or the medial collateral ligaments from medial. Follow-up Treatment: Immobilization in upper arm plaster cast in the first postoperative days, active assistive pain-adapted movement therapy in the cast from postoperative day 1 and after 6-8 weeks resistive therapy in the whole elbow. Results: Control of 15 terrible triad patients (mean age 45.9 years, range 20-87 years) after 9.6 months (range 2.6-31.6 months), extent of movement flexion/extension 131/14/0°, pronation/supination 78/0/67°. Arthrolysis after an average of 38 weeks in 4 patients, signs of joint arthrosis in 8, heterotopic ossification in 7 and neuropathic complaints in the region of the ulnar nerve in 1 patient. Early functional therapy with reproducible results by stabilization of osteoligamentous structures. abstract_id: PUBMED:30690677 Terrible triad injuries of the elbow. Terrible triad injury of the elbow is a complex injury that is classically defined as elbow dislocation along with fractures of the coronoid process of the ulna and the radial head. The injury is usually associated with typical soft-tissue disruptions (with common involvement of the lateral collateral ligament complex, elbow joint capsule, as well as the common extensor and flexor-pronator tendons) that are best understood in the context of injury mechanism as well as the role and relevance of the various elbow stabilizers. The goals of this article are to review the pertinent anatomy, mechanism of injury, classification and imaging of terrible triad injuries of the elbow with brief descriptions of treatment, and complications of this complex injury. Answer: Terrible triad injuries of the elbow, which involve a combination of elbow dislocation, radial head fracture, and coronoid process fracture, are complex and challenging to treat. The decision to fix the coronoid process in these injuries is not always straightforward. According to a study investigating mid-term outcomes and complications after operative treatment according to a standardized protocol for terrible triad injuries, it was found that Type I and II coronoid fractures in terrible triad injuries do not need to be fixed if the radial head and ligamentous complex are completely reconstructed (PUBMED:33138199). This suggests that not all coronoid fractures require fixation, particularly if stability can be achieved through other means. Another study on the surgical reconstruction of comminuted coronoid fractures in terrible triad injuries of the elbow indicated that stable reconstruction of the coronoid process promotes functional outcomes, but this study focused on comminuted fractures, which are more complex than Type I and II fractures (PUBMED:27526068). Furthermore, a review of complex elbow dislocations and the "Terrible Triad" injury mentioned that traditionally, the lateral collateral ligament (LCL), medial collateral ligament (MCL), coronoid, and radial head were reconstructed, but recent evidence supports repairing the coronoid and MCL only if the elbow is unstable after reconstruction of lateral structures (PUBMED:29290879). In summary, the current understanding suggests that the coronoid does not always need to be fixed in terrible triad injuries of the elbow. The decision to fix the coronoid process should be based on the type of fracture and the stability of the elbow after reconstruction of other structures, particularly the radial head and ligamentous complex.
Instruction: Treatment-related fractures after combined modality therapy for soft tissue sarcomas of the proximal lower extremity: Can the risk be mitigated? Abstracts: abstract_id: PUBMED:27126136 Treatment-related fractures after combined modality therapy for soft tissue sarcomas of the proximal lower extremity: Can the risk be mitigated? Purpose: The purpose of this study was to investigate the incidence of and risk factors associated with femur fracture after combined modality therapy for soft tissue sarcomas (STS) of the proximal lower extremity. Methods And Materials: We reviewed the records of 596 patients with STS of the proximal lower extremity consecutively treated with surgery and radiation therapy (RT) from 1966 to 2012. One hundred ninety-seven patients (33%) received 50 Gy to the entire femur circumference (n = 197, 33%); 265 patients (45%) received perioperative chemotherapy, and during surgery, 155 patients (26%) had bone exposure, whereas 82 patients (14%) had the periosteum stripped. The Kaplan-Meier method was used to estimate actuarial outcome rates, and both Cox regression modeling and competing risk analyses using the method of Fine and Gray were performed. Results: Median follow-up time was 110 months (range, 6-470 months). The actuarial 10-year local control and overall survival rates were 88% (95% confidence interval [CI], 84%-90%) and 62% (95% CI, 57%-66%). Twelve patients had pathologic fractures, which were associated with 50 Gy to the entire bone circumference (P &lt; .001), bone exposure (P &lt; .001), and periosteal stripping during surgery (P &lt; .001) and use of perioperative chemotherapy (P = .04). Using a competing risk model, bone exposure (P = .001; sub-hazard ratio [SHR], 9.13; 95% CI, 2.5-33.0), periosteal stripping (P &lt; .001; SHR, 13.03; 95% CI, 4.0-43.0), and perioperative chemotherapy (P = .03; SHR, 4.03; 95% CI, 1.1-14.4) were significantly associated with fracture. The actuarial 10-year fracture rate was 2% (95% CI, 1%-3%) when the 50-Gy isodose line encompassed the entire bone circumference without the other risk factors, whereas it increased to 37% (95% CI, 12%-45%) when all 4 treatment-related factors were present. Conclusions: Although femur fractures are rare, a component from each of the 3 therapeutic modalities contributes to the overall risk, and a multidisciplinary approach to mitigating fracture risk is needed. Although avoiding circumferential bone coverage with the 50-Gy isodose line may be a valuable dosimetric parameter, more rigorous dosimetric studies are required. abstract_id: PUBMED:17036354 Complications of combined modality treatment of primary lower extremity soft-tissue sarcomas. Background: Correlations between various patient, tumor, and treatment characteristics and complications in patients undergoing combined modality treatment for primary lower extremity soft-tissue sarcomas were investigated. Methods: Using the M. D. Anderson Radiation Oncology database, the records of the subset of patients treated with combined radiation and limb-sparing surgery for primary lower extremity soft-tissue sarcomas were retrospectively reviewed from the years 1960 to 2003. Results: In all, 412 patients were identified. With a median follow-up of 9.3 years, there were a total of 113 (27%) acute wound complications and 41 (13% at 20 years) chronic radiation-related limb complications. Preoperative radiation and tumor sizes &gt;5 cm were associated with an increased risk of acute wound complications (34% preoperative vs. 16% postoperative, P &lt; .001; and 31% &gt;5 cm vs. 17% &lt; or =5 cm, P = .005). At 20 years the radiation-related complication rate was higher in patients with a groin or thigh tumor location (16% vs. 4% other; P = .008), prior acute wound complications (20% vs. 10% no surgical complication), and a radiation dose &gt; or =60 grays (Gy) (18% vs. 9% for dose &lt; 60 Gy; P = .04). Five fractures occurred, resulting in a crude overall fracture rate of 1.2%. Conclusions: Patients treated with preoperative radiation for larger tumors are more likely to have acute surgical wound complications. Acute wound complications followed by postoperative radiation are associated with chronic radiation-related limb problems, as are higher radiation dose and proximal tumor location. The fracture rate is so low that prophylactic fixation is not warranted. abstract_id: PUBMED:18663942 Results of a combined therapy of patients with metastatic bone diseases In the department of bone and soft tissue tumors of Institute of oncology the authors performed a combined treatment (surgery + combined chemotherapy + bisphosphonates + radiation) in 20 patients with bone metastases. Results showed that postoperative complications were observed in 2 patients, new bone lesions in 4 patients. Those satisfactory results prove the efficacy of these treatment options. abstract_id: PUBMED:20108335 Prediction of pathologic fracture risk of the femur after combined modality treatment of soft tissue sarcoma of the thigh. Background: The objective of the current study was to formulate a scoring system to enable decision making for prophylactic stabilization of the femur after surgical resection of a soft tissue sarcoma (STS) of the thigh. Methods: A logistic regression model was developed using patient variables collected from a prospectively collected database. The study group included 22 patients who developed a radiation-related pathological fracture of the femur after surgery and radiotherapy for an STS of the thigh. The control group of 79 patients received similar treatment but did not sustain a fracture. No patients received chemotherapy. The mean follow-up was 8.6 years. The variables examined were age, gender, tumor size, radiation dose (low [50 grays (Gy)] vs high [&gt; or = 60 Gy]), extent of periosteal stripping (&lt;10 cm, 10-20 cm, and &gt;20 cm), and thigh compartment involvement (posterior, adductor, anterior or other [ie, abductors and groin]). Results: On the basis of an optimal regression model, the ability to predict radiation-associated fracture risk was 91% sensitive and 81% specific. The area under the receiver operating characteristic curve was 0.9, which supports this model as a very accurate predictor of fracture risk. Conclusions: Radiation-related fractures of the femur after combined surgery and radiotherapy for STS are uncommon, but are difficult to manage and their nonunion rate is extremely high. The results of the current study suggest that it is possible to predict radiation-associated pathological fracture risk using patient and treatment variables with high sensitivity and specificity. This would allow for the identification of high-risk patients and treatment with either close follow-up or prophylactic intramedullary nail stabilization. The presentation of this model as a nomogram will facilitate its clinical use. abstract_id: PUBMED:15687153 Fractures following radiotherapy and limb-salvage surgery for lower extremity soft-tissue sarcomas. A comparison of high-dose and low-dose radiotherapy. Background: The purpose of the present retrospective study was to determine the correlation between radiation therapy and the risk of postradiation fracture following combined therapy for the treatment of soft-tissue sarcomas of the lower extremity. Methods: Three hundred and sixty-four patients with lower extremity sarcomas that had been treated with combined external beam radiation therapy and limb-salvage surgery were evaluated on the basis of a combined chart and data-base review. For the purposes of analysis, high-dose radiation was defined as 60 or 66 Gy and low-dose radiation was defined as 50 Gy. The timing of irradiation was defined as preoperative, postoperative, or preoperative followed by a postoperative boost. Univariate and multivariate analyses were used to determine which factors were associated with fracture risk. Results: Twenty-seven fractures occurred in twenty-three patients. Twenty-four fractures occurred in twenty patients who had been managed with high-dose radiation. Seventeen of these patients had received postoperative radiation (with fifteen patients receiving 66 Gy and two receiving 60 Gy), and three had received preoperative radiation with a postoperative boost (total dose, 66 Gy). Three fractures occurred in three patients who had received preoperative, low-dose radiation (50 Gy). Of the twenty-three patients who sustained a pathologic fracture, eighteen were female and five were male. The crude median time to fracture was forty-three months. Most fractures occurred in the femoral shaft (thirteen) or the femoral neck (eight). High-dose radiation was associated with a greater risk of fracture when compared with low-dose radiation (p = 0.007). Conclusions: Women more than fifty-five years of age who are managed with removal of a thigh sarcoma combined with radiation therapy have a higher risk of pathologic fracture. The frequency of pathologic fractures associated with higher doses (60 or 66 Gy) of radiation is significantly higher than that associated with lower doses (50 Gy). abstract_id: PUBMED:35856506 Long-term experience with intraoperative electron radiation therapy boost in extremity sarcoma. Background: In patients with extremity soft tissue sarcoma (eSTS), we describe outcomes of preoperative external beam radiation therapy (EBRT), limb-sparing surgery (LSS), and intraoperative electron radiation therapy (IOERT). Methods: One hundred and eighteen patients with eSTS treated between October 17, 2002 and July 28, 2021 were identified. EBRT was delivered preoperatively followed by LSS and IOERT. Results: The median follow-up was 24.9 months. The presentation was primary in 102 (94%) patients and recurrent in 6 (6%) patients. Tumor location was lower extremity in 82 (76%) patients and upper extremity in 26 (24%) patients. Stage distribution was as follows: 3 (3%) IA, 24 (22%) IB, 31 (29%) II, 24 (22%) IIIA, and 25 (23%) IIIB. Final surgical margins were negative in 96 (89%) patients. The 5-year local control, failure-free survival, and overall survival were 94%, 75%, and 64%, respectively. Univariate analysis identified age &gt;50, lower extremity, and higher grade as significant negative prognostic factors for overall survival. Grade 3 fracture or osteoradionecrosis requiring surgical fixation, neuropathy, and lymphedema occurred in 7 (6%), 1 (1%), and 0 patients, respectively. Conclusions: This study represents one of the largest series using preoperative EBRT, LSS, and IOERT for eSTS, with high local control and a low rate of late severe toxicity. abstract_id: PUBMED:28229172 External-beam radiation therapy combined with limb-sparing surgery in elderly patients (&gt;70 years) with primary soft tissue sarcomas of the extremities : A retrospective analysis. Purpose: To report our experience with EBRT combined with limb-sparing surgery in elderly patients (&gt;70 years) with primary extremity soft tissue sarcomas (STS). Methods: Retrospectively analyzed were 35 patients (m:f 18:17, median 78 years) who all presented in primary situation without nodal/distant metastases (Charlson score 0/1 in 18 patients; ≥2 in 17 patients). Median tumor size was 10 cm, mainly located in lower limb (83%). Stage at presentation (UICC7th) was Ib:3%, 2a:20%, 2b:20%, and 3:57%. Most lesions were high grade (97%), predominantly leiomyosarcoma (26%) and undifferentiated pleomorphic/malignant fibrous histiocytoma (23%). Limb-sparing surgery was preceded (median 50 Gy) or followed (median 66 Gy) by EBRT. Results: Median follow-up was 37 months (range 1-128 months). Margins were free in 26 patients (74%) and microscopically positive in 9 (26%). Actuarial 3‑ and 5‑year local control rates were 88 and 81% (4 local recurrences). Corresponding rates for distant control, disease-specific survival, and overall survival were 57/52%, 76/60%, and 72/41%. The 30-day mortality was 0%. Severe postoperative complications were scored in 8 patients (23%). Severe acute radiation-related toxicity was observed in 2 patients (6%). Patients with Charlson score ≥2 had a significantly increased risk for severe postoperative complications and acute radiation-related side effects. Severe late toxicities were found in 7 patients (20%), including fractures in 3 (8.6%). Final limb preservation rate was 97%. Conclusion: Combination of EBRT and limb-sparing surgery is feasible in elderly patients with acceptable toxicities and encouraging but slightly inferior outcome compared to younger patients. Comorbidity correlated with postoperative complications and acute toxicities. Late fracture risk seems slightly increased. abstract_id: PUBMED:16442951 Late treatment-related complications in 214 patients with extremity soft-tissue sarcoma treated by surgery and postoperative radiation therapy. Background: We assessed the occurrence of long-bone fracture and other side effects in a group of 214 consecutive patients who underwent radical excision for soft-tissue sarcoma of the limb followed by postoperative irradiation. Methods: Two hundred fourteen patients underwent postoperative irradiation after radical excision of soft-tissue sarcoma of the limb; 156 (73%) received postoperative brachytherapy (BRT) plus external-beam radiation therapy (EBRT), and 58 (27%) underwent postoperative EBRT only. All patients were followed-up for a median time of 4.5 years (range 3 months to 10 years). Results: Seven patients developed bone fracture, which is considered severe morbidity; time between surgery and occurrence of fracture ranged between 10 and 72 months (average 31). Severe sclerosis with impairment of limb function was diagnosed in 5 and peripheral nerve damage in 3 patients. Wound complications were detected in 8 patients. Conclusions: In our series, no statistically significant correlation between bone fracture and clinical features or "technical" parameters was found, but all of the patients who experienced bone fracture (7 of 7) were postmenopausal women &gt;55 years old. abstract_id: PUBMED:32430264 Allograft Alone vs. Allograft with Intramedullary Vascularized Fibular Graft for Lower Extremity Bone Cancer: A Systematic Review and Meta-Analysis. Background: Limb salvage surgery has become a first-line treatment in the setting of lower extremity cancer. Despite a multitude of techniques, the most effective modality for osseous reconstruction has not been described. We aim to examine outcomes of allograft reconstruction alone compared to an allograft combined with a vascularized fibula graft (VFG) (Capanna Technique) through a meta-analysis of the literature. Methods: A systematic review of the literature was conducted through the online databases PubMed, Embase, and Web of Science, examining articles utilizing an allograft with or without an intramedullary VFG for the reconstruction of lower extremity osseous defects following oncological resection. A meta-analysis examined outcomes, including nonunion, infection, fractures, explantation, and functional limb status, that were evaluated dependent upon reconstructive modality. Results: Twenty-five articles were deemed appropriate for inclusion. The allograft with VFG group had considerably lower rates of nonunion (13%) in comparison to the allograft alone group (21.4%) (p &lt; .001). Rates of infection (7.9% vs. 9%, respectively) and fracture (19.6% vs. 19.1%) were not statistically different. The allograft with VFG group also had significantly lower rates of explantation (6.57%) compared to the allograft alone cohort (18.11%) (p &lt; .001). Functional outcomes were similar across groups as measured by Musculoskeletal Tumor Society scores (88.22% vs. 87.77%). Conclusions: Allograft reconstruction with a supplementary inlay VFG decreases the risk of nonunion and explantation compared to allograft alone reconstruction in the setting of oncological resection of the osseous lower extremity. This technique warrants increased consideration in the operative planning of lower limb reconstruction following tumor extirpation. abstract_id: PUBMED:21067882 Complications of bone tumors after multimodal therapy. Purpose: To define and compare the complications of bone tumors after resection, extracorporeal irradiation and re-implantation, with or without radiotherapy. Materials And Methods: Eighty patients (40 males and 40 females, ages 4-77 years) with 61 malignant and 19 benign bone tumors were evaluated for local and distant complications after treatment. Two groups of patients were studied: (1) 53 patients had resection without (43 patients) or with external beam radiotherapy (RadRx) (10 patients) and (2) 27 patients underwent extracorporeal irradiation and re-implantation without (22 patients) or with RadRx (5 patients). Patient follow-up varied from 1 month to 13.63 years with mean follow-up of 4.7 years. Imaging studies included bone and chest radiography, spin echo T1- and T2-weighted (or STIR) magnetic resonance imaging (MRI), dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), computed tomography (CT) for thoracic and abdominopelvic metastases and 3-phase technetium-99m-labeled-methylene-diphosphonate (Tc99m MDP) scintigraphy for bone metastases. Results: DCE-MRI differentiated the rapidly enhancing recurrences, residual tumors and metastases from the slowly enhancing inflammation, and the non-enhancing seromas and fibrosis. Recurrences, metastases (mainly to lung and bone), and seromas were greater than twice as frequent in patients after resection than after ECCRI. Although 11.3% of post-resection patients had residual tumor, no ECRRI-treated patient had residual tumor. In contrast, after ECRRI, infection was almost three times as frequent and aseptic loosening twice as frequent as compared with the post-resection patients. Bones treated with RadRx and/or ECRRI showed increased prevalence of fractures and osteoporosis. In addition, muscle inflammation was more common in the externally irradiated patient as compared with the patient who did not receive this therapy. However, another soft tissue complication, heterotopic ossification, was rare in the patient after RadRx, but 25.6% of patients after resection and 40.9% after ECRRI showed heterotopic ossification. Unusual complications after resection or ECRRI involved adjacent nerves with partial denervation, amputation neuroma, or entrapment (secondary to recurrence or fibrosis) after resection or ECRRI with or without RadRx. One patient developed a posterior tibial artery pseudoaneurysm after ECRRI. Conclusions: Follow-up of patients with benign and malignant bone tumors demonstrated the efficacy of DCE-MRI for distinguishing rapidly enhancing viable tumor from the slowly enhancing or non-enhancing benign processes after different therapies. Although recurrences, residual tumors, metastases and seromas were more common after resection, fractures, osteoporosis, infection, and muscular atrophy predominated in the ECRRI-treated patient. RadRx further predisposed post-resection and post-ECRRI patients to develop fractures, osteoporosis and infection and was the major cause of persistent muscle inflammation at MRI. Because complications can evolve and resolve years after treatment, the patients with bone tumors, particularly sarcomas, must receive life-time multimodal imaging for maximal diagnosis and treatment. Answer: Treatment-related fractures after combined modality therapy for soft tissue sarcomas of the proximal lower extremity are a recognized complication, but there are strategies that can potentially mitigate the risk. A study found that the risk factors associated with femur fracture after combined modality therapy included 50 Gy radiation to the entire bone circumference, bone exposure, periosteal stripping during surgery, and use of perioperative chemotherapy (PUBMED:27126136). The actuarial 10-year fracture rate was significantly higher when all four treatment-related factors were present compared to when none were present. To reduce the risk of fractures, it is suggested that a multidisciplinary approach be adopted. Avoiding circumferential bone coverage with the 50-Gy isodose line may be a valuable dosimetric parameter, although more rigorous dosimetric studies are required to confirm this (PUBMED:27126136). Additionally, the use of intraoperative electron radiation therapy (IOERT) in combination with preoperative external beam radiation therapy (EBRT) and limb-sparing surgery (LSS) has shown high local control and a low rate of late severe toxicity, including fractures (PUBMED:35856506). In elderly patients, the combination of EBRT and limb-sparing surgery is feasible with acceptable toxicities and a slightly increased late fracture risk, suggesting that comorbidity correlates with postoperative complications and acute toxicities (PUBMED:28229172). Furthermore, the use of allograft reconstruction with a supplementary inlay vascularized fibula graft (VFG) has been shown to decrease the risk of nonunion and explantation compared to allograft alone reconstruction following tumor extirpation in the lower extremity (PUBMED:32430264). In conclusion, while treatment-related fractures are a concern, careful planning and a multidisciplinary approach that includes consideration of dosimetric parameters, the use of IOERT, and the selection of appropriate reconstruction techniques can help mitigate the risk of fractures after combined modality therapy for soft tissue sarcomas of the proximal lower extremity.
Instruction: Are fitness centres part of the health care sector? Abstracts: abstract_id: PUBMED:18761875 Are fitness centres part of the health care sector? Introduction: Disease in the musculoskeletal system accounts for the largest proportion of chronic disease in Denmark, and the associated costs amount to billions of kroner every year. Prevention and treatment have focussed on exercise and training. Training in fitness centres is one of the most popular forms of exercise in Denmark and the number of users is increasing rapidly. We suspect that musculoskeletal problems are common among members of fitness centres, and that good communication between the centres and the health care sector would optimize treatment. The purpose of the present study is to describe the extent of musculoskeletal problems among members of fitness centres and the degree of communication between the centres and the health care sector. Materials And Methods: Information regarding age, sex, musculoskeletal complaints, possible treatment, and whether there had been any communication between health care providers and the fitness centres before or during the period of training was collected among members of five fitness centres in Denmark. Results: 485 (94%) out of a total of 516 members participated in the study. 56% reported that they had one or more musculoskeletal problem when joining the centre. Out of these, 77% stated that musculoskeletal problems were the main or a contributing reason for joining the centre. More than half the participants with musculoskeletal complaints had received some kind of treatment within the previous year. However, communication between health care providers and fitness centres was uncommon. Conclusion: The fitness sector is growing rapidly and more than 50% of members suffer from musculoskeletal problems. Most of these also receive treatment for their problems but there is very little and almost no formal communication between the health care sector and the fitness centres. abstract_id: PUBMED:30370564 Reforms in the Portuguese health care sector: Challenges and proposals. Portugal has one of the most complete public systems worldwide. Since 1979, the Portuguese National Health Service (NHS) was developed based on the integration and complementarity between different levels of care (primary, secondary, continued, and palliative care). However, in 2009, the absence of economic growth and the increased foreign debt led the country to a severe economic slowdown, reducing the public funding and weakening the decentralized model of health care administration. During the austerity period, political attention has focused primarily on reducing health care costs and consolidating the efficiency and sustainability with no structural reform. After the postcrisis period (since 2016), the recovery of the public health system begun. Since then, some proposals have required a reform of the health sector's governance structure based on the promotion of access, quality, and efficiency. This study presents several key issues involved in the current postcrisis reform of the Portuguese NHS response structure to citizens' needs. The article also discusses the implications of this Portuguese experience based on current reforms with impact on the future of citizens' health. abstract_id: PUBMED:8693366 Implications of the development of managed health care in the South African private health care sector. The South African private health care sector has been looking to managed health care (MHC) to control the unsustainable cost escalations of the last decade. This paper draws on experience of MHC in other countries, particularly the USA, to assess its potential for solving the private sector's difficulties. In addition, it looks at problems which may be associated with MHC in a South African setting. The conclusion reached is that MHC alone cannot be seen as a panacea for the private sector's financial problems, although it may produce a degree of saving and be part of a solution. It is argued that MHC per se seems unlikely to compromise equity, quality of care or the public health care sector, and that it may potentially promote national health policy objectives. However, if MHC's benefits are to be maximised and potential negative effects controlled, ongoing monitoring of MHC, coupled with an appropriate regulatory and incentive environment, will be required. abstract_id: PUBMED:30950425 India's health and wellness centres: realizing universal health coverage through comprehensive primary health care. In common with other countries in the World Health Organization South-East Asia Region, disease patterns in India have rapidly transitioned towards an increased burden of noncommunicable diseases. This epidemiological transition has been a major driver impelling a radical rethink of the structure of health care, especially with respect to the role, quality and capacity of primary health care. In addition to the Pradhan Mantri Jan Arogya Yojana insurance scheme, covering 40% of the poorest and most vulnerable individuals in the country for secondary and tertiary care, Ayushman Bharat is based on an ambitious programme of transforming India's 150 000 public peripheral health centres into health and wellness centres (HWCs) delivering universal, free comprehensive primary health care by the end of 2022. This transformation to facilities delivering high-quality, efficient, equitable and comprehensive care will involve paradigm shifts, not least in human resources to include a new cadre of mid-level health providers. The design of HWCs and the delivery of services build on the experiences and lessons learnt from the National Health Mission, India's flagship programme for strengthening health systems. Expanding the scope of these components to address the expanded service delivery package will require reorganization of work processes, including addressing the continuum of care across facility levels; moving from episodic pregnancy and delivery, newborn and immunization services to chronic care services; instituting screening and early treatment programmes; ensuring high-quality clinical services; and using information and communications technology for better reporting, focusing on health promotion and addressing health literacy in communities. Although there are major challenges ahead to meet these ambitious goals, it is important to capitalize on the current high level of political commitment accorded to comprehensive primary health care. abstract_id: PUBMED:27358251 Performance of private sector health care: implications for universal health coverage. Although the private sector is an important health-care provider in many low-income and middle-income countries, its role in progress towards universal health coverage varies. Studies of the performance of the private sector have focused on three main dimensions: quality, equity of access, and efficiency. The characteristics of patients, the structures of both the public and private sectors, and the regulation of the sector influence the types of health services delivered, and outcomes. Combined with characteristics of private providers-including their size, objectives, and technical competence-the interaction of these factors affects how the sector performs in different contexts. Changing the performance of the private sector will require interventions that target the sector as a whole, rather than individual providers alone. In particular, the performance of the private sector seems to be intrinsically linked to the structure and performance of the public sector, which suggests that deriving population benefit from the private health-care sector requires a regulatory response focused on the health-care sector as a whole. abstract_id: PUBMED:12836890 Reform and remembrance: the place of the private sector in the future of health care policy. Although the nation failed during the past decade to enact large-scale, structural change in government health policy, it has seen health care in the private sector remodeled dramatically during the same period. In this article I argue that a new round of equally significant changes is quite possible, this time at the hands of the national government. More specifically, I argue that for a variety of reasons, both enduring and more recently born, support for the private sector and the market in health care is relatively weak: that given likely trends in costs, demographics, and inequalities, it is likely to get even weaker; and that in the potential coming crisis of the health care system. there will be a real opportunity for seizing the agenda and winning policy battles on the part of would-be reformers pushing large-scale, public sector-oriented changes that go well beyond the recent reform efforts directed at managed care and HMOs. abstract_id: PUBMED:17540472 'Where is the public health sector?' Public and private sector healthcare provision in Madhya Pradesh, India. Objective: This paper aims to empirically demonstrate the size and composition of the private health care sector in one of India's largest provinces, Madhya Pradesh. Methodology: It is based on a field survey of all health care providers in Madhya Pradesh (60.4 million in 52,117 villages and 394 towns). Seventy-five percent of the population is rural and 37% live below poverty line. This survey was done as part of the development of a health management information system. Findings: The distribution of health care providers in the province with regard to sector of work (public/private), rural-urban location, qualification, commercial orientation and institutional set-up are described. Of the 24,807 qualified doctors mapped in the survey, 18,757 (75.6%) work in the private sector. Fifteen thousand one hundred forty-two (80%) of these private physicians work in urban areas. The 72.1% (67793) of all qualified paramedical staff work in the private sector, mostly in rural areas. Conclusion: The paper empirically demonstrates the dominant heterogeneous private health sector and the overall the disparity in healthcare provision in rural and urban areas. It argues for a new role for the public health sector, one of constructive oversight over the entire health sector (public and private) balanced with direct provision of services where necessary. It emphasizes the need to build strong public private partnerships to ensure equitable access to healthcare for all. abstract_id: PUBMED:10094054 Consolidation in the health care sector. This article discusses trends in health care sector consolidations. Attention is given to the reasons behind consolidation efforts and consolidation failures and the potential impact of consolidations on physicians, patients, and the transformation of health care delivery. abstract_id: PUBMED:37816073 Impacts of unconditional cash transfers on health care utilisation in informal sector households in Togo. Background: Most Togolese population earns their income from informal sector, and they are very often exposed to health outcomes. Cash transfers impact healthcare utilization by improving household's social capital, socio-economic status, lifestyle choice, and physical health. The aim of this paper was to analyse the impact of unconditional cash transfers on health care utilisation in informal sector households. Methods: We used the propensity-score method to compare health care utilisation by households that received cash transfers from nonbeneficiary households and simulated a potential confounder to assess the robustness of the impacts of the estimated treatment (i.e., cash transfer). Data were obtained from a national survey that covered 1405 households. Results: The results show that women benefited the most from cash transfers (73.1%). Our estimates indicate that health care utilisation increased by 28.3% among workers in the informal sectors who benefited from unconditional cash transfers compared to nonbeneficiaries. The greatest impact was found on agriculture households with an increase by 31.3% in the health care utilisation. In general, cash transfer beneficiaries are more likely to use public health centres; there was an increase in public health facility attendance of 21.3%. Conclusions: Cash transfers are a valuable social protection instrument that improve health care utilisation of populations in the informal sector. Policymakers could use cash transfer as the infusion of income and/or assets that may impact health outcomes. Cash transfers are an opportunity to alleviate barriers of access to health care by older people. Future research must examine impact of cash transfer on health of vulnerable groups such as older people, children, and people with disabilities. abstract_id: PUBMED:21835488 Going private: clinicians' experience of working in UK independent sector treatment centres. Objectives: With increased possibility that public healthcare services in the UK will be outsourced to the private sector, this study investigates how clinicians working in Independent Sector Treatment Centres perceive the differences between public and private sectors. Methods: Qualitative interviews with 35 clinicians recruited from two ISTCs. All participants were transferred to the independent sector from the public National Health Service. Interview data were analysed to identify shared experience about the variable organisation and delivery of services. Results: Clinicians perceived differences between public and independent sectors in the areas of 'environment and facilities', 'management', 'work organisation and care delivery', and 'patient experience'. The independent sector was described as offering a positive alternative to public services in regard to service environment and patient experience, but there were concerns about management priorities and the reconfiguration of work. Conclusions: Clinicians' experience of moving between sectors reveals mixed experiences. Although some improvements might legitimise the growing role of the independent sector, there remain doubts about the commercialisation of services, the motives of managers and the impact of clinical roles and capabilities. With policies looking to expand the mixed economy of public healthcare services, the study suggests clinicians will not automatically embrace a move between sectors. Answer: Based on the provided abstracts, fitness centres are not formally considered part of the health care sector, but they do play a role in the health and wellness of individuals, particularly in relation to musculoskeletal problems. The study from Denmark indicates that a significant proportion of fitness centre members suffer from musculoskeletal problems, and many join the centres with the intention of addressing these issues through exercise and training (PUBMED:18761875). However, the study also notes that there is very little formal communication between the health care sector and fitness centres, suggesting that while fitness centres contribute to health care in an informal capacity, they are not integrated into the health care system as formal health care providers. The other abstracts do not directly address the role of fitness centres in the health care sector but discuss various aspects of health care systems, reforms, and the private sector's involvement in health care delivery. For instance, the abstract discussing the Portuguese health care sector (PUBMED:30370564) and the one on India's health and wellness centres (PUBMED:30950425) focus on reforms and the structure of health care delivery in their respective countries. The abstracts on managed health care in South Africa (PUBMED:8693366), the performance of the private health care sector (PUBMED:27358251), and the role of the private sector in health care policy (PUBMED:12836890) discuss the implications of private sector involvement in health care without specific reference to fitness centres. Similarly, the abstract on health care provision in Madhya Pradesh, India (PUBMED:17540472), discusses the public and private health care sectors without mentioning fitness centres. In summary, while fitness centres may contribute to individual health and wellness, particularly in the context of musculoskeletal health, they are not formally recognized as part of the health care sector based on the information provided in the abstracts (PUBMED:18761875).
Instruction: Nondepressed linear skull fractures in children younger than 2 years: is computed tomography always necessary? Abstracts: abstract_id: PUBMED:22563059 Nondepressed linear skull fractures in children younger than 2 years: is computed tomography always necessary? Background: Current recommendations are that young children with a skull fracture following head injury undergo computed tomography (CT) examination of their head to exclude significant intracranial injury. Recent reports, however, have raised concern that radiation exposure from CT scanning may cause malignancies. Objective: To estimate the proportion of children with nondisplaced linear skull fractures who have clinically significant intracranial injury. Methods: Retrospective review of patients younger than 2 years who presented to an emergency department and received a diagnosis of skull fracture. Results: Ninety-two patients met the criteria for inclusion in the study; all had a head CT scan performed. None suffered a clinically significant intracranial injury. Conclusion: Observation, rather than CT, may be a reasonable management option for head-injured children younger than 2 years who have a nondisplaced linear skull fracture on plain radiography but no clinical signs of intracranial injury. abstract_id: PUBMED:1994944 Evaluation of minor head trauma in children younger than two years. The recent medical literature emphasizes the limitations of skull films in the evaluation of minor head trauma. However, the emergency medicine literature places little emphasis on the particular risks in children younger than 2 years old with blunt head injury. These children have immature bone and unfused sutures that may increase risk of cranial injury and delayed complications. A case is presented to illustrate this point. Unlike severe head trauma, where evaluation is directed toward computed tomography, the literature continues to be controversial regarding the indications for skull radiographs and computed tomography in minor head trauma. The authors recommend a low threshold for radiographic imaging in blunt heat injuries in children younger than 2 years. abstract_id: PUBMED:24977773 Natural history and clinical implications of nondepressed skull fracture in young children. Background: Head injury is the most common cause of neurologic disability and mortality in children. Previous studies have demonstrated that depressed skull fractures (SFs) represent approximately one quarter of all SFs in children and approximately 10% percent of hospital admissions after head injury. We hypothesized that nondepressed SFs (NDSFs) in children are not associated with adverse neurologic outcomes. Methods: Medical records were reviewed for all children 5 years or younger with SFs who presented to our Level I trauma center during a 4-year period. Data collected included patient demographics, Glasgow Coma Scale (GCS) score at admission, level of consciousness at the time of injury, type of SF (depressed SF vs. NDSF), magnitude of the SF depression, evidence of neurologic deficit, and the requirement for neurosurgical intervention. Results: We evaluated 1,546 injured young children during the study period. From this cohort, 563 had isolated head injury, and 223 of them had SF. Of the SF group, 163 (73%) had NDSFs, of whom 128 (78%) presented with a GCS score of 15. None of the NDSF patients with a GCS score of 15 required neurosurgical intervention or developed any neurologic deficit. Of the remaining 35 patients with NDSF and GCS score less than 15, 7 (20%) had a temporary neurologic deficit that resolved before discharge, 4 (11%) developed a persistent neurologic deficit, and 2 died (6%). Conclusion: Children 5 years or younger with NDSFs and a normal neurologic examination result at admission do not develop neurologic deterioration. Level Of Evidence: Epidemiological study, level III. abstract_id: PUBMED:35292456 Skull x-ray scans after minor head injury in children younger than 2 years of age. Question: A 1-year-old child presented to my clinic after falling from a couch and landing on his head. In the clinic, the child was asymptomatic and acting normally. Does this child need imaging of his head? If so, how do I decide between sending the child for a skull x-ray scan and sending him to the hospital for a computed tomography (CT) scan? Answer: The Pediatric Emergency Care Applied Research Network clinical decision rule for CT scans helps identify children younger than 2 years of age at high risk of intracranial injury. Application of the rule, which has 100% sensitivity and 53.8% specificity for clinically important traumatic brain injury in children, is the first step to assessing children with minor head injury. If a CT scan is not needed urgently, it is recommended that children younger than 2 years get a skull x-ray scan only if a skull fracture is suspected. The presence and type of skull fracture can determine if a CT scan of the head and consultation with a neurosurgeon is warranted. abstract_id: PUBMED:9894994 Head trauma in children younger than 2 years: are there predictors for complications? Objectives: To determine the incidence of skull fracture (SF) and intracranial injury (ICA) among children younger than 2 years evaluated in a pediatric emergency department for head trauma; whether historical features and/or physical findings are predictive of injury type; and whether clinical criteria could allow a selective approach to radiographic imaging. Design: Retrospective medical record review. Setting: Tertiary pediatric emergency department. Patients: Case series of 278 children aged younger than 24 months evaluated for head injury. Main Outcome Measures: Presence of SF and/or ICA. Results: Diagnoses at discharge included 227 minor head injuries, 39 isolated SF, 9 ICA with SF, and 3 isolated ICA. Children younger than 12 months had the highest incidence of SF/ICA (29%) vs 4% for children aged 13 to 24 months (P&lt;.001). Seven percent of complications from SF/ICA resulted from falls 3 ft (0.9 m) or less [corrected]. Incidence of behavioral change, loss of consciousness, emesis, and seizures did not differ significantly between those with minor head injuries and those with SF/ICA. Scalp abnormalities were more common in children with SF/ICA (P&lt;.001). Sixty-two percent of children with isolated SF and 58% of children with ICA had no history of loss of consciousness, emesis, seizure, or behavioral change. Ninety-two percent of children with isolated SF and 75% of children with ICA had normal levels of consciousness and nonfocal neurologic examinations at diagnosis. Among children who fell 3 ft or less (0.9 m) [corrected] and had no loss of consciousness, emesis, seizure, behavioral change, or scalp abnormality, none of 31 (95% confidence interval [CI], 0-0.10) children younger than 24 months and none of 20 (95% CI, 0-0.15) children younger than 12 months had SF/ICA. Conclusions: Both SF and ICA are common in children younger than 2 years evaluated for head trauma. Children younger than 12 months are at highest risk. Injuries resulted from relatively minor falls and occurred in alert, neurologically normal children. Clinical signs and symptoms were insensitive predictors of SF/ICA; however, a grouping of features (fall &lt; or = 3 ft [0.9 m], no history of neurologic symptoms, and normal scalp physical examination results) identified a subset of children at low risk for complications. abstract_id: PUBMED:24930859 Children with minor head injury in the emergency department: Is skull radiography necessary for children under 2 years? Background: Current guidelines on the management of mild head trauma (traumatic brain injury/TBI) do not include the presence of a skull fracture in determining the risk of intracranial injury. However, in our setting cranial radiography is still performed frequently to rule out the presence of skull fracture. Objective: To estimate the prevalence of clinically-important traumatic brain injuries (ciTBI) in children younger than two years of age with mild TBI. Patients And Methods: Descriptive observational study. All children attended in emergency department with mild TBI (Glasgow ≥14 points) for a year were included. We defined ciTBI as intracranial injuries that caused death or required neurosurgery, intubation for more than 24 hours, inotropic drugs or mechanical ventilation. Results: The study included 854 children, of which 457 (53.5%) were male. The median patient age was 11.0 months (P25-75: 7.5-17.0 months). In 741 cases (86.8%) the mechanism of TBI was a fall. In 438 cases (51.3%) skull radiography was performed. Eleven children (1.3%) had intracranial injury, but none met the criteria for ciTBI (estimated prevalence of ciTBI was 0%; CI 95%: 0%-0.4%). Conclusion: Children younger than two years of age with mild TBI have low prevalence of ciTBI. Consequently, it is possible to monitor children younger than two years with a TBI without performing skull radiography. abstract_id: PUBMED:24508443 Utility of head computed tomography in children with a single extremity fracture. Objectives: To determine the clinical and forensic utility of head computed tomography (CT) in children younger than 2 years of age with an acute isolated extremity fracture and an otherwise-negative skeletal survey. Study Design: Retrospective chart review of children younger than 2 years of age who obtained a skeletal survey in the Cincinnati Children's Hospital Medical Center Emergency Department during the 159-month study period. Clinically important head injury was determined based on previously defined Pediatric Emergency Care Applied Research Network criteria. Forensically significant head injury was defined as that which increased the concern for inflicted injury. The rate of head CT relative to patient age and location of fracture (proximal vs distal extremity, upper vs. lower extremity) was determined via χ2 tests. Results: Of the 320 children evaluated, 37% received neuroimaging, 95.7% of which had no signs of skull fracture or intracranial trauma. Five children (4.3%) with head imaging had traumatic findings but no children in the study had clinically significant head injury. Three of these children had previous concerns for nonaccidental trauma and findings on head CT that were forensically significant. There was a greater rate of head imaging in children in the younger age groups and those with proximal extremity fractures (P &lt; .05). Conclusions: In young children who present with an isolated extremity fracture, clinicians should consider obtaining head CT in those who are younger than 12 months of age, have proximal extremity fractures, or who have previous evaluations for nonaccidental trauma. Evaluation with head CT in children without these risk factors may be low yield. abstract_id: PUBMED:29499992 Point-of-Care Ultrasound for the Diagnosis of Skull Fractures in Children Younger Than Two Years of Age. Objectives: To determine the accuracy of skull point-of-care ultrasound (POCUS) for identifying fractures in children younger than 2 years of age with signs of head trauma, and the ability of POCUS to identify the type and depth of fracture depression. Study Design: This was a multicenter, prospective, observational study of children younger than 2 years of age with nontrivial mechanisms of injury and signs of scalp/skull trauma. Patients were enrolled if they underwent computed tomography (CT). Patients underwent clinical evaluation, in addition to a cranial POCUS in the emergency department (ED). From the POCUS examinations, we documented whether fractures were present or absent, their location, characteristics, and depth. POCUS and CT findings were compared to calculate the diagnostic accuracy. Results: We enrolled a convenience sample of 115 of 151 (76.1%) eligible patients. Of the 115 enrolled, 88 (76.5%) had skull fractures. POCUS had a sensitivity of 80 of 88 (90.9%; 95% CI 82.9-96.0) and a specificity of 23 of 27 (85.2%; 95% CI 66.3-95.8) for identifying skull fractures. Agreement between POCUS and CT to identify the type of fracture as linear, depressed, or complex was 84.4% (97 of 115) with a kappa of 0.75 (95% CI 0.70-0.84). Conclusions: POCUS performed by emergency physicians may identify the type and depth of fractures in infants with local physical signs of head trauma with substantial accuracy. Emergency physicians should consider POCUS as an adjunct to clinical evaluation and prediction rules for traumatic brain injuries in children younger than 2 years of age. abstract_id: PUBMED:31688803 Isolated Increased Intracranial Pressure and Unilateral Papilledema in an Infant With Traumatic Brain Injury and Nondepressed Basilar Skull Fracture. Traumatic brain injury is one of the most common pediatric injuries; totaling more than 500,000 emergency department visits per year. When the injury involves a skull fracture, sinus venous thrombosis and the risk of resultant increased intracranial pressure (ICP) are a concern. We describe a previously healthy 11-month-old female infant with nondepressed skull fracture who developed increased ICP in the absence of intracranial changes on imaging. Funduscopic examination revealed unilateral papilledema, and opening pressure on lumbar puncture was elevated at 35 cm of H2O. Computed tomography scan demonstrated a nondepressed occipital bone fracture. However, further imaging, including magnetic resonance imaging with angiogram/venogram, did not reveal any intracranial abnormalities. In particular, there was no evidence of sinus venous thrombosis. Given her presentation and signs of increased ICP, she was started on acetazolamide and improved dramatically. A thorough literature search was completed but yielded no information on infants with increased ICP after nondepressed skull fracture in the absence of radiographic findings to suggest a cause for the increase in pressure. Trauma alone can lead to increased ICP secondary to several processes, although this is expected in moderate to severe head trauma. Our case demonstrates that increased ICP can be present in infants with mild traumatic brain injury in the absence of intracranial pathology. This should be considered in patients who present with persistent vomiting that is refractory to antiemetics. abstract_id: PUBMED:19562916 Computerized tomography of children with seizure disorders. Background: Neuro-imaging is generally considered as part of the evaluation of seizures and epilepsy. There is limited information about its usefulness in our environment. We describe the pattern of CT findings in children with seizures in our environment. Method: We carried out a retrospective review of the computerized tomography findings in children with recurrent seizures over a one year period, November 2005 to October 2006. Results: During the study period, 49 infants and children had computerized tomography performed on them out of which 19 had CT done for recurrent seizures. They ranged in age from 4 months to 16 years with 13 of them being boys. Generalized tonic--clonic seizures was the most predominant seizure type, being present in 10 of the 19 (52.6%) children while simple partial seizure, myoclonic jerk and mixed seizure types were present in 2 cases each. Abnormal scan was demonstrated in 10 of the 19 children (52.6%) with 3 of them having double cerebral lesions, giving a total of 13 cerebral lesion demonstrated by the CT scan. Cerebral infarct was the most common lesion demonstrated, being present in 5 of the 13 lesions (38.5%). Others were cerebral atrophy in 4 cases (30.8%), moderate ventricular dilatation 2 (15.4%) and 1 each ofporencephalic cyst, hydrocephalus and linear skull fracture. Of the 10 children with abnormal scan, 90% of them had significant past medical history, with birth asphyxia (44.4%) and meningitis (33.3%) being the commonest significant past medical history in them. CCONCLUSION There is a high incidence of abnormal scan findings in children with seizure disorder in our environment compared to what is obtained from the developed countries. Cerebral infarct appears to be the most common abnormal CT findings in our children with seizures. Answer: The necessity of computed tomography (CT) for nondepressed linear skull fractures in children younger than 2 years is a subject of debate. A retrospective review of patients younger than 2 years who presented to an emergency department with a diagnosis of skull fracture found that none of the 92 patients with nondisplaced linear skull fractures suffered a clinically significant intracranial injury, suggesting that observation, rather than CT, may be a reasonable management option for head-injured children younger than 2 years who have a nondisplaced linear skull fracture on plain radiography but no clinical signs of intracranial injury (PUBMED:22563059). Another study evaluated 1,546 injured young children and found that none of the children 5 years or younger with nondepressed skull fractures (NDSFs) and a normal neurologic examination result at admission developed neurologic deterioration (PUBMED:24977773). This supports the idea that CT scans may not be necessary in all cases, especially when the child presents with a normal neurologic examination. However, the literature continues to be controversial regarding the indications for skull radiographs and CT in minor head trauma, with some recommending a low threshold for radiographic imaging in blunt head injuries in children younger than 2 years due to their immature bone and unfused sutures that may increase the risk of cranial injury and delayed complications (PUBMED:1994944). The Pediatric Emergency Care Applied Research Network clinical decision rule for CT scans helps identify children younger than 2 years of age at high risk of intracranial injury. If a CT scan is not needed urgently, it is recommended that children younger than 2 years get a skull x-ray scan only if a skull fracture is suspected (PUBMED:35292456). In conclusion, while CT scans are a valuable tool for assessing intracranial injury, they may not be necessary in all cases of nondepressed linear skull fractures in children younger than 2 years, particularly when the child has a normal neurologic examination and there are no clinical signs of intracranial injury. Observation and the use of clinical decision rules may be reasonable approaches to manage these cases while minimizing radiation exposure from CT scanning.
Instruction: Is the chronologic age a contra-indication for surgical palliation of unresectable periampullary neoplasms? Abstracts: abstract_id: PUBMED:15565629 Is the chronologic age a contra-indication for surgical palliation of unresectable periampullary neoplasms? Methods: Early and late results of surgical palliation for unresectable periampullary neoplasms were evaluated in 24 patients older than 70 years and compared with the same results obtained from 33 younger patients. The two groups of patients were comparable, except for age. Biliary bypass associated to gastric bypass was the most common performed procedure. Results: No significant differences in the results (morbidity, mortality, and outcome) were found in the two groups of patients. In addition, the results of palliative surgery in the elderly were compared with those obtained from a comparable group of 35 patients palliated with endoscopic stent insertion: surgical palliation resulted in better long-term results. Conclusions: This study provides evidence that the chronologic age is not a contra-indication for surgical palliation of periampullary neoplasms and that surgery provides a better quality of residual life. abstract_id: PUBMED:10359359 Surgical palliation of unresectable periampullary adenocarcinoma in the 1990s. Background: Advances in the nonoperative staging and palliation of periampullary carcinoma have dramatically changed the management of this disease. Currently, surgical palliation is used primarily for patients found to be unresectable at the time of laparotomy performed for the purpose of determining resectability. Study Design: A review of all patients undergoing operative management for periampullary adenocarcinoma at a single, high-volume institution was performed. The review focused on patients found to be unresectable who, therefore, underwent surgical palliation. Results: Between December 1991 and December 1997, 256 patients with unresectable periampullary adenocarcinoma were operatively palliated. During the same time period, 512 patients underwent pancreaticoduodenectomy (PD) for periampullary carcinoma. Sixty-eight percent of patients were unresectable secondary to liver metastases or peritoneal metastases, and 32% were deemed unresectable because of local vascular invasion. Of the 256 patients, 51% underwent double bypass (hepaticojejunostomy [HJ] and gastrojejunostomy [GJ]), 11% underwent HJ alone, 19% underwent GJ alone, and 19% did not undergo any form of bypass. Celiac block was performed in 75% of patients. Palliated patients were significantly younger, with a mean age of 64.0 years compared with 65.8 years in the resected group (p = 0.04). Gender and race distributions were similar in the 2 groups, with 57% of palliated patients and 55% of resected patients being men (p = NS) and 91% of patients in each group being Caucasian (p = NS). Palliative procedures were performed with a mortality rate of 3.1%, compared to 1.9% in those successfully resected (p = NS). Those undergoing operative palliation had a significantly lower incidence of postoperative complications when compared with those undergoing pancreaticoduodenectomy (22% versus 35%, p&lt;0.0001) and had significantly shorter lengths of stay (10.3 days versus 14.8 days, p&lt;0.0001). As expected, palliated patients had a significantly poorer prognosis, with 1-, 2- and 4-year survivals of 25%, 9%, and 6% (median 6.5 months), respectively, compared with 75%, 47%, and 24% in their resectable counterparts (median 21 months, p&lt;0.0001). Conclusions: Surgical palliation continues to play an important role in the management of periampullary carcinoma. In this high-volume center, 33% of patients undergoing operative management of this disease were unresectable. Surgical palliation can be accomplished with acceptable perioperative mortality (3.1%) and morbidity (22%), with excellent longterm results. abstract_id: PUBMED:7678945 Current status of surgical palliation of periampullary carcinoma. In recent years, the use of nonoperative palliation for unresectable periampullary carcinoma has increased markedly, in part, because of the high morbidity and mortality rates after surgical palliation. The current analysis was undertaken to determine whether or not decreases in morbidity and mortality rates, recently observed after resection of periampullary carcinoma, are now being seen in the surgical palliation of unresectable periampullary carcinoma. During a 54 month period, 118 consecutive patients underwent surgical exploration with the finding of unresectable periampullary adenocarcinoma. Jaundice was the most common complaint at admission, being present in 73 percent of the patients. Abdominal or back pain, or both, was present in 71 percent of the patients and weight loss was observed in 61 percent of the patients. The most commonly performed procedure was combined biliary bypass and gastrojejunostomy, being performed upon 75 percent of the patients. A gastrojejunostomy was performed upon 107 of 118 patients (91 percent). The hospital mortality rate was 2.5 percent. Postoperative complications occurred in 37 percent of the patients but were seldom life-threatening. Wound infection was the most frequent postoperative complication (10 percent), followed by cholangitis (8 percent) and delayed gastric emptying (8 percent). During the late follow-up period, only 4 percent of the patients had gastric outlet obstruction, and only 2 percent had recurrent jaundice. The mean survival time postoperatively was 7.7 months. These results demonstrate that patients with unresectable periampullary carcinoma can undergo surgical palliation with minimal perioperative mortality, acceptable morbidity and good long term palliation. We conclude that surgical palliation is the treatment of choice for carefully selected patients with unresectable periampullary carcinoma. abstract_id: PUBMED:15362733 Palliation of unresectable periampullary neoplasms. "surgical" versus "non-surgical" approach. Background/aims: A series of 84 patients with unresectable periampullary neoplasms, observed during an eight-year period (1992-1999), is reviewed for a critical analysis of modalities of treatment and clinical outcome. Two different approaches, a preference for "non-surgical" palliation and a preference for "surgical" palliation, in two consecutive periods, are compared. Methodology: In the first period (1992-1995) endoscopic retrograde cholangiopancreatography with endoprosthesis insertion was performed routinely and, after diagnostic assessment, patients were divided into two groups: those with apparently resectable neoplasms, candidates for surgery, in whom the decision to perform a surgical palliation by biliary bypass was taken intraoperatively, and those with unresectable neoplasms, in whom the endoprosthesis insertion was considered as definitive palliation. Since January 1996, a different approach was started; surgical bypass was considered the palliation of choice and only patients unfit for surgery underwent endoscopic stenting. Results: Evaluation of the results showed that the first strategy was associated with a high incidence of complications and unsatisfactory long-term results, with frequent hospital readmissions, poor quality of residual life and therefore failure of palliation. On the contrary, the latter approach with preference for surgical palliation was associated with lower morbidity and mortality, with persistent relief of symptoms and better quality of life in a larger percentage of patients. Conclusions: This study provides evidence that, in patients with unresectable periampullary carcinoma, surgical palliation provides better long-term results than endoscopic palliation. The results were also consistent with the advantage of associating a gastric bypass to the biliary bypass, also in the absence of gastric outlet obstruction. abstract_id: PUBMED:30825212 Surgical palliation of biliary obstruction: Bypass in the era of drainage. Unresectable pancreatico-biliary cancers frequently cause biliary obstruction. Symptomatic management of biliary obstruction is most commonly accomplished with percutaneous or endoscopic drainage. When Nonsurgical means have been exhausted, effective surgical palliation may be performed by way of choledochojejunostomy. abstract_id: PUBMED:18522888 Palliative surgical bypass for unresectable periampullary carcinoma. Background: Around 60% to 80% of patients with periampullary carcinoma are unresectable either due to distant metastasis or local vascular invasion. With the advancement of endoscopic interventional procedures, the role of surgical bypass has diminished. However, surgical bypass is still appropriate in patients with unresectable disease discovered at the time of surgery. This study was conducted to assess the results of palliative surgical bypass for patients with unresectable periampullary carcinoma at our hospital, a tertiary referral center of Northern India. Method: The study group comprised 204 patients who had undergone surgical bypass for advanced periampullary carcinoma over the last 15 years. Results: Between January 1990 and December 2004, 204 patients (128 males, 76 females) consisting of 179 patients with carcinoma of head of the pancreas, 14 patients with ampullary carcinoma, 8 patients with lower end cholangiocarcinoma and 3 patients with duodenal carcinoma underwent surgical bypass. Their average age was 51 years (range 20-78 years). Both biliary and gastric bypasses were done in 158 (77.45%), biliary bypass alone in 37 (18.13%), and gastric bypass alone in 9 (4.32%). Biliary bypass was done by Roux-en-Y hepaticojejunostomy, and gastric bypass by retrocolic gastrojejunostomy. The overall postoperative mortality and morbidity were 0.98% and 26.9%, respectively. The patients who died had undergone previously endoscopic intervention. Complications included wound infection in 12.25% of the patients, bile leak in 5.12%, delayed gastric emptying in 5.38%, ascitic leak from drains in 8.8%, and upper gastrointestinal bleeding in 1.96%. The incidences of wound infection and bile leak both were significantly higher in patients who had had preoperative biliary stenting. None of the patients who had undergone Roux-en-Y hepaticojejunostomy+retrocolic gastrojejunostomy required any intervention later in their life. Conclusions: Surgical bypass is a safe procedure with negligible mortality and minimal morbidity. It has not lost its relevance and is an appropriate treatment in patients deemed unresectable on peroperative assessment. abstract_id: PUBMED:25841505 Advances in loco-regional palliation of unresectable cholangiocarcinomas. Cholangiocarcinoma (CC) represents the second most common primary hepatic malignancy after hepatocellular carcinoma, accounting for 10% to 15% of primary liver tumors. Its prognosis is often poor due to its late presentation and its relative resistance to current chemotherapy and radiotherapy regimens. At the time of diagnosis, more than half of the patients are not candidates for surgical therapy due to either invasion of vascular structures or distant metastases. Therefore, for a large proportion of patients, palliation remains the only option for better quality of life and, in some circumstances, longer survival. The main goals of palliative interventions for CC are: restoration of the biliary drainage and resolution of pruritis caused by obstructive jaundice, improvement of the nutritional status and reduction of pain. During the last decades, novel therapies have been introduced to optimize the management of patients with unresectable CC. These therapies can be resource intensive and have some adverse effects; therefore patient selection is crucial. This article discusses the current options for palliation of CC. abstract_id: PUBMED:10493479 Is prophylactic gastrojejunostomy indicated for unresectable periampullary cancer? A prospective randomized trial. Objective: This prospective, randomized, single-institution trial was designed to evaluate the role of prophylactic gastrojejunostomy in patients found at exploratory laparotomy to have unresectable periampullary carcinoma. Summary Background Data: Between 25% and 75% of patients with periampullary cancer who undergo exploratory surgery with intent to perform a pancreaticoduodenectomy are found to have unresectable disease. Most will undergo a biliary-enteric bypass. Whether or not to perform a prophylactic gastrojejunostomy remains unresolved. Retrospective reviews of surgical series and prospective randomized trials of endoscopic palliation have demonstrated that late gastric outlet obstruction, requiring a gastrojejunostomy, develops in 10% to 20% of patients with unresectable periampullary cancer. Methods: Between May 1994 and October 1998, 194 patients with a periampullary malignancy underwent exploratory surgery with the purpose of performing a pancreaticoduodenectomy and were found to have unresectable disease. On the basis of preoperative symptoms, radiologic studies, or surgical findings, the surgeon determined that gastric outlet obstruction was a significant risk in 107 and performed a gastrojejunostomy. The remaining 87 patients were thought by the surgeon not to be at significant risk for duodenal obstruction and were randomized to receive either a prophylactic retrocolic gastrojejunostomy or no gastrojejunostomy. Short- and long-term outcomes were determined in all patients. Results: Of the 87 patients randomized, 44 patients underwent a retrocolic gastrojejunostomy and 43 did not undergo a gastric bypass. The two groups were similar with respect to age, gender, procedure performed (excluding gastrojejunostomy), and surgical findings. There were no postoperative deaths in either group, and the postoperative morbidity rates were comparable (gastrojejunostomy 32%, no gastrojejunostomy 33%). The postoperative length of stay was 8.5+/-0.5 days for the gastrojejunostomy group and 8.0+/-0.5 days for the no gastrojejunostomy group. Mean survival among those who received a prophylactic gastrojejunostomy was 8.3 months, and during that interval gastric outlet obstruction developed in none of the 44 patients. Mean survival among those who did not have a prophylactic gastrojejunostomy was 8.3 months. In 8 of those 43 patients (19%), late gastric outlet obstruction developed, requiring therapeutic intervention (gastrojejunostomy 7 patients, endoscopic duodenal stent 1 patient; p &lt; 0.01). The median time between initial exploration and therapeutic intervention was 2 months. Conclusion: The results from this prospective, randomized trial demonstrate that prophylactic gastrojejunostomy significantly decreases the incidence of late gastric outlet obstruction. The performance of a prophylactic retrocolic gastrojejunostomy at the initial surgical procedure does not increase the incidence of postoperative complications or extend the length of stay. A retrocolic gastrojejunostomy should be performed routinely when a patient is undergoing surgical palliation for unresectable periampullary carcinoma. abstract_id: PUBMED:7511967 Comparison of resectable and unresectable periampullary carcinomas. Two hundred and fifty-eight patients with pathologically proved periampullary carcinomas who underwent surgical treatment between the years 1965 and 1992 were evaluated. Comparison was carried out between the resectable and unresectable groups. Carcinoma of the pancreatic head occurred in less than one-half (47 percent) of the patients, and only 23 percent were resectable. In contrast, carcinoma of the ampulla of Vater had a similar rate of occurrence, but a much higher resectable rate (86 percent). Thus, carcinoma of the pancreatic head was the minor group (19 percent) in the resectable patients we studied. The main clinical presentations and durations of symptoms before diagnosis did not differ in the resectable and unresectable groups, so it was impossible to predict the resectability by symptoms. Incidences of diabetes mellitus and diarrhea increased twofold in the unresectable group. Preoperative biopsy was difficult to perform for those with carcinoma of the pancreatic head. Comparing pancreatoduodenectomy and palliative operation, pancreatoduodenectomy resulted in a higher complication rate (43 versus 13 percent), a higher surgical mortality rate (17 versus 9 percent) and a longer hospitalization period (31 versus 20 days), but there was no statistical difference in the median survival time between the resectable and unresectable carcinomas of the pancreatic head (seven and one-half versus five months). Most of the patients (81 percent in the resectable group and 70 percent in the unresectable group) we studied died of cachexia with tumor recurrence. Although the advantage of pancreatoduodenectomy for resectable carcinoma of the pancreatic head was questioned, we still recommend this procedure for all periampullary carcinomas to avoid depriving the occasional patients with pancreatic carcinomas of long term survival and forfeiting the chance of cure for some misdiagnosed patients with other more favorable periampullary carcinomas. abstract_id: PUBMED:27865277 Surgical Therapy for Pancreatic and Periampullary Cancer. Surgery is the key component of treatment for pancreatic and periampullary cancers. Pancreatectomy is complex, and there are numerous perioperative and intraoperative factors that are important for achieving optimal outcomes. This article focuses specifically on key aspects of the surgical management of periampullary and pancreatic cancers. Answer: No, the chronologic age is not a contra-indication for surgical palliation of unresectable periampullary neoplasms. A study comparing surgical palliation outcomes in patients older than 70 years with those in younger patients found no significant differences in morbidity, mortality, and outcome between the two age groups. Furthermore, surgical palliation in the elderly resulted in better long-term results compared to endoscopic stent insertion, suggesting that surgery provides a better quality of residual life regardless of age (PUBMED:15565629).
Instruction: Is local excision after complete pathological response to neoadjuvant chemoradiation for rectal cancer an acceptable treatment option? Abstracts: abstract_id: PUBMED:21178856 Is local excision after complete pathological response to neoadjuvant chemoradiation for rectal cancer an acceptable treatment option? Purpose: The role of local excision in patients with good histological response to neoadjuvant chemoradiation for locally advanced rectal cancer is unclear, mainly because of possible regional nodal involvement. This study aims to evaluate the correlation between pathological T and N stages following neoadjuvant chemoradiation for locally advanced rectal cancer and the outcome of patients with mural pathological complete response undergoing local excision. Methods: This investigation was conducted as a retrospective analysis. Between January 1997 and December 2007, 320 patients with T3 to 4Nx, TxN+ or distal (≤ 6 cm from the anus) T2N0 rectal cancer underwent neoadjuvant concurrent chemoradiation followed by surgery. Radiotherapy was standard and chemotherapy consisted of common fluoropyrimidine-based regimens. Results: After chemoradiation, 93% patients had radical surgery, 6% had local excision, and 3% did not have surgery. In the 291 patients undergoing radical surgery, the pathological T stage correlated with the N stage (P = .036). We compared the outcome of patients with mural complete pathological response (n = 37) who underwent radical surgery (group I) and those (n = 14) who had local excision only (group II). With a median follow-up of 48 months, 4 patients in group I had a recurrence and none in group II had a recurrence; one patient died in group I and none died in group II. Disease-free survival, pelvic recurrence-free survival, and overall survival rates were similar in both groups. Conclusion: In this retrospective study, nodal metastases were rare in patients with mural complete pathological response following neoadjuvant chemoradiation (3%), and local excision did not compromise their outcome. Therefore, local excision may be an acceptable option in these patients. abstract_id: PUBMED:26290512 Neoadjuvant chemoradiation therapy and pathological complete response in rectal cancer. The management of rectal cancer has evolved significantly in the last few decades. Significant improvements in local disease control were achieved in the 1990s, with the introduction of total mesorectal excision and neoadjuvant radiotherapy. Level 1 evidence has shown that, with neoadjuvant chemoradiation therapy (CRT) the rates of local recurrence can be lower than 6% and, as a result, neoadjuvant CRT currently represents the accepted standard of care. This approach has led to reliable tumor down-staging, with 15-27% patients with a pathological complete response (pCR)-defined as no residual cancer found on histological examination of the specimen. Patients who achieve pCR after CRT have better long-term outcomes, less risk of developing local or distal recurrence and improved survival. For all these reasons, sphincter-preserving procedures or organ-preserving options have been suggested, such as local excision of residual tumor or the omission of surgery altogether. Although local recurrence rate has been stable at 5-6% with this multidisciplinary management method, distal recurrence rates for locally-advanced rectal cancers remain in excess of 25% and represent the main cause of death in these patients. For this reason, more recent trials have been looking at the administration of full-dose systemic chemotherapy in the neoadjuvant setting (in order to offer early treatment of disseminated micrometastases, thus improving control of systemic disease) and selective use of radiotherapy only in non-responders or for low rectal tumors smaller than 5 cm. abstract_id: PUBMED:29063019 Advances for achieving a pathological complete response for rectal cancer after neoadjuvant therapy. Neoadjuvant therapy has become the standard of care for locally advanced mid-low rectal cancer. Pathological complete response (pCR) can be achieved in 12%-38% of patients. Patients with pCR have the most favorable long-term outcomes. Intensifying neoadjuvant therapy and extending the interval between termination of neoadjuvant treatment and surgery may increase the pCR rate. Growing evidence has raised the issue of whether local excision or observation rather than radical surgery is an alternative for patients who achieve a clinical complete response after neoadjuvant therapy. Herein, we highlight many of the advances and resultant controversies that are likely to dominate the research agenda for pCR of rectal cancer in the modern era. abstract_id: PUBMED:29184476 Organ-Preserving Strategies for the Management of Near-Complete Responses in Rectal Cancer after Neoadjuvant Chemoradiation. In recent years, organ preservation has been considered a feasible alternative to total mesorectal excision for patients with locally advanced rectal cancer with a clinical complete response to neoadjuvant therapy. However, the degree of tumor response to neoadjuvant therapy is variable. A fraction of the patients who did not achieve a complete response had grossly visible tumors. These patients, with clearly incomplete clinical response, need a total mesorectal excision. In addition, some patients with a significant tumor response still have some abnormalities in the bowel wall, such as superficial ulceration or tissue nodularity, which, while not conclusive for the presence of a tumor, are indicative of the possibility of a residual tumor in the bowel wall or in mesorectal lymph nodes. The management of patients with a so-called near-complete clinical response to neoadjuvant therapy is controversial. In this article, we will review the clinical and radiological criteria that define a clinical response to neoadjuvant therapy, possible treatment strategies, and follow-up protocols. We will also discuss patient and tumor characteristics that in our opinion can be useful in selecting the most appropriate treatment alternative. Although organ preservation and quality of life are important, the primary goal of treatment for these patients should be local tumor control and long-term survival. abstract_id: PUBMED:36212424 Pathological complete response in MMR-deficient/MSI-high and KRAS-mutant patient with locally advanced rectal cancer after neoadjuvant chemoradiation with immunotherapy: A case report. To date, preoperative chemoradiation (CRT) is the standard of care for patients with locally advanced rectal cancer (LARC) regardless of status of mismatch repair. Immunotherapy showed promising results in the neoadjuvant treatment trials in patients with mismatch repair-deficient (dMMR) or high microsatellite instability (MSI-H) LARC. The efficacy of CRT plus programmed death 1 (PD-1) inhibitor in these patients with complex gene mutation remains unclear. Additionally, very few studies reported on whether such combination could induce abscopal effect. We report a case of dMMR and MSI-H LARC with KRAS mutation that achieved pathological complete response of primary lesion and liver metastases after neoadjuvant short-course radiotherapy followed by four cycles chemotherapy of XELOX plus PD-1 inhibitor tislelizumab and a subsequent total mesorectal excision. This case indicates that this combined treatment strategy has remarkable clinical response both in locoregional and distant diseases, which potentially leads to reduction in the risk of distant metastases and better locoregional control for this subgroup of population. abstract_id: PUBMED:35152190 Evaluation Of Pathological Response And Its Predictors In Carcinoma Rectum Following Neoadjuvant Chemoradiation. Background: Neoadjuvant chemoradiation followed by surgery is the standard of care in locally advanced rectal tumors. Better pathologic response after chemoradiation is associated with better outcomes. Pathologic response may depend on various, patient and tumor related factors. The aim of our study was to assess the pathological response using a modified Ryan scoring system and to study various factors which influence the response. Materials And Methods: This is a retrospective study carried out at a tertiary cancer centre in India. Patient details and histopathology reports of rectal cancer patients who took neoadjuvant chemoradiation from January 2016 to December 2018 were analyzed. Demographic details, pathological response assessed by modified Ryans tumor regression grade (TRG) score and various factors which influence the pathological response were studied. Those with TRG score 0 (complete response) and1(near complete response) were grouped together as good responders and those with score 3 (partial response) and 4 (poor or no response) as poor responders. Univariate and multivariate analyses were performed using logistic regression to determine factors which influence pathologic response. Results: There were a total of 83 patients. Males and females were equally distributed. 43.4%(n=36) of patients had lower rectal tumors,32.5%(n=27) had midrectal tumors and 24.1%(n=20) had upper rectal tumors. 46% of patients were good responders which includes complete responders ,17% (n=14) and those with a near complete response,29% (n=24). 54% of patients were poor responders,which includes those with incomplete response,36% (n=34) and with no or poor response,18% (n=15). Among the upper rectal tumors, only 20% had good response and among the mid and lower rectal tumors 54% had good response.(p value 0.02).63% of males were good responders in comparison to 37% among females (p value 0.05). Discussion: Response to neoadjuvant chemoradiation with capecitabine in locally advanced rectal tumors in our institute is similar to the literature data with a complete response in 16.9%, near complete response in 28.9% partial response in 36.1% and no response in 18.1% of patients, according to modified Ryan score. It was found that upper rectal tumors had a poorer response when compared to mid and lower tumors and females had a poorer response compared to males. Conclusion: Even though neoadjuvant chemoradiation remains the standard of care in locally advanced rectal carcinomas, its benefit in upper rectal tumors needs to be validated in larger studies. abstract_id: PUBMED:35237107 Transanal Local Excision of Rectal Cancer after Neoadjuvant Chemoradiation: Is There a Place for It or Should Be Avoided at All Costs? Tumor response to neoadjuvant chemoradiation (nCRT) with tumor downsizing and downstaging has significantly impacted the number of patients considered to be appropriate candidates for transanal local excision (TLE). Some patients may harbor small residual lesions, restricted to the bowel wall. These patients, who exhibit major response ("near-complete") by digital rectal examination, endoscopic assessment, and radiological assessment may be considered for this approach. Although TLE is associated with minimal postoperative morbidity, a few clinical consequences and oncological outcomes must be evaluated in advance and with caution. In the setting of nCRT, a higher risk for clinically relevant wound dehiscences leading to a considerable risk for readmission for pain management has been observed. Worse anorectal function (still better than after total mesorectal excision [TME]), worsening in the quality of TME specimen, and higher rates of abdominal resections (in cases requiring completion TME) have been reported. The exuberant scar observed in the area of TLE also represents a challenging finding during follow-up of these patients. Local excision should be probably restricted for patients with primary tumors located at or below the level of the anorectal ring (magnetic resonance defined). These patients are otherwise candidates for abdominal perineal resections or ultra-low anterior resections with coloanal anastomosis frequently requiring definitive stomas or considerably poor anorectal function. abstract_id: PUBMED:35524090 Does Pathological Complete Response after Neoadjuvant Therapy Influence Postoperative Morbidity in Rectal Cancer after Transanal Total Mesorectal Excision? Purpose: It is still unclear if pathological complete response (pCR) after neoadjuvant chemoradiotherapy (CRT) in patients treated for rectal cancer causes worse postoperative outcomes, especially after transanal total mesorectal excision (TaTME). Worse postoperative outcomes might be an argument for an organ preserving watch and wait strategy in fragile patients and patients with comorbidities. The aim of this study is to evaluate whether patients treated for rectal cancer who had pCR to neoadjuvant therapy develop worse postoperative outcomes after TaTME than patients without complete response. Methods: Comparative retrospective analysis (with nearest neighbor matching algorithm) of postoperative outcomes in two groups of patients, with pCR, n = 15 and without pCR (non-pCR), n = 57. All patients were operated on only by one surgical approach, TaTME, for middle and distal rectal tumors. All procedures were performed by one surgical team between 2014 and 2020 at the University Hospital Brno in Czech Republic. Results: Overall morbidity was comparable between the groups (pCR group - 53.8% vs. non-pCR - 38.6%, p = 0.381). Anastomotic leak (AL) was observed in 33.3% of patients with pCR and in 17.5% of patients in the non-pCR group without statistical significance (p = 0.281). Conclusion: In conclusion, pathological complete response after neoadjuvant therapy does not appear to affect postoperative morbidity in rectal cancer after TaTME. Therefore, in patients with complete response who are not adherent to W&amp;W surveillance, surgical resection can be perform without increased postoperative complications. abstract_id: PUBMED:32234159 Clinical factors of pathological complete response after neoadjuvant chemoradiotherapy in rectal cancer Objective: To explore the feasibility of clinical factors to predict the pathological complete response after neoadjuvant chemoradiotherapy in rectal cancer. Methods: A retrospective analysis was performed on clinical factors of 162 patients with rectal cancer, who underwent neoadjuvant chemoradiotherapy in the General Hospital of People's Liberation Army from January 2011 to December 2018.According to the postoperative pathological results, the patients were divided into pathological complete response (pCR) group and non-pathological complete response group (non-pCR group) to check the predictive clinical factors for pCR. Results: Twenty-eight cases achieved pCR after neoadjuvant chemoradiation (17.3%, 28/162). Univariate analysis showed that patients with higher differentiation (P=0.024), tumor occupation of the bowel lumen≤1/2 (P=0.006), earlier clinical T stage (P=0.013), earlier clinical N stage (P=0.009), the time interval between neoadjuvant chemoradiotherapy and surgery&gt;49 days (P=0.006), and maximum tumor diameter≤5 cm (P=0.019) were more likely to obtain pCR, and the differences werestatistically significant. Multivariate analysis showed that tumor occupation of the bowel lumen≤1/2 (P=0.01), maximum tumor diameter≤5 cm (P=0.035), and the interval&gt;49 days (P=0.009) were independent factors in predicting pCR after neoadjuvant therapy. Conclusion: Tumor occupation of the bowel lumen, maximum tumor diameter, and the time interval between neoadjuvant chemoradiotherapy and surgery can predict the pCR in rectal cancer. abstract_id: PUBMED:29088894 The value of diffusion kurtosis imaging in assessing pathological complete response to neoadjuvant chemoradiation therapy in rectal cancer: a comparison with conventional diffusion-weighted imaging. Objectives: The aim of this study is to comprehensively evaluate the advantage of diffusion kurtosis imaging (DKI) in distinguishing pathological complete response (pCR) from non-pCR patients with locally advanced rectal cancer (LARC) after neoadjuvant chemoradiation therapy (CRT) in comparison to conventional diffusion-weighted imaging (DWI). Material And Methods: Fifty-six consecutive patients diagnosed with LARC were prospectively enrolled and underwent pre- and post-CRT MRI on a 3.0 T MRI scanner. Apparent diffusion coefficient (ADC), mean diffusion (MD) and mean kurtosis (MK) values of the tumor were measured in pre- and post-CRT phases and then compared to histopathologic findings after total mesorectal excision (TME). Both Mann-Whitney U-test and Kruskal-Wallis test were used as statistical methods. Diagnostic performance was determined using receiver operating characteristic (ROC) curve analysis. Results: For a total of 56 rectal lesions (pCR, n = 14; non-pCR, n = 42), the MKpre and MKpost values were much lower for the pCR patients (mean±SD, 0.72±0.09 and 0.56±0.06, respectively) than those for the non-pCR patients (0.89±0.11 and 0.68±0.08, respectively) (p &lt; 0.001). The ADCpost and the change ratio of apparent diffusion coefficient (ADCratio) values was significantly higher for the pCR patients (mean±SD, 1.31±0.13 and 0.64±0.34, respectively) than for the non-pCR patients (1.12±0.16 and 0.33±0.27, respectively) (p &lt; 0.001 and p = 0.001, respectively). In addition, the MDpost and the change ratio of mean diffusion (MDratio) (2.45±0.33 vs. 1.95±0.30, p &lt; 0.001; 0.80±0.43 vs. 0.35±0.32, p &lt; 0.001, respectively) also increased, whereas the ADCpre, MDpre and the change ratio of mean kurtosis (MKratio) of the pCR (0.82±0.11, 1.40±0.21, and 0.23±0.010, respectively) exhibited a neglectable difference with that of the non-pCR (p = 0.332, 0.269, and 0.678, respectively). The MKpost showed relatively high sensitivity (92.9%) and high specificity (83.3%) in comparison to other image indices. The area under the receiver operating characteristic curve (AUROC) that is available for the assessment of pCR using MKpost (0.908, cutoff value = 0.6196) were larger than other parameters and the overall accuracy of MKpost (85.7%) was the highest. Conclusions: Both DKI and conventional DWI hold great potential in predicting treatment response to neoadjuvant chemoradiation therapy in rectal cancer. The DKI parameters, especially MKpost, showed a higher specificity than conventional DWI in assessing pCR and non-pCR in patients with LARC, but the pre-CRT ADC and MD are unreliable. Answer: Yes, local excision after complete pathological response to neoadjuvant chemoradiation for rectal cancer can be an acceptable treatment option. A retrospective study found that nodal metastases were rare in patients with mural complete pathological response following neoadjuvant chemoradiation (3%), and local excision did not compromise their outcome. Disease-free survival, pelvic recurrence-free survival, and overall survival rates were similar in patients who underwent radical surgery and those who had local excision only, suggesting that local excision may be an acceptable option in these patients (PUBMED:21178856). Additionally, the management of rectal cancer has evolved with neoadjuvant chemoradiation therapy leading to reliable tumor down-staging and a significant percentage of patients achieving a pathological complete response (pCR). Patients who achieve pCR have better long-term outcomes and less risk of local or distal recurrence, which has led to suggestions for sphincter-preserving procedures or organ-preserving options such as local excision of residual tumor or even the omission of surgery altogether (PUBMED:26290512). Moreover, growing evidence has raised the issue of whether local excision or observation rather than radical surgery is an alternative for patients who achieve a clinical complete response after neoadjuvant therapy (PUBMED:29063019). However, the management of patients with a so-called near-complete clinical response to neoadjuvant therapy is controversial, and the primary goal should be local tumor control and long-term survival (PUBMED:29184476). In conclusion, while local excision after complete pathological response to neoadjuvant chemoradiation for rectal cancer is considered an acceptable treatment option for some patients, the decision should be individualized based on the patient's specific situation, tumor characteristics, and the potential risks and benefits of the procedure.
Instruction: Are physical activity, smoking and alcohol consumption associated with lower urinary tract symptoms in men or women? Abstracts: abstract_id: PUBMED:22704109 Are physical activity, smoking and alcohol consumption associated with lower urinary tract symptoms in men or women? Results from a population based observational study. Purpose: Lower urinary tract symptoms are highly prevalent and reduce quality of life. Lifestyle behaviors and the development of lower urinary tract symptoms have been largely unexamined. We investigated physical activity, smoking and alcohol drinking in relation to the development of lower urinary tract symptoms in men and women. Materials And Methods: Data were obtained from the BACH (Boston Area Community Health) Survey, a longitudinal observational study. Baseline (2002 to 2005) in-person interviews assessed activity, smoking and alcohol consumption. Five-year followup interviews (2006 to 2010 in 4,145) assessed new reports of moderate to severe lower urinary tract symptoms defined by the AUASI (AUA symptom index). Analysis was conducted using multivariable logistic regression. Results: Lower urinary tract symptoms developed in 7.7% and 12.7% of at-risk men and women, respectively. Women were 68% less likely to experience lower urinary tract symptoms (OR 0.32; 95% CI 0.17, 0.60; p &lt;0.001) if they had high vs low levels of physical activity. Although the association was similar among men, it was not statistically significant upon adjustment for medical or sociodemographic characteristics in the multivariable model. Women smokers were twice as likely to experience lower urinary tract symptoms, particularly storage symptoms (OR 2.15; 95% CI 1.30, 3.56; p = 0.003), compared to never smokers. Among men, smoking was not associated with lower urinary tract symptoms. Results for alcohol intake were inconsistent by intake level and symptom subtype. Conclusions: A low level of physical activity was associated with a 2 to 3 times greater likelihood of lower urinary tract symptoms. Smoking may contribute to the development of lower urinary tract symptoms in women but not in men. Clinicians should continue to promote physical activity and smoking cessation, noting the additional potential benefits of the prevention of lower urinary tract symptoms, particularly for women. abstract_id: PUBMED:15963125 Association of cigarette smoking, alcohol consumption and physical activity with lower urinary tract symptoms in older American men: findings from the third National Health And Nutrition Examination Survey. Objectives: To examine the association of cigarette smoking, alcohol consumption and physical activity with lower urinary tract symptoms (LUTS) in older men. Subjects And Methods: The study included 2797 men participating in the Third National Health and Nutrition Examination Survey (NHANES III), who were aged &gt; or = 60 years. During an interview, LUTS, smoking history, alcohol consumption and physical activity were assessed. Cases comprised men with at least three of the symptoms of nocturia, hesitancy, weak stream and incomplete emptying. Men who had had prostate surgery unrelated to cancer were not included as cases. Controls were men with no symptoms or surgery. We adjusted for age and race in logistic regression models and used sampling weights to account for selection probability. Results: Current cigarette smokers had no higher odds of LUTS than 'never' smokers, but former heavy smokers (&gt; or = 50 pack-years) had a higher odds of LUTS than never smokers (odds ratio 2.01; 95% confidence interval 1.04-3.89). Men who drank alcohol daily had a lower chance of LUTS than non-drinkers (0.59; 0.37-0.95; P trend, 0.07). All levels of moderate and vigorous activity were statistically significantly inversely associated with LUTS (P trend, 0.06), whereas men who reported no leisure-time physical activity had a greater odds of LUTS (2.06; 1.26-3.39). Conclusions: Moderate alcohol consumption and physical activity may be protective against LUTS. Current cigarette smoking was not consistently associated with the condition. The possible association in former smokers warrants further investigation. abstract_id: PUBMED:9827786 Physical activity and benign prostatic hyperplasia. Background: Benign prostatic hyperplasia (BPH) leading to prostatic enlargement and lower urinary tract symptoms is highly prevalent among older men. Sympathetic nervous system activity, which is decreased by physical activity, is associated with increased prostatic smooth-muscle tone and prostatic symptoms. Therefore, we assessed whether physical activity leads to fewer lower urinary tract symptoms in the Health Professionals Follow-up Study. Methods: We observed men who were aged 40 to 75 years at baseline in 1986 for subsequent incidence of surgery for BPH. The men were free of diagnosed cancer, including prostate cancer at baseline and during follow-up, had not had a radical prostatectomy, and provided data on physical activity. Cases were men who under-went BPH surgery between 1986 and 1994 (n = 1890) or, among those who did not have surgery, who scored 15 or more points of 35 (n = 1853) on 7 questions about lower urinary tract symptoms modified from the American Urological Association Symptom Index. Noncases were men who scored 7 points or less (n = 21745). Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated from multiple logistic regression models. Results: After controlling for age, race or ethnicity, alcohol consumption, and smoking, physical activity was inversely related with total BPH (extreme quintiles: OR, 0.75; 95% CI, 0.67-0.85; P for trend, &lt;.001), surgery for BPH (OR, 0.76; 95% CI, 0.64-0.90; P for trend, &lt;.001), and symptomatic BPH (OR, 0.75; 95% CI, 0.64-0.87; P for trend, &lt;.001). Walking, the most prevalent activity, was inversely related to BPH risk; men who walked 2 to 3 h/wk had a 25% lower risk of total BPH. Conclusion: Our results indicate that more physically active men have a lower frequency of lower urinary tract symptoms. abstract_id: PUBMED:18802926 Prevalence of and risk factors for prostatitis in African American men: the Flint Men's Health Study. Introduction: Prostatitis is a common, yet ill-defined condition without clear diagnostic criteria and treatment strategies. Previous studies examining the prevalence and correlates of prostatitis are limited in their inclusion of primarily white populations. The objective of the current study was to identify prevalence of and risk factors for prostatitis in a population-based sample of African-American men. Methods: In 1996, a probability sample of 703 African-American men, aged 40-79, residing in Genesee County, Michigan without a prior history of prostate cancer/surgery provided responses to a structured interview-administered questionnaire which elicited information regarding sociodemographics, current stress and health ratings, and past medical history, including history of physician diagnosed prostatitis, BPH and sexually transmitted diseases. Logistic regression was used to identify predictors of prostatitis after adjustment for age. Results: Forty-seven (6.7%) of the 703 men reported a history of prostatitis. Increased frequency of sexual activity and physical activity were significantly associated with decreased odds of disease. Moderate to severe lower urinary tract symptoms (LUTS) and a history of BPH were significantly associated with prostatitis after adjustment for age. Conclusion: After adjustment for age, LUTS severity and history of BPH were associated with increased odds of prostatitis. BMI, physical activity and sexual frequency were associated with decreased odds of prostatitis. Finally, poor emotional and physical health, high perceived stress and low social support were associated with an increased risk of prostatitis history. Importantly, these findings suggest that the primary risk factors for this condition are largely modifiable and highlight potential targets for future prevention. abstract_id: PUBMED:20181284 Determinants of male reproductive health disorders: the Men in Australia Telephone Survey (MATeS). Background: The relationship between reproductive health disorders and lifestyle factors in middle-aged and older men is not clear. The aim of this study is to describe lifestyle and biomedical associations as possible causes of erectile dysfunction (ED), prostate disease (PD), lower urinary tract symptoms (LUTS) and perceived symptoms of androgen deficiency (pAD) in a representative population of middle-aged and older men, using the Men in Australia Telephone Survey (MATeS). Methods: A representative sample (n = 5990) of men aged 40+ years, stratified by age and State, was contacted by random selection of households, with an individual response rate of 78%. All men participated in a 20-minute computer-assisted telephone interview exploring general and reproductive health. Associations between male reproductive health disorders and lifestyle and biomedical factors were analysed using multivariate logistic regression (odds ratio [95% confidence interval]). Variables studied included age, body mass index, waist circumference, smoking, alcohol consumption, physical activity, co-morbid disease and medication use for hypertension, high cholesterol and symptoms of depression. Results: Controlling for age and a range of lifestyle and co-morbid exposures, sedentary lifestyle and being underweight was associated with an increased likelihood of ED (1.4 [1.1-1.8]; 2.9 [1.5-5.8], respectively) and pAD (1.3 [1.1-1.7]; 2.7 [1.4-5.0], respectively. Diabetes and cardiovascular disease were both associated with ED, with hypertension strongly associated with LUTS and pAD. Current smoking (inverse association) and depressive symptomatology were the only variables independently associated with PD. All reproductive disorders showed consistent associations with depression (measured either by depressive symptomatology or medication use) in both age-adjusted and multivariate analyses. Conclusion: A range of lifestyle factors, more often associated with chronic disease, were significantly associated with male reproductive health disorders. Education strategies directed to improving general health may also confer benefits to male reproductive health. abstract_id: PUBMED:9921955 Alcohol consumption, cigarette smoking, and risk of benign prostatic hyperplasia. Alcohol consumption and cigarette smoking were evaluated in relation to development of benign prostatic hyperplasia (BPH) among 29,386 members of the Health Professionals Follow-up Study. Men who were 40-75 years old in 1986 and free of prior BPH surgery, diagnosed cancer at baseline, and prostate cancer at baseline and during follow-up were followed for incidence of BPH surgery from 1986 to 1994. Cases were men who reported BPH surgery between 1986 and 1994 (n = 1,813) or who scored &gt; or = 15 points of 35 on seven lower urinary tract symptom questions modified from the American Urological Association symptom index in 1992 and 1994 (n = 1,786); noncases were men who scored &lt; or = 7 points (n = 20,840). After controlling for age, race/ethnicity, body mass index, physical activity, and mutually for alcohol intake and smoking, moderate alcohol consumption was inversely related with total BPH (30.1-50 g/day vs. 0: odds ratio (OR) = 0.59, 95% confidence interval (CI) 0.51-0.70; p trend &lt; 0.0001), although the relation was attenuated at high intake (&gt; or = 50.1 g/day vs. 0: OR = 0.72, 95% CI 0.57-0.90). Current cigarette smoking was positively related to total BPH only among those who smoked 35 or more cigarettes/day (compared with never smokers: OR = 1.45, 95% CI 1.07-1.97). These findings suggest that moderate alcohol consumption and avoidance of smoking may benefit BPH. abstract_id: PUBMED:34825923 Overactive bladder syndrome in nulliparous female university students: prevalence and risk factors including waterpipe smoking. Introduction And Hypothesis: The objective was to assess the prevalence of symptoms of overactive bladder syndrome (OAB) among healthy nulliparous female university students, and to evaluate the correlation of these symptoms with a variety of factors, including waterpipe (WP) smoking. Methods: This is a cross-sectional study. A questionnaire was administered to evaluate symptoms of OAB in healthy nulliparous female university students. Variables assessed included body mass index (BMI), cigarette smoking, WP smoking, consumption of alcohol, coffee, and tea with and without artificial sweeteners, soft drinks, and energy drinks. Adjusted odds ratio were calculated to determine the correlation of these variables with OAB symptoms. Results: A total of 767 out of 2,900 females responded to the questionnaire. Bothersome frequency was reported in 32.3%, and nocturia in 47.5% of the women. Urgency and urgency urinary incontinence (UUI) were present in 25.5 and 24.7% of the study participants respectively. Urgency was associated with WP smoking (p value 0.048). Bothersome frequency was associated with artificial sweeteners used with coffee and tea (p value 0.013). UUI was associated with cigarette smoking (p value 0.034) and elevated BMI (p value &lt; 0.001). OAB symptoms were not found to be significantly associated with soft drink and energy drink consumption. A lower prevalence of nocturia (p value 0.009) and urgency was associated with alcohol consumption (p value 0.017). More than two-thirds (69.2%) of WP smokers expressed readiness to decrease WP smoking if this would improve their lower urinary tract symptoms (LUTS). Conclusion: Overactive bladder is common in healthy young nulliparous women and is associated with multiple risk factors, including WP smoking. abstract_id: PUBMED:29940234 Lifestyle and Progression of Lower Urinary Tract Symptoms in German Men-Results From the EPIC-Heidelberg Cohort. Objective: To examine if lower urinary tract symptom (LUTS) progression was related to anthropometric and lifestyle factors. Material And Methods: The analysis included 5495 men who participated in the EPIC-Heidelberg cohort (recruited 1994-1998) and who reported an International Prostate Symptom Score &lt; 8 at follow-up 4 (FUP4, 2007-2009), had not reported taking α-adrenoreceptor antagonists or 5-α reductase inhibitors or prostate surgery for benign prostatic hyperplasia/LUTS treatment. LUTS progression was defined as an International Prostate Symptom Score ≥ 8 at FUP5 (2010-2012). Using logistic regression analysis, education, marital status, satisfaction with life, satisfaction with health, history of diabetes and of hypertension, smoking, alcohol consumption, body mass index (BMI), waist circumference, and physical activity were examined as potential LUTS risk factors adjusting for age. Results: Increase in BMI between baseline and FUP4 of ≥ 2 BMI units was related to LUTS progression (odds ratio 1.30, 95% confidence interval 1.08-1.57) compared with stable BMI. Compared to men who were very satisfied with life at baseline, those who were satisfied (1.28, 1.11-1.47), unsatisfied (1.80, 1.31-2.46) or very unsatisfied with life (1.43, 0.62-3.34) were more likely to report LUTS progression. Men with longer education had higher odds of LUTS progression than men with primary education only (1.25, 1.06-1.48). Adjusting for BMI or lifestyle factors did not attenuate these associations. Smoking habits, alcohol consumption, physical activity, self-reported history of diabetes or hypertension, and marital status were not related with LUTS progression. Conclusion: Our results confirm some, but not all previously observed risk factors for LUTS progression. abstract_id: PUBMED:31260612 Do Lifestyle Factors Affect Lower Urinary Tract Symptoms? Results from the Korean Community Health Survey. Purpose: To evaluate the relationships between lower urinary tract symptoms (LUTS) and lifestyle factors (physical activity, smoking, alcohol consumption, body mass index, and stress) in Korean men. Methods: We analyzed the survey results of South Korean men (n=64,439) who were 40 years of age or older among whom interviews were conducted using questionnaires. Trained interviewers performed face-to-face surveys using computer-assisted personal interviewing, the International Prostate Symptom Score (IPSS), and standard questions. We assessed the relationships of lifestyle factors (physical activity, cigarette smoking, alcohol intake, height, weight, and stress) with LUTS. Results: We observed higher IPSS scores in participants who engaged in no exercise (n=46,008 [71.7%], IPSS=3.19±5.36) than in those who engaged in vigorous physical activity (n=10,657 [17.6%], IPSS=2.28±4.15). Former smokers showed higher total, storage, and voiding symptom IPSS scores than current smokers and nonsmokers. Nondrinkers had higher IPSS scores than current alcohol drinkers. In multivariable logistic regression analysis, we detected no relationship between LUTS and current alcohol drinking in the moderate and severe LUTS groups, using the mild LUTS group as a reference (moderate: 0.91; 95% confidence interval [CI], 0.91-0.91; P&lt;0.001, severe: 0.78; 95% CI, 0.78-0.78; P&lt;0.001). Participants with moderate to severe stress showed higher total IPSS scores than those with no or mild stress (3.38±5.77 vs. 2.88±4.90), with significant relationships between stress and LUTS found in the moderate and severe LUTS groups. In logistic regression analysis, stronger relationships were found for storage and voiding symptoms in the moderate and severe stress groups compared to the mild stress group. Conclusion: A history of smoking, low levels of physical activity, low body mass index, and moderate to severe stress were associated with a greater severity of LUTS. Moderate to severe stress was also related to voiding symptoms. However, there was no association between alcohol intake and LUTS severity. abstract_id: PUBMED:9554358 Association of smoking with lower urinary tract symptoms. Purpose: We studied the association of smoking with lower urinary tract symptoms. Materials And Methods: In 1994 we performed a population based study by mailing a questionnaire to all 3,143 men born in 1924, 1934 or 1944 who resided in Tampere, or in 11 rural or semirural municipalities in the same county. Of this population 68% were ultimately included in the study. A modified Danish Prostate Symptom Score-1 was used to assess urinary symptoms and associated bothersomeness. A symptom index was created by multiplying the symptom and bothersomeness scores of hesitancy, incomplete emptying, urge, urge incontinence, nocturia and daytime frequency, and totaling the products. The index for lower urinary tract symptoms was defined as positive when it reached 7 points. Subjects were also asked whether they had smoked for at least a year, and they were defined as smoking currently, formerly and never according to the response. Results: Compared with respondents who never smoked age adjusted odds ratios were 1.47 (95% confidence interval 1.09 to 1.98) and 1.38 (1.08 to 1.78), respectively, for those who currently and formerly smoked. After further adjusting for alcohol consumption, body mass index, previous prostate surgery, pelvic area surgery, prostate cancer and bladder cancer, the odds ratios for current and former smokers were 1.39 (95% confidence interval 1.02 to 1.93) and 1.34 (1.03 to 1.75), respectively. Conclusions: Smoking increases the prevalence of lower urinary tract symptoms. The similarity in the odds ratios of these symptoms between current and former smokers suggests that changes caused by smoking occur long term or the pathological process resulting in symptoms starts early in smokers. The decreased risk of lower urinary tract symptoms after the cessation of smoking suggests that the process is reversible but recovery is a long-term process. Answer: Physical activity, smoking, and alcohol consumption have been studied for their associations with lower urinary tract symptoms (LUTS) in both men and women, with varying results. Physical activity has been consistently associated with a lower risk of LUTS. Women with high levels of physical activity were 68% less likely to experience LUTS compared to those with low levels of activity, and although a similar association was observed in men, it was not statistically significant after adjusting for medical or sociodemographic characteristics (PUBMED:22704109). Other studies have also found that physical activity is inversely related to LUTS and benign prostatic hyperplasia (BPH) in men (PUBMED:9827786, PUBMED:15963125). Increased physical activity was associated with decreased odds of prostatitis in African American men (PUBMED:18802926) and was inversely associated with erectile dysfunction and perceived symptoms of androgen deficiency in middle-aged and older men (PUBMED:20181284). Smoking has been shown to have different effects on men and women. Women smokers were twice as likely to experience LUTS, particularly storage symptoms, compared to never smokers, while smoking was not associated with LUTS in men (PUBMED:22704109). However, other studies have found that former heavy smokers had a higher odds of LUTS than never smokers in older men (PUBMED:15963125), and current cigarette smoking was positively related to total BPH only among those who smoked 35 or more cigarettes/day (PUBMED:9921955). Smoking was also associated with a higher prevalence of LUTS in a Finnish male population (PUBMED:9554358). Alcohol consumption showed inconsistent results. Moderate alcohol consumption was associated with a lower chance of LUTS in older men (PUBMED:15963125), and moderate alcohol intake was inversely related to BPH (PUBMED:9921955). However, results for alcohol intake were inconsistent by intake level and symptom subtype in women (PUBMED:22704109), and no association was found between alcohol intake and LUTS severity in Korean men (PUBMED:31260612). In summary, physical activity appears to be protective against LUTS in both men and women, while smoking may increase the risk of LUTS, particularly in women and heavy or long-term male smokers. The relationship between The relationship between alcohol consumption and LUTS is less clear, with some studies suggesting a protective effect of moderate alcohol consumption, especially in men, while others show inconsistent results. It's important to note that these associations can vary based on the population studied and the specific urinary symptoms in question. Overall, these findings suggest that lifestyle factors do play a role in the development and progression of LUTS, and that interventions targeting physical activity and smoking cessation could potentially benefit urinary tract health in both men and women.
Instruction: Needle biopsy through the abdominal wall for the diagnosis of gastrointestinal stromal tumour - Does it increase the risk for tumour cell seeding and recurrence? Abstracts: abstract_id: PUBMED:27033260 Needle biopsy through the abdominal wall for the diagnosis of gastrointestinal stromal tumour - Does it increase the risk for tumour cell seeding and recurrence? Purpose: Preoperative percutaneous transabdominal wall biopsy may be considered to diagnose gastrointestinal stromal tumour (GIST) and plan preoperative treatment with tyrosine kinase inhibitors when an endoscopic biopsy is not possible. Hypothetically, a transabdominal wall biopsy might lead to cell seeding and conversion of a local GIST to a disseminated one. We investigated the influence of preoperative needle biopsy on survival outcomes. Methods: We collected the clinical data from hospital case records of the 397 patients who participated in the Scandinavian Sarcoma Group (SSG) XVIII/Arbeitsgemeinschaft Internistische Onkologie (AIO) randomised trial and who had a transabdominal fine needle and/or core needle biopsy carried out prior to study entry. The SSG XVIII/AIO trial compared 1 and 3 years of adjuvant imatinib in a patient population with a high risk of GIST recurrence after macroscopically radical surgery. The primary end-point was recurrence-free survival (RFS), and the secondary end-points included overall survival (OS). Results: A total of 47 (12.0%) out of the 393 patients with data available underwent a percutaneous biopsy. No significant difference in RFS or OS was found between the patients who underwent or did not undergo a percutaneous biopsy either in the entire series or in subpopulation analyses, except for a statistically significant RFS advantage for patients who had a percutaneous biopsy and a tumour ≥10 cm in diameter. Conclusion: A preoperative diagnostic percutaneous biopsy of a suspected GIST may not increase the risk for GIST recurrence in a patient population who receive adjuvant imatinib after the biopsy. abstract_id: PUBMED:27859013 Needle tract seeding after percutaneous biopsy of sarcoma: Risk/benefit considerations. To evaluate histologic subtype and grade, which in turn guide the decision making for multimodality therapy, the workup of suspected sarcoma requires more material than can be obtained from a fine-needle aspiration. Either open or percutaneous core needle biopsy is indicated before a management decision is made. Percutaneous biopsy of a potentially malignant lesion is controversial, given the perceived potential for tumor seeding along the needle tract. However, the evidence that the latter is a significant risk is weak at best. To the authors' knowledge, among cases of patients with extremity sarcoma who have undergone core needle biopsy, only a few cases of needle tract seeding have been reported to date. Although en bloc excision of the needle tract with the primary tumor is often performed, this practice is not associated with improved oncologic outcomes; the evidence for excision of the needle tract is poor. For patients with gastrointestinal stromal tumors, there is a theoretical risk of peritoneal dissemination after percutaneous biopsy, but to the authors' knowledge this remains unproven. Although endoscopic ultrasound is the preferred route for biopsy among patients with gastrointestinal stromal tumors, percutaneous biopsy is indicated if endoscopic ultrasound is unsuitable or unavailable. In the setting of retroperitoneal sarcoma, a review of pooled data from 4 large tertiary care referral centers demonstrated a risk of needle tract seeding of 0.37%. The authors concluded that the benefits of pretreatment biopsy in patients with mesenchymal tumors outweigh the potential risks of needle tract seeding. Cancer 2017;123:560-567. © 2016 American Cancer Society. abstract_id: PUBMED:35597932 Needle tract seeding and abdominal recurrence following pre-treatment biopsy of gastrointestinal stromal tumors (GIST): results of a systematic review. Background: Gastrointestinal stromal tumors (GIST) are rare abdominal tumors. Pretreatment biopsies may be used to diagnose a GIST and enable tailored treatment. Some experts are skeptical about biopsies because they fear tumor cell seeding. The objective of this study was to determine if pretreatment biopsy is associated with increased tumor recurrence. Methods: We performed a systematic literature search and included studies assessing the oncological outcome of GIST patients who underwent a pre-treatment core needle biopsy or fine needle aspiration. We assessed methodological quality with the Newcastle-Ottawa-Scale for non-randomized studies. This review was registered in the PROSPERO database (CRD42021170290). Results: Three non-randomized studies and eight case reports comprising 350 patients were eligible for inclusion. No prospective study designed to answer the review question was found. One case of needle tract seeding after percutaneous core needle biopsy of GIST was reported. None of the studies reported an increased rate of abdominal recurrence in patients with pretreatment biopsy. Conclusions: The existing evidence does not indicate a relevant risk of needle tract seeding or abdominal recurrence after pre-treatment biopsy of GIST. Biopsy can safely be done to differentiate GIST from other tumors and to select the most appropriate treatment. abstract_id: PUBMED:31850986 Oncological Outcome After Diagnostic Biopsies in Gastrointestinal Stromal Tumors: A Retrospective Cohort Study. Objective: To analyze whether the route of preoperative biopsy influences oncological outcome in GIST patients. Summary Of Background Data: Preoperative biopsies are widely used for diagnosing GIST. Little is known about the risk of tumor seeding after different routes of biopsy. Methods: Patients who underwent resection of a primary GIST between 1996 and 2014 were identified from 2 databases from 2 tertiary referral centers. Survival data were obtained using the Kaplan-Meier method. Possible confounders were identified using Cox regression analysis. The primary endpoint was local recurrence free survival (RFS) and the secondary endpoint was DSS. Results: A total of 228 patients were included, with a median age of 62 years (range 17-86) and a median follow-up time of 53 months (range 1-204). From these patients, 42 patients did not have a biopsy (18%), 70 underwent a transcutaneous biopsy (31%), and 116 a transluminal biopsy (51%). A total of 42 patients (19.0%) had a local and/or distant recurrence. From the 70 patients with a transcutaneous biopsy, only 1 patient developed a needle tract recurrence (1.4%). Local RFS and DSS were both significantly shorter in the transcutaneous biopsy group on univariate analysis compared to the other groups; however, in multivariate analysis the route of biopsy did not influence local RFS (P = 0.128) or DSS (P = 0.096). Conclusions: Transluminal or transcutaneous biopsies for diagnosing GIST do not significantly alter the risk of local recurrent disease or DSS in multivariate Cox regressions. The risk of needle tract seeding after transcutaneous biopsy was low. abstract_id: PUBMED:25506849 Repeated recurrence of a gastric gastrointestinal stromal tumor on the chest wall after initial curative resection: Report of a case. Introduction: Extra-abdominal recurrence or metastasis of a gastrointestinal stromal tumor (GIST) is very rare. Chest wall recurrence of a resected gastric GIST is extremely rare. Presentation Of Case: A 64-year-old Japanese man had undergone proximal gastrectomy for a gastric submucosal tumor 11 years previously. The histopathological diagnosis was GIST (size, 8cm). He did not receive adjuvant therapy, and underwent imaging evaluations every 6 months for the first 5 years after surgery and then annually. He was admitted to our hospital because of a lump on his right anterior chest wall 7 years after curative resection. We resected the tumor, and histopathologic findings revealed metastatic GIST. Four years after metastasectomy, another lump appeared at a different location on the right anterior chest wall. The patient was diagnosed with a second recurrence of gastric GIST and began adjuvant treatment with imatinib after second resection. He has remained alive without tumor recurrence for 2 years. Discussion: Most recurrences were predominantly found in the intra-abdominal cavity, either locally or involving the liver or peritoneum. Extra-abdominal recurrence was much less common. Although we assume that the recurrent tumor of our patient was derived from his gastric GIST, based on the histopathological examinations and clinical course, it is possible that the recurrent tumor of our case was an "extragastrointestinal GIST". Conclusion: Because extra-abdominal recurrence can occur many years after curative resection, continued, careful whole-body follow-up is required for patients with high-risk GIST. abstract_id: PUBMED:33833488 Transperineal core-needle biopsy of a rectal subepithelial lesion guided by endorectal ultrasound after contrast-enhanced ultrasound: A case report. Background: Rectal subepithelial lesions (SELs) are commonly seen in endoscopic examination, generally manifested as bumps with a smooth surface. Precise preoperative diagnoses for rectal SELs are difficult because abnormal tissues are not easily to be obtained by regular endoscopic forceps biopsy. Traditional guidance modalities of preoperative biopsy, including endoscopic ultrasound, computed tomography, and transabdominal ultrasound, are often unsatisfactory. An updated, safe, and effective biopsy guidance method is required. We herein report a new biopsy guidance modality-endorectal ultrasound (ERUS) combined with contrast-enhanced ultrasound (CEUS). Case Summary: A 32-year-old woman complained of a mass inside the rectovaginal space for 9 years, which became enlarged within 1 year. A rectal SEL detected by endoscopy was suspected to be a gastrointestinal stromal tumor or exophytic uterine fibroid. Pathological diagnosis was difficult because of unsuccessful transabdominal core needle biopsy with insufficient tissues, as well as vaginal hemorrhage. A second biopsy was suggested after multiple disciplinary treatment discussion, which referred to a transperineal core needle biopsy (CNB) guided by ERUS combined with CEUS. Adequate samples were procured and rectal gastrointestinal stromal tumor was proved to be the pathological diagnosis. Imatinib was recommended for first-line therapy by multiple disciplinary treatment discussion. After the tumor shrunk, resection of the rectal gastrointestinal stromal tumor was performed through the posterior vaginal wall. Adjuvant therapy was applied and no recurrence or metastasis has been found by the last follow-up on December 13, 2019. Conclusion: Transperineal CNB guided by ERUS and CEUS is a safe and effective preoperative biopsy of rectal SELs yet with some limitations. abstract_id: PUBMED:24598097 A case of calcifying fibrous tumor in the abdominal wall, morphologically resembling a gastric submucosal tumor We report an extremely rare case of a calcifying fibrous tumor (CFT) occurring in the abdominal wall. A 43-year-old woman was referred to our hospital because of a gastric submucosal tumor. Imaging findings revealed a gastric submucosal tumor with calcification. Endoscopic ultrasound-guided fine-needle aspiration was attempted to obtain a tissue specimen; however, the tumor was not adequately visualized because of calcification. Local excision was performed because of the tumor growth and the risk of progression to gastrointestinal stromal tumor. The tumor was histologically diagnosed as a CFT. We describe the imaging and pathological findings, and discuss the differential diagnosis and histogenesis of CFT. abstract_id: PUBMED:29529541 Synchronous gastric leiomyoma and intramuscular abdominal wall granular cell tumor with similar imaging features: A case report. Introduction: Gastric leiomyomas are benign mesenchymal tumors, comprising about 2.5% of gastric neoplasms, which can be difficult to differentiate from gastrointestinal stromal tumors which have malignant potential. Granular cell tumors in the abdominal wall are also rare. Since mesenchymal tumors are difficult to diagnose by imaging, further studies are needed to establish the diagnosis. Presentation Of Case: A 60-year-old asymptomatic woman underwent routine upper endoscopy and was found to have a gastric submucosal lesion. Computed tomography scan also showed an abdominal wall mass. The appearance of both lesions on imaging studies were similar, but it was unclear if the two lesions had the same origin. Endoscopic ultrasound-guided fine needle aspiration biopsy of the gastric lesion was insufficient to establish the diagnosis. Laparoscopic-endoscopic cooperative resection of the gastric lesion and ultrasound-guided core-needle biopsy of the abdominal wall mass enabled pathological diagnosis of both lesions. Discussion: Diagnostic imaging findings of these two lesions were similar. Histologic and immunohistochemical studies are essential to establish a definitive diagnosis. Laparoscopic-endoscopic cooperative surgery may be an effective minimally invasive approach, allowing both pathological diagnosis and complete resection of a gastric submucosal tumor, especially when endoscopic-ultrasound guided fine needle aspiration or biopsy fails to make the diagnosis. Conclusion: Laparoscopic-endoscopic cooperative surgery can be an effective minimally invasive approach to resect some lesions. This is first report of the patient with a synchronous gastric leiomyoma and an intramuscular granular cell tumor in the abdominal wall. abstract_id: PUBMED:26306291 Abdominal wall recurrence of a gastrointestinal stromal tumor: case report. Introduction: The gastrointestinal stromal tumors (GIST) are mesenchymal tumors, most commonly affecting the stomach and small bowel. Only few cases of port-site recurrence after laparoscopic treatment have been reported. We herein report the case of a parietal recurrence on the extraction incision site, 7 years after laparoscopic surgery for small bowel GIST. Case Report: A 47 years-old female patient was hospitalized in November 2007 for isolated pelvic pain. CT scan showed an intestinal tumor with a benign aspect measuring 50 mm. A laparoscopy-assisted resection was performed. Surgical exploration found a 7 cm small bowel tumor. It was extracted through a supra-pubic transversal incision without a wound protector and then resected. Histologic analysis revealed an intestinal GIST with high aggressive potential (five mitosis per field), with CD117 positive at the immunohistochemical examination. The patient had no adjuvant chemotherapy. Seven years later, the patient was readmitted for an abdominal mass at the site of the supra-pubic scar. Abdomino-pelvic CT scan showed a 10 × 7.5 cm solid mass of the abdominal wall. Percutaneous biopsies were done and the pathological analysis revealed a mesenchymal-cell tumor, positive to CD117 and DOG1 at the immunohistochemical examination. Final diagnosis was abdominal wall recurrence of GIST secondary to tumor-contamination during the first surgery. Conclusion: Abdominal wall recurrence of GIST after laparoscopic surgery is rarely reported. This complication should be avoided with preventive measures such as the use of extraction bags or wound protectors. abstract_id: PUBMED:30171772 Impact of endoscopic ultrasound-guided fine-needle biopsy on the diagnosis of subepithelial tumors: A propensity score-matching analysis. Background And Aim: Immunohistological evaluations are essential for diagnosing subepithelial tumors (SET). However, endoscopic ultrasound-guided sampling using fine-needle aspiration (FNA) needles is limited in its ability to procure core tissue for immunostaining. Fine-needle biopsy (FNB) needles may mitigate this limitation. The present study aimed to examine the efficacy of FNB needles for procuring samples that enable the diagnosis of SET. Methods: One hundred sixty patients were included in the study and separated into those whose samples were obtained using FNB needles (FNB group) and those whose samples were procured using FNA needles (FNA group). Groups were compared regarding the conclusive diagnosis rate and unwarranted resection rate. Propensity score matching was introduced to reduce selection bias. Results: Rates at which conclusive diagnoses were reached through adequate immunohistological evaluations were 82% and 60% in the FNB and FNA groups, respectively; this difference was significant (P = 0.013). Unwarranted resection rate was significantly lower in the FNB group (2%) than in the FNA group (14%; P = 0.032). Multivariate analyses showed that lesions ≤20 mm were a significant risk factor for lower conclusive diagnosis rates following the use of FNB needles (P = 0.017). Conclusions: Fine-needle biopsy needles can be useful for obtaining samples that facilitate the diagnosis of SET and for avoiding unwarranted resections. However, FNB needles may be less advantageous for small SET. Answer: The evidence from the provided abstracts suggests that needle biopsy through the abdominal wall for the diagnosis of gastrointestinal stromal tumors (GIST) does not significantly increase the risk for tumor cell seeding and recurrence. A study investigating the influence of preoperative needle biopsy on survival outcomes in patients who participated in the Scandinavian Sarcoma Group XVIII/Arbeitsgemeinschaft Internistische Onkologie trial found no significant difference in recurrence-free survival (RFS) or overall survival (OS) between patients who underwent a percutaneous biopsy and those who did not. The only exception was a statistically significant RFS advantage for patients who had a percutaneous biopsy and a tumor ≥10 cm in diameter (PUBMED:27033260). Another study that reviewed pooled data from four large tertiary care referral centers reported a risk of needle tract seeding of 0.37% in the setting of retroperitoneal sarcoma, concluding that the benefits of pretreatment biopsy outweigh the potential risks of needle tract seeding (PUBMED:27859013). Similarly, a systematic review found only one case of needle tract seeding after percutaneous core needle biopsy of GIST among 350 patients, with no studies reporting an increased rate of abdominal recurrence in patients with pretreatment biopsy (PUBMED:35597932). A retrospective cohort study also indicated that the route of preoperative biopsy (transluminal or transcutaneous) does not significantly alter the risk of local recurrent disease or disease-specific survival (DSS) in multivariate Cox regressions, and the risk of needle tract seeding after transcutaneous biopsy was low (PUBMED:31850986). In summary, the current evidence from these studies suggests that needle biopsy through the abdominal wall for the diagnosis of GIST may not significantly increase the risk of tumor cell seeding and recurrence, and can be safely performed to differentiate GIST from other tumors and to select the most appropriate treatment.
Instruction: Can the addition of regional radiotherapy counterbalance important risk factors in breast cancer patients with extracapsular invasion of axillary lymph node metastases? Abstracts: abstract_id: PUBMED:14566473 Can the addition of regional radiotherapy counterbalance important risk factors in breast cancer patients with extracapsular invasion of axillary lymph node metastases? Purpose: To evaluate if locoregional radiotherapy (RT) versus local irradiation only can alter the pattern of failure in breast cancer patients with extranodal invasion. Patients And Methods: From 08/1988 to 06/1998, 81 patients with extranodal invasion were treated with adjuvant RT (median total dose: 50.4 Gy), 46/81 only locally, 35/81 loco regionally due to presumed adverse parameters. The mean number of resected (positive) lymph nodes was 17 (seven). 78 patients received adjuvant systemic treatment(s). Results: Patients treated with locoregional RT had significantly more often lymphatic vessel invasion (LVI; 63% vs. 28%; p = 0.003), T3/T4 tumors (43% vs. 17%; p = 0.014), and four or more positive lymph nodes (91% vs. 46%; p &lt; 0.001) than patients irradiated only locally. Disease progression occurred in 24/81 patients (locoregional RT: 26% vs. local RT: 33%). The above risk factors were highly significant of worse outcome. Despite their overrepresentation in the locoregional RT group, no difference was found between both groups in regard to disease-free survival (DFS; p = 0.83) and overall survival (OS; p = 0.56), suggesting that regional RT was able to counterbalance the increased risk. There was even a trend toward a better 3-year DFS, 61% in locoregional RT and 37% in local RT, in the subgroup of patients with four or more positive lymph nodes. In a Cox regression model, higher T-stage, four or more positive lymph nodes, and LVI remained significant. For DFS and distant metastasis-free survival (DMFS), the absence of estrogen receptors and the omission of regional RT were also significant. Conclusion: Our data suggest that the addition of regional RT might be beneficial in selected subgroups of patients with extranodal invasion and other poor prognostic factors. abstract_id: PUBMED:30288496 The Significance of Extent of Extracapsular Extension in Patients with T1-2 and N1 Breast Cancer. Objective: The prognostic importance of extracapsular extension (ECE) in breast cancer is not yet clear, especially in patients with pathological T1-2 and N1 (pT1-2N1) disease. We aimed to investigate whether the extent of ECE was an independent prognostic factor for survival outcomes in patients with pT1-2N1 breast cancer. Materials And Methods: A total number of 131 patients with pT1-2N1 breast cancer treated between 2009 and 2015 were retrospectively evaluated. A single pathologist re-analyzed the histologic examples of all cases. The extent of ECE was graded from 0 to 4. Results: There was a significant correlation between the number of lymph nodes involved and ECE grade (p=0.004). According to the univariate analysis, lymphovascular invasion (LVI) and ECE grade were the significant prognostic factors for overall survival (OS); age, number of metastatic lymph nodes, menopausal status, and ECE grade were the prognostic factors for disease-free survival (DFS). With a median follow-up of 46 months, grade 3-4 ECE seems to be notably associated with a shorter OS and DFS in multivariate analysis. The mean OS was 85 months for the patients with grade 0-2 ECE vs 75 months for the patients with grade 3-4 ECE (p=0.003). The mean DFS was 83 months for the patients with grade 0-3 ECE vs 68 months for the patients with grade 4 ECE (p=&lt;0.0001). Conclusion: This research has shown that the extent of ECE is an important prognostic factor for survival in pT1-2N1 breast cancer patients and grade 3-4 ECE seems to be notably associated with a shorter OS and DFS. abstract_id: PUBMED:9231679 Extracapsular axillary node extension in patients receiving adjuvant systemic therapy: an indication for radiotherapy? Purpose: This is a retrospective review into the patterns of failure of 82 patients with Stage II or III breast cancer who had extracapsular extension (ECE) of axillary nodal metastases and who received systemic chemotherapy or hormonal therapy without loco-regional radiation. Methods And Materials: The clinical records of patients with axillary node positive (T1-T3, N1, 2) Stage II or III breast cancer seen at the London Regional Cancer Centre between 1980-1989 were reviewed. Patients were identified who underwent segmental mastectomy with axillary node dissection or modified radical mastectomy and received adjuvant chemotherapy or tamoxifen but did not undergo loco-regional radiation. Eighty-two patients within this group had pathologic evidence of extracapsular axillary node extension (ECE). For 45 of these patients the extension was extensive, and for the remaining 37 it was microscopic. This ECE-positive group was compared to a subgroup of 172 patients who did not have pathologic evidence of extracapsular axillary node extension but had metastatic carcinoma confined within the nodal capsule. Results: Median age of the 82 ECE-positive patients was 56 years. Twenty-five patients had had a segmental mastectomy, the remainder a modified radical mastectomy. Median actuarial survival was 60 months, with a median disease-free and loco-regional failure-free survival of 38 months. Seventy-eight percent of these patients developed a recurrence, which was loco-regional in 60% (21% local, 21% regional, 2% local and regional, and 16% loco-regional and metastatic). There was a 36% recurrence rate in intact breast, 14% the chest wall following modified radical mastectomy, 7% relapsed in the axilla, 12% in supraclavicular nodes, and 1% in the internal mammary nodes. A comparison of the 82 ECE-positive patients with a group of 172 ECE-negative patients determined that there was a statistically significant difference between the two groups in terms of survival (overall and disease-free) and loco-regional recurrence. Univariate analysis of the entire 254 node-positive patient group revealed extracapsular nodal extension (ECE) to be a prognostically significant factor for actuarial and disease-free survival as well as for loco-regional failure, but ECE did not remain an independently prognostic factor after multivariate analysis. Segmental mastectomy, positive resection margins, and ER negative status increased the risk of loco-regional recurrence within the ECE-positive group. Conclusions: Extracapsular axillary node extension is a prognostically significant factor for actuarial survival, disease-free survival, and loco-regional failure but not independent of other adverse prognostic factors. It is a marker for increased loco-regional recurrence associated with doubling of breast, chest wall, and supraclavicular recurrence rates. The risk of axillary relapse in patients who have had an adequate level I and II axillary dissection but demonstrate extracapsular extension is low (7%). We recommend breast/chest wall and supraclavicular radiation for all patients with pathologic evidence of such extranodal extension who have had a level I and II axillary dissection regardless of the number of positive axillary nodes. Axillary irradiation should be considered for patients who have had only an axillary sampling or level I axillary dissection. abstract_id: PUBMED:7673012 Microscopic extracapsular extension in the axilla: is this an indication for axillary radiotherapy? Purpose: Although the axilla is often treated with radiotherapy (RT) postoperatively when microscopic extracapsular extension (ECE) of lymph nodal metastases is present, little data are available to assess axillary failure in the absence of such treatment. As it has been the practice at this institution to withhold axillary irradiation in the presence of microscopic extracapsular spread, we retrospectively analyzed our results for axillary recurrence, disease-free survival (DFS), and overall survival (OS). Methods And Materials: Clinical records were reviewed of 82 women with Stage II node positive breast cancer treated with lumpectomy, axillary dissection, and RT in addition to systemic chemo/hormonal therapy. Axillary surgery consisted of a level I, II, +/- III dissection, with a median of 16.5 nodes removed. Tangential radiotherapy fields were used to treat the breast. All patients were also treated with an abbreviated supraclavicular field with the lateral border medial to the humeral head. Pathological sections were available for review in 72 of the 82 women. Results: Twenty-seven of 72 (37.5%) had evidence of ECE; 45 of 72 (62.5%) had metastatic carcinoma confined within the nodal capsule. Clinical characteristics were comparable between the patients with and without ECE with the exception of (a) pathologic subtype, with a greater percentage of infiltrating ductal tumors associated with ECE (p = 0.044), and (b) number of positive lymph nodes, with 93% of patients without ECE having one to three positive nodes vs. only 56% among patients with ECE (p &lt; 0.001). With a median follow-up of 40 months, 1 of 27 patients (4%) with ECE experienced an axillary failure as a component of first failure compared to 0 of 45 patients without ECE (p = 0.4). There were no isolated axillary failures. Five-year disease-free survival (72% without ECE vs. 57% with ECE, p = 0.12) and overall survival (83% vs. 53%, respectively, p = 0.068) suggested a less favorable outcome for patients with ECE. Conclusions: Microscopic ECE appears to be associated with increased axillary involvement and decreased survival rather than subsequent axillary failure. Our data suggest that radiotherapy to a dissected axilla may be omitted for the sole indication of microscopic extracapsular disease. abstract_id: PUBMED:34106427 Loco-regional therapy and the risk of breast cancer-related lymphedema: a systematic review and meta-analysis. Background: This meta-analysis was designed to assess the association between two loco-regional therapies, regional nodal irradiation (RNI) and axillary lymph node dissection (ALND), and breast cancer-related lymphoedema (BCRL). Methods: We searched PubMed, Science Direct, Embase, and BMJ databases for clinical studies published between January 1, 2010 and January 1, 2020, which assessed risk factors and incidence/prevalence of BCRL. Two investigators independently selected articles to extract relative data and calculate corresponding exact binomial 95% confidence intervals (CIs). In total, 93 articles were reviewed, from which 19 studies were selected. The extracted data were pooled using a random-effects mixed model. Results: The incidence of lymphedema in the selected studies ranged from 3% to 36.7%, with a pooled incidence of 14.29% (95% CI 13.79-14.79). The summary odds ratio/risk ratio (OR/RR) of ALND vs. no-ALND was 3.67 (95% CI 2.25-5.98) with a heterogeneity (I2) of 81% (P &lt; 0.00001). After excluding the studies with an abnormally high risk of lymphedema from self-reporting, the summary hazard ratio (HR) was 2.99 (95% CI 2.44-3.66) with no heterogeneity (I2 = 0%, P = 0.83). The summary OR/RR of patients with vs. without radiotherapy (RT) was 1.82 (95% CI 0.92-3.59), but the RR of RT to breast/chest vs. both axillary and supraclavicular areas was 2.66 (95% CI 0.73-9.70). Conclusion: Regional nodal irradiation has a significantly higher risk for developing lymphedema than irradiation of the breast/chest wall. Axillary dissection and axillary RT have a similar risk for early-onset of breast cancer-related lymphoedema, although the risk trends higher for axillary dissection. abstract_id: PUBMED:30514362 Tailored axillary surgery with or without axillary lymph node dissection followed by radiotherapy in patients with clinically node-positive breast cancer (TAXIS): study protocol for a multicenter, randomized phase-III trial. Background: Complete lymph node removal through conventional axillary dissection (ALND) has been standard treatment for breast cancer patients for almost a century. In the 1990s, however, and in parallel with the advent of the sentinel lymph node (SLN) procedure, ALND came under increasing scrutiny due to its association with significant patient morbidity. Several studies have since provided evidence to suggest omission of ALND, often in favor of axillary radiation, in selected clinically node-negative, SLN-positive patients, thus supporting the current trend in clinical practice. Clinically node-positive patients, by contrast, continue to undergo ALND in many cases, if only for the lack of studies re-assessing the indication for ALND in these patients. Hence, there is a need for a clinical trial to evaluate the optimal treatment for clinically node-positive breast cancer patients in terms of surgery and radiotherapy. The TAXIS trial is designed to fill this gap by examining in particular the value of tailored axillary surgery (TAS), a new technique for selectively removing positive lymph nodes. Methods: In this international, multicenter, phase-III, non-inferiority, randomized controlled trial (RCT), including 34 study sites from four different countries, we plan to randomize 1500 patients to either receive TAS followed by ALND and regional nodal irradiation excluding the dissected axilla, or receive TAS followed by regional nodal irradiation including the full axilla. All patients undergo adjuvant whole-breast irradiation after breast-conserving surgery and chest-wall irradiation after mastectomy. The main objective of the trial is to test the hypothesis that treatment with TAS and axillary radiotherapy is non-inferior to ALND in terms of disease-free survival of clinically node-positive breast cancer patients in the era of effective systemic therapy and extended regional nodal irradiation. The trial was activated on 31 July 2018 and the first patient was randomized on 7 August 2018. Discussion: Designed to test the hypothesis that TAS is non-inferior to ALND in terms of curing patients and preventing recurrences, yet is significantly superior in reducing patient morbidity, this trial may establish a new worldwide treatment standard in breast cancer surgery. If found to be non-inferior to standard treatment, TAS may significantly contribute to reduce morbidity in breast cancer patients by avoiding surgical overtreatment. Trial Registration: ClinicalTrials.gov, ID: NCT03513614. Registered on 1 May 2018. www.kofam.ch , ID: NCT03513614 . Registered on 17 June 2018. EudraCT No.: 2018-000372-14. abstract_id: PUBMED:16170484 Sites of failure in breast cancer patients with extracapsular invasion of axillary lymph node metastases. No need for axillary irradiation?! Background And Purpose: Extracapsular spread (ECS) is frequent, but the specific sites of relapse are seldom given in the literature. In this study it was evaluated, if ECS might be an indicator for axillary irradiation. Patients And Methods: After a retrospective review of pathology reports, the information about ECS was available in 254 lymph node-positive patients: ECS was absent in 34% (ECS-negative; n = 87) and present in 66% (ECS-positive; n = 167). All patients were irradiated locally, 78 patients got periclavicular and 74 axillary irradiation (median total dose: 50.4 Gy). 240/254 patients (94.5%) received systemic treatment/s. Mean follow-up was 46 months. Results: The regional relapse rate was 4.6% without ECS versus 9.6% with ECS. The 5-year axillary relapse-free survival was 100% in ECS-negative and 90% in ECS-positive patients (p = 0.01), whereas corresponding values for periclavicular relapse-free survival (with ECS: 91% +/- 4%; without ECS: 94% +/- 3%; p = 0.77) and local relapse-free survival (with ECS: 86% +/- 4%; without ECS: 91% +/- 3%; p = 0.69) were not significantly different. chi(2)-tests revealed a high correlation of ECS with T-stage, number of positive lymph nodes and progesterone receptor status, comparisons with estrogen receptor, grade, or age were not significant. In multivariate analysis number of positive lymph nodes was solely significant for regional failure. Dividing the patients into those with one to three and those with four or more positive lymph nodes, ECS lost its significance for axillary failure. Conclusion: ECS was accompanied by an enhanced axillary failure rate in univariate analysis, which was no longer true after adjusting for the number of positive lymph nodes. abstract_id: PUBMED:25059790 Prognostic value of extracapsular invasion of axillary lymph nodes combined with peritumoral vascular invasion in patients with breast cancer. Background: Extracapsular invasion (ECI) of metastatic axillary lymph nodes has been associated with aggressive nodal disease but its prognostic role in breast cancer is unclear. The present study evaluated nodal ECI as a predictor of breast cancer recurrence. Methods: We evaluated 154 women with histologically proven node-positive breast cancer who were diagnosed with invasive ductal carcinoma, and investigated the relationships between ECI and recurrences and other clinicopathological factors, particularly vascular invasion and the number of lymph node metastases. Results: The presence of ECI at positive nodes was significantly associated with the number of positive nodes, and with disease recurrence and survival in univariate (but not multivariate) analysis. Interestingly, all ECI(+) patients with distant metastases in our series had peritumoral vascular invasion (PVI), which may have reflected systemic disease; ECI with PVI of the primary tumor strongly predicted recurrent disease and shorter survival. Conclusion: ECI of axillary metastases combined with PVI indicates high tumor aggressiveness. Patients with ECI and PVI may be considered for stronger adjuvant therapies because of their high risk for distant recurrences. abstract_id: PUBMED:18385202 Extracapsular tumor spread and the risk of local, axillary and supraclavicular recurrence in node-positive, premenopausal patients with breast cancer. Background: Extracapsular tumor spread (ECS) has been identified as a possible risk factor for breast cancer recurrence, but controversy exists regarding its role in decision making for regional radiotherapy. This study evaluates ECS as a predictor of local, axillary, and supraclavicular recurrence. Patients And Methods: International Breast Cancer Study Group Trial VI accrued 1475 eligible pre- and perimenopausal women with node-positive breast cancer who were randomly assigned to receive three to nine courses of classical combination chemotherapy with cyclophosphamide, methotrexate, and fluorouracil. ECS status was determined retrospectively in 933 patients based on review of pathology reports. Cumulative incidence and hazard ratios (HRs) were estimated using methods for competing risks analysis. Adjustment factors included treatment group and baseline patient and tumor characteristics. The median follow-up was 14 years. Results: In univariable analysis, ECS was significantly associated with supraclavicular recurrence (HR = 1.96; 95% confidence interval 1.23-3.13; P = 0.005). HRs for local and axillary recurrence were 1.38 (P = 0.06) and 1.81 (P = 0.11), respectively. Following adjustment for number of lymph node metastases and other baseline prognostic factors, ECS was not significantly associated with any of the three recurrence types studied. Conclusions: Our results indicate that the decision for additional regional radiotherapy should not be based solely on the presence of ECS. abstract_id: PUBMED:12428566 Influence of prognostic factors on the risk of recurrence in breast cancer patients after radical treatment The analysis includes 730 consecutive breast cancer patients operated radically. The impact on local and distant recurrences as well as the probability of 5-year disease free survival according to age, hormonal status, primary tumor and axillary nodes stage, number of involved nodes, presence of extracapsular invasion and histopathological grade was assessed. Results of this analysis showed the stage of axillary lymph nodes as the most important prognostic factor for the estimation of the risk of failure. It was also revealed that well-known, classic factors are often insufficient to determine appropriate prognosis. Answer: The addition of regional radiotherapy (RT) can indeed counterbalance important risk factors in breast cancer patients with extracapsular invasion of axillary lymph node metastases. A study by Truong et al. (PUBMED:14566473) found that despite the overrepresentation of patients with adverse prognostic factors such as lymphatic vessel invasion (LVI), T3/T4 tumors, and four or more positive lymph nodes in the locoregional RT group, there was no difference in disease-free survival (DFS) and overall survival (OS) compared to the group that received local RT only. This suggests that regional RT was able to counterbalance the increased risk associated with these factors. Moreover, there was a trend toward better 3-year DFS in the subgroup of patients with four or more positive lymph nodes who received locoregional RT. Additionally, the study by Truong et al. (PUBMED:14566473) indicated that in a Cox regression model, higher T-stage, four or more positive lymph nodes, and LVI remained significant for DFS and distant metastasis-free survival (DMFS), but the absence of estrogen receptors and the omission of regional RT were also significant. This further supports the idea that regional RT can play a role in improving outcomes for patients with poor prognostic factors. In summary, the evidence suggests that regional RT might be beneficial in selected subgroups of patients with extranodal invasion and other poor prognostic factors, potentially improving their survival outcomes despite the presence of these high-risk features.
Instruction: Do sleep deprivation and time of day interact with mild obstructive sleep apnea to worsen performance and neurobehavioral function? Abstracts: abstract_id: PUBMED:17557439 Do sleep deprivation and time of day interact with mild obstructive sleep apnea to worsen performance and neurobehavioral function? Study Objectives: Sleep deprivation, time of day (circadian influences), and obstructive sleep apnea (OSA) all reduce performance and neurobehavioral function. We assessed the interactive effect of sleep deprivation and time of day on performance and neurobehavioral function in subjects with and without mild OSA. Methods: This was a cross-over study in which 13 subjects with mild OSA and 16 subjects without OSA had performance and neurobehavioral testing after a normal night's sleep and after a night of supervised sleep deprivation. All subjects were studied in the sleep laboratory of a university teaching hospital. Subjects were administered questionnaires to collect demographic, physical, and medical information; completed actigraphy and sleep diaries to estimate prior sleep debt before testing periods; and were tested on the Neurobehavioral Assessment Battery, a personal computer-based driving simulator (AusEd), and the Oxford Sleep Resistance Test to assess performance and neurobehavioral function. Results: Sleep deprivation resulted in poorer driving simulator and neurobehavioral performance for most outcome measures. The worst daytime performance was often seen at 3:00 PM. Subjects with mild OSA were less aware of their sleepiness due to sleep deprivation and, in 1 reaction time task, showed greater impairment than did subjects without OSA at certain times of the day after sleep deprivation. Conclusions: The results suggest that subjects with mild OSA are not primarily different than subjects without OSA in their response to sleep deprivation or time-of-day influences. Consistent with previous literature, there were clear effects of sleep deprivation and time of day in all subjects. The finding that perception of daytime sleepiness after sleep deprivation was blunted in subjects with OSA compared with subjects without OSA, despite similar performance decrements, warrants further study. abstract_id: PUBMED:14746387 Does sleep deprivation worsen mild obstructive sleep apnea? Study Objectives: Sleep deprivation is believed to worsen obstructive sleep apnea (OSA). We assessed the effect of acute sleep deprivation on polysomnography in a cohort of subjects with mild OSA and a cohort of subjects without OSA. Design: Crossover study in which subjects initially had polysomnography after a normal night's sleep or after 36 hours of sleep deprivation, followed by a 2- to 4-week interval, after which subjects were restudied under the alternate testing condition. Setting And Participants: 13 subjects with mild OSA and 16 subjects without OSA were studied in a university teaching hospital sleep laboratory. Interventions: 36 hours of supervised sleep deprivation. Measurements: Subjects' age, body mass index, neck circumference and Epworth Sleepiness Scale scores were measured; actigraphy and sleep diaries were used to estimate prior sleep debt before each sleep study. Results: Sleep deprivation was found to significantly increase total sleep time, sleep efficiency, and rapid eye movement and slow-wave sleep time. Subjects with OSA showed a lower minimum oxygen saturation after sleep deprivation. However, subjects did not show a significantly different respiratory disturbance index, arousal index, or length of the longest apnea after sleep deprivation. Conclusions: Acute sleep deprivation did not worsen most OSA parameters as measured by polysomnography. A lower minimum oxygen saturation in mild OSA subjects after sleep deprivation may be important in patients with significant cardiorespiratory disease. More research is needed to assess whether daytime performance and function (eg, driving, sleepiness) is more greatly impaired in OSA subjects who are sleep deprived, compared to normal subjects who are sleep deprived. abstract_id: PUBMED:17507290 Does age worsen EEG slowing and attention deficits in obstructive sleep apnea syndrome? Objective: The aim of this study was to determine whether EEG slowing is more pronounced in older than younger OSAS patients and to verify whether this cortical slowing is correlated to daytime performance, respiratory perturbation and sleep fragmentation. Methods: Twelve young OSAS patients (mean age 38.2+/-2.0 y) and 13 older OSAS patients (mean age 62.2+/-1.9 y) along with 13 young controls (mean age 35.8+/-2.0 y) and 14 older controls (mean age 60.2+/-2.0 y) underwent a polysomnographic evaluation followed by a waking EEG recording. As a global index of cortical slowing, a ratio of slow-to-fast frequencies was calculated in all cortical regions. Daytime performance was assessed using the four choice reaction time test. Results: Differences in waking EEG and in daytime performance were analyzed by ANOVAs with Group and Age as factors. Waking EEG did not yield a Group by Age interaction. OSAS patients had higher ratios across all regions than controls. Similarly, daytime performance revealed no Group by Age interaction. However, OSAS patients showed more lapses than controls and older subjects were slower than younger subjects. Conclusions: Our results indicate that age does not interact with OSAS to worsen the severity of cortical slowing, but age can add to the OSAS effect to worsen daytime performance deficits in OSAS patients. Significance: The daytime performance deficits observed particularly in elderly OSAS patients warrant a careful clinical assessment of these patients to prevent accidents and injuries. abstract_id: PUBMED:12530988 Outcome measurement in sleep medicine practice and research. Part 2: assessment of neurobehavioral performance and mood. Neurobehavioral performance and alterations in mood consequent to sleep disorders and their treatment has recently been the focus of clinical investigations. Primarily in patients with obstructive sleep apnea (OSA), there has been increased interest in the effects of sleep fragmentation and hypoxemia on the brain's ability to process information and to alter affect. The purpose of the second part of this two-part series is to describe measures that could be applied to document the impact of sleep disorders on neurobehavioral performance and mood, discuss factors affecting the selection of measures for research and practice, and to describe evidence generated by the use of these instruments in research. The neurobehavioral deficits that accompany sleep deprivation can be categorised as decrements in cognitive throughput, working memory and sustained attention. Usually evaluated using tasks of short duration, impairments associated with OSA have included impaired information processing and decline in the total number of completed and/or correct responses per unit time. Using assessments of working memory, including short recall maneuvers involving words or paragraphs, investigators have documented in sleep apnea patients the inability to consolidate and recall material producing deficits in the recollection and retention of new information. Evaluations of sustained attention appraise reaction time, the ability to remain on task, and the number of errors of omission and false responses. Evidence suggests that OSA patients display slowing of response time as well as increased errors, lapses and number of false responses. Similar deficits have been documented with sustained and divided attention tasks that present the respondent with challenges in tracking and reaction response, tasks required for driving. Although untreated sleep apnea patients were more impaired than normal controls, their performance on such tasks was greatly enhanced following CPAP treatment. In addition to substantial clinically meaningful improvements in many areas of neurobehavioral performance, changes in mood have also accompanied treatment for sleep apnea. 2001 Harcourt Publishers Ltd abstract_id: PUBMED:24932145 Individual variability and predictors of driving simulator impairment in patients with obstructive sleep apnea. Study Objectives: Obstructive sleep apnea (OSA) is associated with driving impairment and road crashes. However, daytime function varies widely between patients presenting a clinical challenge when assessing crash risk. This study aimed to determine the proportion of patients showing "normal" versus "abnormal" driving simulator performance and examine whether anthropometric, clinical, and neurobehavioral measures predict abnormal driving. Methods: Thirty-eight OSA patients performed a 90-min simulated driving task under 3 conditions: normal sleep, restricted sleep (4 h in bed), and normal sleep + alcohol (BAC∼0.05 g/dL). Patients were classified as "resilient" drivers if, under all 3 experimental conditions their mean steering deviation fell within 2 standard deviations of the mean steering deviation of 20 controls driving under baseline normal sleep conditions, or a "vulnerable" driver if mean steering deviation was outside this range in at least one experimental condition. Potentially predictive baseline anthropometric, clinical, neurocognitive, and cortical activation measures were examined. Results: Of the 38 OSA patients examined, 23 (61%) and 15 (39%) were classified as resilient and vulnerable drivers, respectively. There were no differences in baseline measures between the groups, although the proportion of females was greater and self-reported weekly driving exposure was less among vulnerable drivers (p &lt; 0.05). On univariate analysis gender, weekly driving hours, and auditory event related potential P2 amplitude were weakly associated with group status. Multivariate analysis showed weekly driving hours (OR 0.69, 95%CI, 0.51-0.94, p = 0.02) and P2 amplitude (OR 1.34, 95%CI 1.02-1.76, p = 0.035) independently predicted vulnerable drivers. Conclusions: Most OSA patients demonstrated normal simulated driving performance despite exposure to further sleep loss or alcohol. Most baseline measures did not differentiate between resilient and vulnerable drivers, although prior driving experience and cortical function were predictive. Novel measures to assist identification of OSA patients at risk of driving impairment and possibly accidents are needed. Trial Registration: Data presented in this manuscript was collected as part of a clinical trial "Experimental Investigations of Driving Impairment in Obstructive Sleep Apnea." Trial ID: ACTRN12610000009011, URL: http://www.anzctr.org.au/trial_view.aspx?ID=334979. abstract_id: PUBMED:9085510 Neurobehavioral consequences of arousals. The neurobehavioral deficits of obstructive sleep apnea syndrome (OSAS) are often attributed to the rate of respiratory disturbance or rate of arousals during sleep. However, sleep disordered breathing is also associated with other changes in sleep infrastructure that may account for cumulative waking deficits. This was illustrated in polysomnographic data from 1,521 patients with OSAS where increasing arousal indices were associated with increased duration of stage 1 sleep and concomitant reduction in total sleep time. Similar results have been found in paradigms in which sleep was experimentally fragmented in healthy individuals. It appears that chronic fragmentation of sleep, whether by apneas or acoustic stimuli, leads to cumulative homeostatic pressure for sleep, which may explain a number of phenomenon characteristic of both untreated OSAS patients and experimentally fragmented sleepers: (1) increased arousal threshold, (2) rapid return to sleep after arousal, (3) fewer awakenings over time, (4) increased sleep inertia on awakenings, (5) increased amnesia for arousals, and (6) daytime sleepiness. Elevated homeostatic drive for sleep appears to be a function of both the frequency of arousals within a night and the chronicity of sleep fragmentation across nights, neither of which have been adequately modeled in experimental studies of healthy subjects. abstract_id: PUBMED:26840327 Does Suspected Sleep Disordered Breathing Impact on the Sleep and Performance of Firefighting Volunteers during a Simulated Fire Ground Campaign? Adequate sleep is fundamental to workplace performance. For volunteer firefighters who work in safety critical roles, poor performance at work can be life threatening. Extended shifts and sleeping conditions negatively impact sleep during multi-day fire suppression campaigns. Having sleep disordered breathing (SDB) could contribute further to sleep deficits. Our aim was to investigate whether those with suspected SDB slept and performed more poorly during a fire ground simulation involving sleep restriction. Participants, n = 20 participated in a 3-day-4-night fire ground simulation. Based on oximetry desaturation index data collected during their participation, participants were retrospectively allocated to either a SDB (n = 8) or a non-SDB group (n = 12). The simulation began with an 8 h Baseline sleep (BL) followed by two nights of restricted (4 h) sleep and an 8 h recovery sleep (R). All sleeps were recorded using a standard electroencephalography (EEG) montage as well as oxygen saturation. During the day, participants completed neurobehavioral (response time, lapses and subjective fatigue) tasks. Mixed effects ANOVA were used to compare differences in sleep and wake variables. Analyses revealed a main effect of group for Total sleep (TST), REM , wake after sleep onset (WASO) and Arousals/h with the SDB group obtaining less TST and REM and greater WASO and Arousals/h. The group × night interaction was significant for N3 with the SDB group obtaining 42 min less during BL. There was a significant main effect of day for RRT, lapses and subjective fatigue and a significant day × group interaction for RRT. Overall, the SDB group slept less, experienced more disturbed sleep and had poorer response time performance, which was exacerbated by the second night of sleep restriction. This could present a safety concern, particularly during longer campaigns and is worthy of further investigation. In addition, we would recommend promotion of awareness of SDB, its symptoms and potential impact among volunteers and relevant agencies. abstract_id: PUBMED:32005352 Effect of Obstructive Sleep Apnea on Neurocognitive Performance. This article reviews the effects of obstructive sleep apnea on neurocognitive performance, proposed mechanisms of cognitive impairment, and the effects of continuous positive airway pressure on performance. Obstructive sleep apnea can affect several domains of neurocognitive performance to include attention and vigilance, memory and learning, psychomotor function, emotional regulation, and executive function. Proposed mechanisms include intermittent hypoxemia, sleep deprivation and fragmentation, hypercapnia, and disruption of the hypothalamic-pituitary-adrenal-axis. Continuous positive airway pressure can improve cognitive defects associated with obstructive sleep apnea. More data are needed to determine whether other therapies improve cognitive function. abstract_id: PUBMED:15201136 Efficacy of positive airway pressure and oral appliance in mild to moderate obstructive sleep apnea. The efficacy of currently recommended treatments is uncertain in patients with mild to moderate obstructive sleep apnea (apnea-hypopnea index [AHI], 5-30). A group of 114 sleep clinic patients with an AHI of 5-30 have participated in a randomized controlled crossover trial of 3 months of treatment with each of nasal continuous positive airway pressure (CPAP), a mandibular advancement splint, and a placebo tablet. Outcomes were sleep fragmentation and hypoxemia, daytime sleepiness, quality of life, neurobehavioral function, and blood pressure. Both active treatments improved sleep outcomes, but positive airway pressure had a greater effect. The quality of life, symptoms, and subjective but not objective sleepiness improved to a similar degree with both treatments; however, many of the improvements seen in neuropsychologic function and mood were not better than the placebo effect. Some aspects of nocturnal blood pressure were improved with the splint but not with CPAP. This study has shown that although both CPAP and mandibular advancement splint effectively treated sleep-disordered breathing and sleepiness, the expected response in neurobehavioral function was incomplete. This may be due to the splint having a lesser therapeutic effect and CPAP being poorly tolerated and therefore used less in this patient group. abstract_id: PUBMED:15231906 Neurobehavioral implications of habitual snoring in children. Objective: Current guidelines for the treatment of children with obstructive sleep apnea (OSA) suggest that primary snoring (PS) in children is benign. However, PS has not been well evaluated, and it is unknown whether PS is associated with serious morbidity. This study investigated whether PS is associated with neurobehavioral deficits in children. Methods: Parents of 5- to 7-year-old snoring children in public schools were surveyed about their child's sleeping habits. Children with a history of snoring and nonsnoring children were invited for overnight polysomnographic assessment and a battery of neurobehavioral tests. Only children who did not have a history of attention-deficit/hyperactivity disorder and were not considered hyperactive by parental report were tested. Results: Children with a history of snoring, an obstructive apnea index of &lt;1/hour of total sleep time (hrTST), an apnea/hypopnea index &lt;5/hrTST, and no gas exchange abnormalities were classified as PS (n = 87). Control subjects were defined as children without a history of snoring, an obstructive apnea index &lt;1/hrTST, an apnea/hypopnea index &lt;5/hrTST, and no gas exchange abnormalities (n = 31). Although means for both groups were in the normal range, the PS children were found to perform worse on measures related to attention, social problems, and anxious/depressive symptoms. In addition, although within the normal range, both overall cognitive abilities and certain language and visuospatial functions were significantly lower for the PS group than for the control subjects. Conclusions: PS seems to be associated with significant neurobehavioral deficits in a subset of children, possibly related to increased susceptibility to sleep fragmentation. Larger studies are urgently required because current guidelines for treatment of snoring in children may require reevaluation. Answer: Yes, sleep deprivation and time of day do interact with mild obstructive sleep apnea (OSA) to worsen performance and neurobehavioral function. A study found that sleep deprivation resulted in poorer driving simulator and neurobehavioral performance for most outcome measures, with the worst daytime performance often seen at 3:00 PM. Subjects with mild OSA were less aware of their sleepiness due to sleep deprivation and, in one reaction time task, showed greater impairment than subjects without OSA at certain times of the day after sleep deprivation (PUBMED:17557439). This suggests that while subjects with mild OSA are not primarily different from those without OSA in their response to sleep deprivation or time-of-day influences, the perception of daytime sleepiness after sleep deprivation was blunted in subjects with OSA compared to those without, despite similar performance decrements (PUBMED:17557439). Additionally, acute sleep deprivation did not worsen most OSA parameters as measured by polysomnography, but a lower minimum oxygen saturation in mild OSA subjects after sleep deprivation may be important in patients with significant cardiorespiratory disease (PUBMED:14746387). It is also noted that age does not interact with OSA to worsen the severity of cortical slowing, but it can add to the OSA effect to worsen daytime performance deficits in OSA patients (PUBMED:17507290). Overall, the interaction of sleep deprivation and time of day with mild OSA can exacerbate performance and neurobehavioral function, and this is an area that warrants further study to understand the implications for individuals with OSA, especially in relation to activities such as driving and other tasks that require sustained attention and alertness.
Instruction: Should Risk Adjustment for Surgical Outcomes Reporting Include Sociodemographic Status? Abstracts: abstract_id: PUBMED:27216572 Should Risk Adjustment for Surgical Outcomes Reporting Include Sociodemographic Status? A Study of Coronary Artery Bypass Grafting in California. Background: Public reporting of surgical outcomes must adjust for patient risk. However, whether patient sociodemographic status (SDS) should be included is debatable. Our objective was to empirically compare risk-adjustment models and hospital ratings with or without SDS factors for patients undergoing coronary artery bypass grafting. Study Design: This is a retrospective analysis of the California Coronary Artery Bypass Grafting Outcomes Reporting Program, 2011-2012. Outcomes included 30-day or in-hospital mortality, perioperative stroke, and 30-day readmission. Sociodemographic status factors included race, language, insurance, ZIP code-based median income, and percent that were a college graduate. The c-statistic and goodness-of-fit were compared between models with and without SDS factors. Differences in hospital performance rating when adjusting for SDS were also compared. Results: None of the SDS factors predicted mortality. Income, education, and language had no impact on any outcomes. Insurance predicted stroke (MediCal vs private insurance, odds ratio [OR] = 1.91; 95% CI, 1.11-3.31; p = 0.020) and readmissions (Medicare vs private insurance, OR = 1.36; 95% CI, 1.16-1.61; p &lt; 0.001; MediCal vs private insurance, OR = 1.56; 95% CI, 1.26-1.94; p &lt; 0.001). Race also predicted stroke (Asian vs white, OR = 2.26; p &lt; 0.001). Adding SDS factors improved the c-statistic in readmission only (0.652 vs 0.645; p = 0.008). Goodness-of-fit worsened when adding SDS factors to mortality models, but was no different in stroke or readmissions. Hospital performance rating only changed in readmissions; of 124 hospitals, only 1 hospital moved from "better" to "average" when adjusting for SDS. Conclusions: Adjusting for insurance improves statistical models when analyzing readmissions after coronary artery bypass grafting, but does not impact hospital performance ratings substantially. Deciding whether SDS should be included in a patient's risk profile depends on valid measurements of SDS and requires a nuanced approach to assessing how these variables improve risk-adjusted models. abstract_id: PUBMED:29945848 Exploring an optimal risk adjustment model for public reporting of cesarean section surgical site infections. Background: Public report of surgical site infections (SSI) rates has been an important component of SSI reduction strategies, and risk adjustment is needed before SSI rates are publicly reported. Improving the risk adjustment model facilitates meaningful comparison in the public reporting of SSIs. This research aimed to explore an optimal risk adjustment model for the public reporting of cesarean section (CS) SSI. Methods: Information on 2506 cases of CS performed at T hospital, a tertiary general hospital located in the W City of H Province in China, from 01 January 2013 to 31 December 2014 was collected. The data were used to construct the multivariate risk adjustment models of CS SSI through logistic and Poisson stepwise regression. The c-index was used to compare the predictive power between the new logistic regression and the National Nosocomial Infections Surveillance (NNIS) risk index model. Pearson goodness-of-fit was determined to compare the goodness-of-fit between the new Poisson regression and the NNIS risk index model. The two new regression models were also compared. Results: The logistic and Poisson regression models included two patient-related risk factors, namely, BMI (OR=1.085, P=0.006; RR=1.081, P=0.006) and ASA score (OR=1.522, P=0.044; RR=1.501, P=0.047). The c-index of the logistic regression model (0.628) was higher than that of the NNIS risk index model (0.600). The goodness-of-fit of the Poisson regression model (0.946) was better than that of the NNIS risk index model (0.851). Conclusions: The logistic and Poisson regression risk models are better than the NNIS risk index model, implying that a multifactorial risk adjustment model is needed for the public reporting of CS SSI. The advantage of logistic regression model is that the predictive power of model can be evaluated by c-index, however, Poisson regression may offer more advantages on model accuracy than logistic regression does when the infection rate decreases. abstract_id: PUBMED:29298402 Public Reporting of Primary Care Clinic Quality: Accounting for Sociodemographic Factors in Risk Adjustment and Performance Comparison. Performance measurement and public reporting are increasingly being used to compare clinic performance. Intended consequences include quality improvement, value-based payment, and consumer choice. Unintended consequences include reducing access for riskier patients and inappropriately labeling some clinics as poor performers, resulting in tampering with stable care processes. Two analytic steps are used to maximize intended and minimize unintended consequences. First, risk adjustment is used to reduce the impact of factors outside providers' control. Second, performance categorization is used to compare clinic performance using risk-adjusted measures. This paper examines the effects of methodological choices, such as risk adjusting for sociodemographic factors in risk adjustment and accounting for patients clustering by clinics in performance categorization, on clinic performance comparison for diabetes care, vascular care, asthma, and colorectal cancer screening. The population includes all patients with commercial and public insurance served by clinics in Minnesota. Although risk adjusting for sociodemographic factors has a significant effect on quality, it does not explain much of the variation in quality. In contrast, taking into account the nesting of patients within clinics in performance categorization has a substantial effect on performance comparison. abstract_id: PUBMED:36934654 Does Work Relative Value Unit Measure Surgical Complexity for Risk Adjustment of Surgical Outcomes? Introduction: The purpose of this study was to determine whether the work relative value unit (workRVU) of a patient's operation can be useful as a measure of surgical complexity for the risk adjustment of surgical outcomes. Methods: We retrospectively analyzed the American College of Surgeon's National Surgical Quality Improvement Program database (2005-2018). We examined the associations of workRVU of the patient's primary operation with preoperative patient characteristics and associations with postoperative complications. We performed forward selection multiple logistic regression analysis to determine the predictive importance of workRVU. We then generated prediction models using patient characteristics with and without workRVU and compared c-indexes to assess workRVU's additive predictive value. Results: 7,507,991 operations were included. Patients who were underweight, functionally dependent, transferred from an acute care hospital, had higher American Society of Anesthesiologists class or who had medical comorbidities had operations with higher workRVU (all P &lt; 0.0001). The subspecialties with the highest workRVU were neurosurgery (mean = 22.2), thoracic surgery (mean = 21.1), and vascular surgery (mean = 18.8) (P &lt; 0.0001). For all postoperative complications, mean workRVU was higher for patients with the complication than those without (all P &lt; 0.0001). For eight of 12 postoperative complications, workRVU entered the logistic regression models as a predictor variable in the 1st to 4th steps. Addition of workRVU as a preoperative predictive variable improved the c-index of the prediction models. Conclusions: WorkRVU was associated with sicker patients and patients experiencing postoperative complications and was an important predictor of postoperative complications. When added to a prediction model including patient characteristics, it only marginally improved prediction. This is possibly because workRVU is associated with patient characteristics. abstract_id: PUBMED:22388108 Reliability adjustment for reporting hospital outcomes with surgery. Background: Reliability adjustment, a novel technique for quantifying and removing statistical "noise" from quality rankings, is becoming more widely used outside surgery. We sought to evaluate its impact on hospital outcomes assessed with the American College of Surgeons' National Surgical Quality Improvement Program (ACS-NSQIP). Methods: We used prospective, clinical data from the ACS-NSQIP to identify all patients undergoing colon resection in 2007 (n = 181 hospitals, n = 18,455 patients). We first used standard NSQIP techniques to generate risk-adjusted mortality and morbidity rates for each hospital. Using hierarchical logistic regression models, we then adjusted these for reliability using empirical Bayes techniques. To evaluate the impact of reliability adjustment, we first estimated the extent to which hospital-level variation was reduced. We then compared hospital mortality and morbidity rankings and outlier status before and after reliability adjustment. Results: Reliability adjustment greatly diminished apparent variation in hospital outcomes. For risk-adjusted mortality, there was a 6-fold difference before (1.4%-7.8%) and less than a 2-fold difference (3.2% to 5.7%) after reliability adjustment. For risk-adjusted morbidity, there was a 2-fold difference (18.0%-38.2%) before and a 1.5-fold difference (20.8%-34.8%) after reliability adjustment. Reliability adjustment had a large impact on hospital mortality and morbidity rankings. For example, with rankings based on mortality, 44% (16 hospitals) of the "best" hospitals (top 20%) were reclassified after reliability adjustment. Similarly, 22% (8 hospitals) of the "worst" hospitals (bottom 20%) were reclassified after reliability adjustment. Conclusions: Reliability adjustment reduces variation due to statistical noise and results in more accurate estimates of risk-adjusted hospital outcomes. Given the risk of misclassifying hospitals and surgeons using standard approaches, this technique should be considered when reporting surgical outcomes. abstract_id: PUBMED:23628227 Optimizing ACS NSQIP modeling for evaluation of surgical quality and risk: patient risk adjustment, procedure mix adjustment, shrinkage adjustment, and surgical focus. The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) collects detailed clinical data from participating hospitals using standardized data definitions, analyzes these data, and provides participating hospitals with reports that permit risk-adjusted comparisons with a surgical quality standard. Since its inception, the ACS NSQIP has worked to refine surgical outcomes measurements and enhance statistical methods to improve the reliability and validity of this hospital profiling. From an original focus on controlling for between-hospital differences in patient risk factors with logistic regression, ACS NSQIP has added a variable to better adjust for the complexity and risk profile of surgical procedures (procedure mix adjustment) and stabilized estimates derived from small samples by using a hierarchical model with shrinkage adjustment. New models have been developed focusing on specific surgical procedures (eg, "Procedure Targeted" models), which provide opportunities to incorporate indication and other procedure-specific variables and outcomes to improve risk adjustment. In addition, comparative benchmark reports given to participating hospitals have been expanded considerably to allow more detailed evaluations of performance. Finally, procedures have been developed to estimate surgical risk for individual patients. This article describes the development of, and justification for, these new statistical methods and reporting strategies in ACS NSQIP. abstract_id: PUBMED:18808342 Complex surgical site infections and the devilish details of risk adjustment: important implications for public reporting. Objective: To validate the National Nosocomial Infection Surveillance (NNIS) risk index as a tool to account for differences in case mix when reporting rates of complex surgical site infection (SSI). Design: Prospective cohort study. Setting: Twenty-four community hospitals in the southeastern United States. Methods: We identified surgical procedures performed between January 1, 2005, and June 30, 2007. The Goodman-Kruskal gamma or G statistic was used to determine the correlation between the NNIS risk index score and the rates of complex SSI (not including superficial incisional SSI). Procedure-specific analyses were performed for SSI after abdominal hysterectomy, cardiothoracic procedures, colon procedures, insertion of a hip prosthesis, insertion of a knee prosthesis, and vascular procedures. Results: A total of 2,257 SSIs were identified during the study period (overall rate, 1.19 SSIs per 100 procedures), of which 1,093 (48.4%) were complex (0.58 complex SSIs per 100 procedures). There were 45 complex SSIs identified following 7,032 abdominal hysterectomies (rate, 0.64 SSIs per 100 procedures); 63 following 5,318 cardiothoracic procedures (1.18 SSIs per 100 procedures); 139 following 5,144 colon procedures (2.70 SSIs per 100 procedures); 63 following 6,639 hip prosthesis insertions (0.94 SSIs per 100 procedures); 73 following 9,658 knee prosthesis insertions (0.76 SSIs per 100 procedures); and 55 following 6,575 vascular procedures (0.84 SSIs per 100 procedures). All 6 procedure-specific rates of complex SSI were significantly correlated with increasing NNIS risk index score (P&lt;.05). Conclusions: Some experts recommend reporting rates of complex SSI to overcome the widely acknowledged detection bias associated with superficial incisional infection. Furthermore, it is necessary to compensate for case-mix differences in patient populations, to ensure that intrahospital comparisons are meaningful. Our results indicate that the NNIS risk index is a reasonable method for the risk stratification of complex SSIs for several commonly performed procedures. abstract_id: PUBMED:26782350 The necessity of sociodemographic status adjustment in hospital value rankings for perforated appendicitis in children. Background: Hospitals are increasingly focused on demonstration of high-value care for common surgical procedures. Although sociodemographic status (SDS) factors have been tied to various surgical outcomes, the impact of SDS factors on hospital value rankings has not been well explored. Our objective was to examine effects of SDS factors on high-value surgical care at the patient level, and to illustrate the importance of SDS adjustment when evaluating hospital-level performance. Methods: Perforated appendicitis hospitalizations were identified from the 2012 Kids' Inpatient Database. The primary outcome of interest was high-value care as defined by evaluation of duration of stay and cost. SDS factors included race, health insurance type, median household income, and patient location. The impact of SDS on high-value care was estimated using regression models after accounting for hospital-level variation. Risk-adjusted value rankings were compared before and after adjustment for SDS. Results: From 9,986 hospitalizations, 998 high-value encounters were identified. African Americans were less likely to experience high-value care compared with white patients after adjusting for all SDS variables. Although private insurance and living in nonmetro counties were associated independently with high-value care, the effects were attenuated in the fully adjusted models. For the 136 hospitals ranked according to risk-adjusted value status, 59 hospitals' rankings improved after adjustment and 53 hospitals' rankings declined. Conclusion: After adjustment for patient and hospital factors, SDS has a small but significant impact on risk-adjusted hospital performance ranking for pediatric appendicitis. Adjustment for SDS should be considered in future comparative performance assessment. abstract_id: PUBMED:17645157 Risk adjustment and public reporting on home health care. Risk adjustment is a critical tool in public reporting of quality measures. Its aim is to level the playing field so that providers serving different patients can be meaningfully compared. We used a theory and evidence-based approach to develop risk-adjustment models for the 10 publicly reported home health quality measures and compared their performance with current models developed using a data-driven stepwise approach. Overall, the quality ratings for most agencies were similar regardless of approach. Theory and evidence-based models have the potential to simplify risk adjustment, and thereby improve provider and consumer understanding and confidence in public reporting. abstract_id: PUBMED:30741732 Socioeconomic "Distressed Communities Index" Improves Surgical Risk-adjustment. Objective: We hypothesize the Distressed Communities Index (DCI), a composite socioeconomic ranking by ZIP code, will predict risk-adjusted outcomes after surgery. Summary Of Background Data: Socioeconomic status affects surgical outcomes; however, the American College of Surgeons National Surgery Quality Improvement Program (ACS NSQIP) database does not account for these factors. Methods: All ACS NSQIP patients (17,228) undergoing surgery (2005 to 2015) at a large academic institution were paired with the DCI, which accounts for unemployment, education level, poverty rate, median income, business growth, and housing vacancies. Developed by the Economic Innovation Group, DCI scores range from 0 (no distress) to 100 (severe distress). Multivariable regressions were used to evaluate ACS NSQIP predicted risk-adjusted effect of DCI on outcomes and inflation-adjusted hospital cost. Results: A total of 4522 (26.2%) patients came from severely distressed communities (top quartile). These patients had higher rates of medical comorbidities, transfer from outside hospital, emergency status, and higher ACS NSQIP predicted risk scores (all P &lt; 0.05). In addition, these patients had greater resource utilization, increased postoperative complications, and higher short- and long-term mortality (all P &lt; 0.05). Risk-adjustment with multivariate regression demonstrated that DCI independently predicts postoperative complications (odds ratio 1.1, P = 0.01) even after accounting for ACS NSQIP predicted risk score. Furthermore, DCI independently predicted inflation-adjusted cost (+$978/quartile, P &lt; 0.0001) after risk adjustment. Conclusions: The DCI, an established metric for socioeconomic distress, improves ACS NSQIP risk-adjustment to predict outcomes and hospital cost. These findings highlight the impact of socioeconomic status on surgical outcomes and should be integrated into ACS NSQIP risk models. Answer: The question of whether risk adjustment for surgical outcomes reporting should include sociodemographic status (SDS) is complex and has been the subject of various studies. Some research suggests that including SDS factors can improve statistical models for certain outcomes. For instance, a study on coronary artery bypass grafting in California found that adjusting for insurance improved statistical models when analyzing readmissions, although it did not substantially impact hospital performance ratings (PUBMED:27216572). Similarly, research on cesarean section surgical site infections (SSI) in China indicated that logistic and Poisson regression models, which included patient-related risk factors like BMI and ASA score, were better than the National Nosocomial Infections Surveillance (NNIS) risk index model, suggesting that a multifactorial risk adjustment model is needed for public reporting of CS SSI (PUBMED:29945848). In the context of primary care clinic quality reporting, risk adjusting for sociodemographic factors had a significant effect on quality but did not explain much of the variation in quality. However, accounting for patients clustering by clinics in performance categorization had a substantial effect on performance comparison (PUBMED:29298402). Another study on the work relative value unit (workRVU) as a measure of surgical complexity found that while workRVU was associated with sicker patients and postoperative complications, it only marginally improved prediction when added to a model including patient characteristics (PUBMED:36934654). Reliability adjustment, which quantifies and removes statistical noise from quality rankings, has been shown to reduce variation due to statistical noise and results in more accurate estimates of risk-adjusted hospital outcomes (PUBMED:22388108). The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) has refined surgical outcomes measurements and enhanced statistical methods to improve the reliability and validity of hospital profiling, including adjustments for patient risk, procedure mix, and stabilization of estimates derived from small samples (PUBMED:23628227). The NNIS risk index has been validated as a reasonable method for risk stratification of complex SSIs for several commonly performed procedures (PUBMED:18808342). In pediatric appendicitis, SDS factors had a small but significant impact on risk-adjusted hospital performance ranking, suggesting that SDS adjustment should be considered in future comparative performance assessment (PUBMED:26782350).
Instruction: Do groups of women aged 50 to 75 match the national average mammography rate? Abstracts: abstract_id: PUBMED:9791636 Do groups of women aged 50 to 75 match the national average mammography rate? Context: As mammography rates increase, an important question is how closely groups of women match or do not match the national-level, average screening percentage. Objective: This study employed a classification-tree methodology to combine individual risk factors from multiple logistic regression, in order to more comprehensively define groups of women less (or more) likely to be screened. Design/setting: This report was a secondary data analysis drawing on data from the 1992 National Health Interview Survey, Cancer Control Supplement (NHIS-CCS). Participants: Analyses examined mammography status of women aged 50-75 (n = 1,727). Main Outcome Measure: The dependent variable was having a screening mammogram in the past 2 years. Multiple logistic regression (SUDAAN) was conducted first to select significant correlates of screening. A classification-tree analysis (CHAID subroutine of SPSS) was then used to combine the significant correlates into exclusive and exhaustive subgroups. Results: A total of 13 subgroups were identified, of which only six approximated the overall population screening rate. The lowest screening occurred in small clusters of women, which, when added together, formed a larger percentage of the population who were not screened within the past 2 years. Conclusions: Efforts to increase mammography may face the challenge of identifying relatively small pockets of women and addressing their individual barriers. Further work should be done to find efficient ways to combine individual risk factors into groups at risk for not being screened. abstract_id: PUBMED:31541655 Risk-Based Screening Mammography for Women Aged &lt;40: Outcomes From the National Mammography Database. Objective: There is insufficient large-scale evidence for screening mammography in women &lt;40 years at elevated risk. This study compares risk-based screening of women aged 30 to 39 with risk factors versus women aged 40 to 49 without risk factors in the National Mammography Database (NMD). Methods: This retrospective, HIPAA-compliant, institutional review board-exempt study analyzed data from 150 NMD mammography facilities in 31 states. Patients were stratified by 5-year age intervals, availability of prior mammograms, and specific risk factors for breast cancer: family history of breast cancer, personal history of breast cancer, and dense breasts. Four screening performance metrics were calculated for each age and risk group: recall rate (RR), cancer detection rate (CDR), and positive predictive values for biopsy recommended (PPV2) and biopsy performed (PPV3). Results: Data from 5,986,131 screening mammograms performed between January 2008 and December 2015 in 2,647,315 women were evaluated. Overall, mean CDR was 3.69 of 1,000 (95% confidence interval: 3.64-3.74), RR was 9.89% (9.87%-9.92%), PPV2 was 20.1% (19.9%-20.4%), and PPV3 was 28.2% (27.0%-28.5%). Women aged 30 to 34 and 35 to 39 had similar CDR, RR, and PPVs, with the presence of the three evaluated risk factors associated with significantly higher CDR. Moreover, compared with a population currently recommended for screening mammography in the United States (aged 40-49 at average risk), incidence screening (at least one prior screening examination) of women aged 30 to 39 with the three evaluated risk factors has similar cancer detection rates and recall rates. Discussion: Women with one or more of these three specific risk factors likely benefit from screening commencing at age 30 instead of age 40. abstract_id: PUBMED:12495883 Mammography use among women aged 35 to 75 years The aim of the study was to evaluate mammography use and indications (diagnosis or screening) among 19,253 women aged 35 to 75 attending a health screening centre. The data were collected by a self-administered questionnaire completed after by an interview with a nurse. Mammography use is frequent in women attending a health screening centre, even before the age of 50: 79.1% had undergone at least one mammography during their life and for 47.3% a screen had been performed within the last three years. Nevertheless mammography was only used by 65.2% of the women aged 50-69 years, target age group of the screening programme. A logistic regression model adjusting for multiple variables was used to examine factors associated with women who underwent a screen within the last three years. Women more likely to be screened were aged 50-69 (vs age 35-49: OR=3.1), used regular gynaecological care (OR=4.5), had a family history of breast cancer (OR=1.45), had in their district a breast cancer screening campaign (OR=1.93). The study indicated that women under 50 years of age with a family history of breast cancer and women aged 70 and over were under-screened. When a campaign for breast cancer screening was organised in the district the screening rates were higher in the target population (71,4% vs 60,1%) although only 64% of the women reported having attended. Current recommendations for breast cancer screening should be reviewed in the light of these results. abstract_id: PUBMED:29264752 Mammography use among women aged 18-39 years in the United States. Purpose: Recommendations for breast cancer screening using mammography target asymptomatic women aged ≥ 40 years who are not at increased risk for breast cancer. Evidence is not available to demonstrate benefits of screening with mammography at younger ages, and little is known about mammography use among younger women. This study described mammography use among women aged 18-39 years. Methods: We analyzed data from the 2011-2015 National Survey of Family Growth, an in-person survey of a nationally representative sample of the U.S. household population. We estimated the prevalence of ever receiving a mammogram and examined reasons for the first mammograms among women aged 18-39 years without personal cancer history (n = 8324). We classified the first mammogram as a screening examination if it was performed either as part of a routine exam or because of family history of cancer. Results: Among women aged 18-39 years, 14.3% (95% CI 13.2-15.4) reported ever having a mammogram. Prevalence of mammography use was highest among women aged 35-39 years (31.0%, 95% CI 27.8-34.5), and was higher among non-Hispanic black women than in other race/ethnicity groups. Women with a family history of breast cancer reported a higher prevalence of mammography use than women without this family history. For both women with and without a family history of breast cancer, about half of all first mammograms were performed for screening reasons. Conclusions: Among U.S. women aged 18-39 years with no personal cancer history, one in seven reported having received a mammogram. Women with no family history of breast cancer were as likely as those with a family history to initiate breast cancer screening with mammography before age 40. Our findings provide evidence that supports further research to examine factors that prompt young women to receive screening mammograms. abstract_id: PUBMED:29346854 High Participation Rate in Mammography Screening: Experience from Croatia. Objective: The aim of the study was to analyse the results of three cycles of mammography screening (MS) in the Croatian National Programme (CNP) for Early Breast Cancer Detection for women aged 50–69 years in the Bjelovar-Bilogora County (BBC) from 2006–2014. Methods: Data on women aged 50–69 screened during a 9-year period were obtained from the Croatian Cancer Registry and Institute of Public Health reports. Participation rate and performance indicators were examined. Results: The total of 57,428 women were invited to mammography screening in BBC during a 9-year period and 31,402 mammograms in total were performed. The response rate of 84% in BBC was consistently higher than the national average of about 60% reported in 2007, 2013 and 2014. Conclusion: The National Programme in BBC has been carried out continuously for nine years with a higher response rate compared with the national average, as a result of additional efforts of the Croatian Institute of Public Health team, as well as good cooperation among all programme stakeholders. It was concluded that to achieve better results in the response of women to screening and consequently reduced mortality from breast cancer is possible through tailored health promotion activities. abstract_id: PUBMED:11232132 Predictors of mammography use among Canadian women aged 50-69: findings from the 1996/97 National Population Health Survey. Background: Screening mammography, although recommended every 2 years for women aged 50-69, is thought to be underused among select groups of Canadian women. Methods: We used data from the 1996/97 National Population Health Survey to describe current patterns in mammography use (including reasons for not having a mammogram within the 2 years before the survey and future screening intentions) in Canada and to determine factors associated with nonparticipation and time-inappropriate use (mammogram 2 or more years before the survey) among women aged 50-69. Results: Among respondents aged 50-69, 79.1% (95% confidence interval [CI] 76.9%-81.2%) reported ever having had a mammogram, and 53.6% (95% CI 51.4%-55.9%) had had a recent (time-appropriate) mammogram (within the 2 years before the survey). Only 0.6% (95% CI 0.3%-0.9%) of recently screened women reported problems of access, and few reported personal or health system barriers as reasons for not obtaining a recent mammogram. Over 50% of the women who had not had a recent mammogram reported that they did not think it was necessary, and only 28.2% (95% CI 23.8%-32.7%) of those who had never had a mammogram planned to have one within the 2 years following the survey. The rate of time-appropriate mammography varied significantly by province, from 41.1% (95% CI 29.3%-52.9%) in Newfoundland to 69.4% (95% CI 61.3%-77.6%) in British Columbia. Significant predictors of never having had a mammogram included higher age, residence in a rural area, Asia as place of birth, no involvement in volunteer groups, no regular physician or recent medical consultations (including recent blood pressure check), current smoking, infrequent physical activity and no hormone replacement therapy. Interpretation: Despite increases in mammography screening rates since the 1994/95 National Population Health Survey, current estimates indicate that almost 50% of women aged 50-69 have not had a time-appropriate mammogram. Our findings confirm continued low mammography participation rates among older women and those in rural areas, select ethnic groups and women with negative health care and lifestyle characteristics. abstract_id: PUBMED:35171027 Factors Associated with Mammography Screening Choices by Women Aged 40-49 at Average Risk. Background: The U.S. Preventive Services Task Force (USPSTF) does not recommend routine mammogram screening for women aged 40-49 years at average risk for breast cancer. We aimed to assess the extent to which women were following guideline recommendations and to examine whether guideline awareness and other individual-level factors were associated with adherence. Materials and Methods: We surveyed a nationally representative panel of 383 U.S. women aged 40-49 years at low risk for hereditary breast cancer in October 2019. Results: Only 29% of women reported not having initiated screening mammography. Most women (80%) were unaware of the USPSTF screening guideline related to age of initiation and frequency of mammography. Being aware of the recommendation to initiate screening at age 50 increased the odds of not initiating screening (odds ratio [OR] = 6.70, p &lt; 0.001), whereas being older than 45 years (OR = 0.22, p &lt; 0.001) and having a primary care doctor decreased the odds of not initiating screening (OR = 0.25, p &lt; 0.001). Conclusions: Mammogram screening in excess of USPSTF recommendations is prevalent among U.S. women aged 40-49 years. Efforts are needed to increase women's awareness of the rationale for guidelines and the opportunities to discuss with providers whether delaying mammograms is appropriate. abstract_id: PUBMED:23414437 Targeting of mammography screening according to life expectancy in women aged 75 and older. Objectives: To examine receipt of mammography screening according to life expectancy in women aged 75 and older. Design: Population-based survey. Setting: United States. Participants: Community dwelling U.S. women aged 75 and older who participated in the 2008 or 2010 National Health Interview Survey. Measurements: Using a previously developed and validated index, women were categorized according to life expectancy (&gt;9, 5-9, &lt;5 years). Receipt of mammography screening in the past 2 years was examined according to life expectancy, adjusting for sociodemographic characteristics, access to care, preventive orientation (e.g., receipt of influenza vaccination), and receipt of a clinician recommendation for screening. Results: Of 2,266 respondents, 27.1% had a life expectancy of greater than 9 years, 53.4% had a life expectancy of 5 to 9 years, and 19.5% had a life expectancy of less than 5 years. Overall, 55.7% reported receiving mammography screening in the past 2 years. Life expectancy was strongly associated with receipt of screening (P &lt; .001), yet 36.1% of women with less than 5 years life expectancy were screened, and 29.2% of women with more than 9 years life expectancy were not screened. A clinician recommendation for screening was the strongest predictor of screening independent of life expectancy. Higher educational attainment, age, receipt of influenza vaccination, and history of benign breast biopsy were also independently associated with being screened. Conclusion: Despite uncertainty of benefit, many women aged 75 and older are screened with mammography. Life expectancy is strongly associated with receipt of screening, which may reflect clinicians and patients appropriately considering life expectancy in screening decisions, but 36% of women with short life expectancies are still screened, suggesting that new interventions are needed to further improve targeting of screening according to life expectancy. Decision aids and guidelines encouraging clinicians to consider patient life expectancy in screening decisions may improve care. abstract_id: PUBMED:38424761 Screening Mammography: There Is Value in Screening Women Aged 75 Years and Older. Objective: Patient screening mammography records performed in women aged 75 years and older were reviewed to evaluate the value of screening in this population, by determining the incidence of cancer diagnosed and associated outcomes. Methods: Data from patients aged 75 years and older who presented for screening mammography and underwent biopsy with resultant malignant pathology were retrospectively collected and analyzed to record patient demographics and outcomes. Results: From 2007-2017, there were 763,256 screening mammography appointments in 130,232 patients, with 3716 patients diagnosed with 4412 screen-detected malignancies (5.8 per 1000 cancers). In women aged 75 years and older, 76,885 (76,885 per 130,232, 10.1%) screening mammograms were performed in 18,497patients, with 643 malignancies diagnosed in 614 women eligible for study inclusion (8.4 per 1000 cancers). Lesions frequently presented as a mass with or without calcifications (472 per 643, 73%). A majority (529 per 643, 82%) was invasive; 79% (507 per 643) stage 0 or 1, and 63% (407 per 643) grade 2 or 3. Lymph node-positive status was confirmed in 7% of patients (46 per 614). Surgical intervention was pursued by 98% of patients (599 per 614). Conclusion: Screening mammograms in women aged 75 years and older comprise a small percentage of the total screening examinations; however, they represented a significant portion of all patients diagnosed with screen-detected cancers, showing a substantial cancer detection rate. Most tumors were low stage, intermediate to high grade, and invasive. The majority had treatment involving surgery, suggesting these women are in good health and want to pursue surgical intervention. Screening mammography should be performed in this age group. abstract_id: PUBMED:15207980 Prevalence and correlates of repeat mammography among women aged 55-79 in the Year 2000 National Health Interview Survey. Background: Utilization of mammography has increased steadily since the early 1990s. It is now important to expand the attention given to obtaining repeat examination. This study examines the prevalence and cross-sectional correlates of repeat mammography, among women aged 55-79, using a 12-month (N = 3,502) and a 24-month interval (N = 3,491). Methods: Data were from the Year 2000 Cancer Control Module of the National Health Interview Survey (NHIS-CCM). The NHIS-CCM asked about the most recent mammogram and the total number of mammograms over the prior 6 years. An algorithm estimated repeat mammography for the two intervals. Results: Prevalence estimates were 49% for the 12-month interval, and 64.1% for the 24-month interval. Correlates of lower likelihood of repeat mammography for both indicators were: no regular source of care, having public or no health insurance, less than a college education, household income less than $45K, not being married, current or never smoking, age 65-79, and lower absolute risk of breast cancer (Gail Model score). Conclusions: A substantial percentage of women do not receive repeat mammography. The correlates of repeat mammography were similar to those often found for ever-had and recent mammography. There is probably some imprecision in the prevalence estimates due to the nature of NHIS-CCM questions. Issues pertinent to the definition of repeat examination are addressed. Answer: Groups of women aged 50 to 75 do not uniformly match the national average mammography rate. A study using data from the 1992 National Health Interview Survey, Cancer Control Supplement (NHIS-CCS) found that only six out of thirteen identified subgroups approximated the overall population screening rate. The lowest screening rates occurred in small clusters of women, which collectively represented a larger percentage of the population not screened within the past 2 years (PUBMED:9791636). This suggests that there is variability in mammography rates among different groups of women within this age range, and efforts to increase mammography may need to address the individual barriers faced by these smaller pockets of women.
Instruction: Does Chile's nutritional situation constitute a double burden? Abstracts: abstract_id: PUBMED:25411304 Does Chile's nutritional situation constitute a double burden? Background: Chile has probably experienced Latin America's fastest nutritional transition, as evidenced by very low rates of stunting, but the country shows a high prevalence of obesity in most population groups. Objective: The aim was to assess the existence of a double burden of nutritional problems in Chile on the basis of available data. Design: Secondary analyses were conducted on data collected by the Health Ministry (height and weight for children aged &lt;6 y and for adults aged ≥65 y), the Education Ministry (height and weight in the first year of primary school and the first year of high school), the 2003 and 2009-2010 National Health Surveys [body mass index (BMI) and anemia prevalence], the 2010-2011 National Food Consumption Survey (ENCA; food consumption, height, and weight), and a Food Insecurity Survey of elderly adults (aged 65-74 y) in Santiago (height, weight, and food insecurity). Results: In 2011 the prevalence of stunting (height-for-age &lt; -2 z scores) was 1.9% for children &lt;6 y old and 3.6% among children in the first year of primary school. This situation was in contrast with a high prevalence of obesity in children (22.1% of children in the first year of primary school; BMI ≥2 z scores) and among adults, especially women, increasing with age (44.8% of women 45-64 y old had a BMI ≥30 kg/m(2)). The prevalence of anemia in women aged 15-64 y was low (5.1%). In the ENCA survey, women showed a high prevalence of below-average intakes of vitamin A, vitamin B-12, vitamin C, calcium, and zinc, irrespective of BMI. Elderly persons who were underweight and those who were obese had a significantly greater perception of food insecurity in relation to those with a normal weight (61%, 50%, and 33%, respectively). Conclusions: The data showed high rates of obesity with very low frequency of stunting and, although more information is needed, the double burden of malnutrition probably does not exist in Chile, unlike in other countries in the region. Among specific groups there are low intakes of critical nutrients and food insecurity. Surveys that include anthropometric and biochemical measurements at the family level are needed to correctly evaluate the double burden of malnutrition in Chile. abstract_id: PUBMED:30799815 The nutrition transition and the double burden of malnutrition. Chronic noncommunicable diseases are increasingly frequent in low- and medium-income countries, but problems of malnutrition, such as growth restriction in children or micronutrient deficiencies in both children and adults, persist in these same countries. This double burden of malnutrition and the emergence of chronic diseases such as type 2 diabetes strain healthcare systems and constitute a sometimes unbearable load for the countries concerned, for the government, but also for the individuals affected and their families. This double burden is often associated with the nutrition transition or the progression away from the local traditional diet towards a Westernized diet frequently high in fat, salt, and sugar, with low nutritional density. This transition is attributed to worldwide changes in dietary systems expressed by an increased availability of foodstuffs marketed across the planet, such as vegetable oils, sugars, and refined flours, but also the multiplication of points of sale of food that has been processed, even ultraprocessed. The efforts to battle this scourge must take into account the complexity of the phenomenon and the many factors associated with it. A systemic approach that considers the global forces governing the food systems must be promoted. Actions concerning nutrition must therefore emphasize simultaneously the problems of undernutrition and of overnutrition. WHO labels these interventions "double duty actions." abstract_id: PUBMED:33829604 Has the double burden of malnutrition reached pupils in rural western Kenya? Background: Undernutrition and overnutrition have been considered separate challenges caused by the distinct factors in distinct settings. The double burden of malnutrition is defined as the coexistence of both undernutrition and overnutrition, which has now spread globally, especially in low- and middle-income countries. In Kenya, the double burden of malnutrition is present in the capital city but the situation in rural areas currently remains unknown. The aim of this study was therefore to analyze nutritional status in rural Kenya. Methods: Data from height and weight measurements conducted by teachers in the third term of school year 2013 were collected from primary schools. Teachers were trained by experts before starting the measurements. The Z-scores of height-for-age, weight-for-age, and body mass index-for-age were calculated using AnthroPlus, and nutritional status was defined according to the World Health Organization guidelines. Results: Data analyzed from 7,447 pupils (3,763 boys and 3,684 girls) showed that 12.4% were stunted and 7.8% were underweight among those ≤120 months old, and 11.7% were thin among those &gt;120 months old. Boys were more likely to be stunted than girls. Boys were more likely to be underweight and girls were more likely to be overweight in the older age group. Conclusion: The double burden of malnutrition was not found, and undernutrition was still common among schoolchildren in the study area. However, it is important to monitor nutritional status due to adverse effects of rapid urbanization. Height and weight measurements in schools may be useful for monitoring the nutritional status of schoolchildren. abstract_id: PUBMED:9239285 Nutritional status of rural school children from Chile's metropolitan region: a comparative study: 1989 The objective of this study was to carry out a comparative study of the nutritional status of a representative sample of 651 rural school children from Chile's Metropolitan Region between two periods: 1986-1987 and 1989. The percentages of weight/age (%W/A), height/age (%H/A) and weight/height (%W/H), as well as Z scores were compared with WHO Tables; head circumference/age (%HC/A), with Tanner Tables and brachial anthropometric measurements with Frisuncho standards. Socioeconomic status (SES) was measured through Graffar's Modified Method. Statistical procedures included analysis of variance, "t" test for comparison of the means and chi-square. In 1989, results showed an improvement, both present nutritional status (%W/H) and SES as compared with 1986-1987 period. %W/A increased and %H/A decreased from 1986-1987 to 1989. %W/H should be a better indicator of nutritional status due to the high incidence of growth failure (47.4%) which was detected in 51.9% and 30.9% in low-low SES and medium SES, respectively (p &lt; 0.001). Obesity was significantly higher in females (23.5%) than in males (13.2%) (p &lt; 0.05) by which this problem must be prevented to avoid negative consequences in present and adult life. %HC/A positively and significantly increased from 1986-1987 to 1989 confirming that educational selectivity apparently relates to %HC/A, and not to %W/A or %H/A. Considering that only 13.0% of school-age children were beneficiaries of the School Feeding Program (PAE), the high prevalence of undernutrition and growth failure in non-beneficiaries and the fact that most part of the sample (86.9%) belong to low-SES, it should be necessary to increase the covering of PAE in the rural area of Chile's Metropolitan Region. abstract_id: PUBMED:36207248 The double burden of overweight or obesity and anemia among women married as children in India: A case of the Simpson's paradox. Background: Child marriage, defined as marriage before age 18 years, could impact women's nutritional status through biological as well as environmental and socioeconomic attributes affecting diet and lifestyle behaviors. This study aims to examine whether women married as children have a differential risk of individual level double burden of overweight/obesity and anemia at adult age compared to women married as adults. Methods: Using nationally representative data from India we estimated multinomial logistic regressions to obtain relative risk ratios (RRR) in favor of mutually exclusive anemia and overweight/ obesity conditions among women aged 20-49 years. We estimated the model for full sample and for sub-samples by household wealth groups. Results: We find that women who were married as children had a lower relative risk (RRR=0.941) of the double burden of anemia and overweight/obesity in the full sample. However, when sociodemographic correlates were accounted for and assessed in sub-groups by wealth groups, they had a higher relative risk (ARRR ranging from 1.079 to 1.204) of the double burden compared to women married as adults. Conclusion: Our results thus portray a classic case of the Simpson's paradox by documenting a reversal of association between child marriage and occurrence of the double burden of malnutrition in the subgroup level than that in the general population. This finding provides a critical policy insight for effective public health interventions to improve women's health and wellbeing, particularly in low resource settings. abstract_id: PUBMED:25411309 Double burden of undernutrition and obesity in Uruguay. Background: Uruguay is at an advanced stage of the epidemiologic transition; like other Latin American countries, it bears a nutritional double burden composed of undernutrition and overweight or obesity. Objectives: The aim was to estimate whether a double burden of nutritional problems exists in Uruguay and to identify if governmental programs and policies for nutrition take this double burden into account. Design: Existing studies were reviewed, and other data were processed specifically for the purpose of this article. Several data sources were used to include a broad, comprehensive population range. Results: The prevalence of stunting in children aged &lt;2 y is 10.9%. There is a high frequency of obesity, which increases with age (9.5% in children &lt;2 y old, 18.8% in those aged 6 y, 20.4% in those aged 11 y, 26.6% in those aged 13-15 y, and 35.3% in adults). In addition, 13.8% of women start pregnancy underweight and the rate of obesity increases during pregnancy from 36.7% at the beginning to 46.5% at the end. Anemia is very frequent in the selected population we examined, as follows: 31% of children &lt;2 y old and 20.9% of pregnant women in their third trimester. An obese mother with a stunted child are present in 6.3% of households; 1.9% of stunted children are obese at age 6 y and 3.1% are obese at the age of 11. Multiple logistic analysis applied to children &lt;2 y showed an association between stunting and obesity (OR: 2.0; 95% CI: 1.2, 3.6). Conclusions: The data suggest that there is a nutritional double burden in Uruguay. This nutritional burden is similar to that of other countries in the region. This important public issue should be tackled, and it should be addressed in early infancy. abstract_id: PUBMED:9830754 Obesity, an unresolved challenge in Chile Socioeconomic and demographic changes in Chile, have modified its epidemiological profile. At the present time, the most important public health problems are non communicable chronic diseases, specially cardiovascular ailments. The prevalence of cardiovascular risk factors has also increased and obesity stands out. This review analyzes the present situation of obesity and other cardiovascular risk factors of nutritional origin, in Chile, and proposes intervention alternatives. At the present time 7% preschool children, 12% of school children, 24% of pregnant women, 10% of male adults and 24% of female adults are obese. This higher prevalence is even more pronounced in women of low socioeconomic level. Risk factors associated to obesity are an inadequate diet, with a higher consumption of fats and refined foods, sedentariness and hyperlipidemia. Primary and secondary care strategies must be devised to face this growing problem. abstract_id: PUBMED:31702104 Nutritional status of Tajik children and women: Transition towards a double burden of malnutrition. The double burden of malnutrition, an emerging concern in developing countries, can exist at various levels: individual, household, and population. Here, we explore the nutritional status of Tajik women (15-49 years) and children (5-59 months) focusing on overweight/obesity along with undernutrition (underweight, stunting, and micronutrient deficiencies). For this, nutritional markers (haemoglobin (Hb), transferrin receptor (TfR), serum ferritin (Sf), retinol binding protein (RBP), vitamin D, serum folate, and urinary iodine), height, and weight were assessed from 2,145 women and 2,149 children. Dietary intake, weaning, and breastfeeding habits were recorded using a 24-hr recall and a questionnaire. Overweight (24.5%) and obesity (13.0%) are increasing among Tajik women compared with previous national surveys (2003 and 2009). Prevalence of iron deficiency and anaemia was 38.0% and 25.8%, respectively; 64.5% of women were iodine deficient, 46.5% vitamin A deficient, and 20.5% had insufficient folate levels. Women in rural areas had significantly lower iron status and body mass index and higher iodine intake compared with urban areas; 20.9% of children were stunted, 2.8% wasted, 6.2% underweight, 52.4% iron deficient, and 25.8% anaemic; all more prominent in rural areas. Dietary diversity was higher among urban women. Intraindividual or household double burden was not seen. In summary, double burden of malnutrition constituted an increase in overweight among women, especially in urban areas, and persisting levels of undernutrition (stunting, iron, and vitamin A deficiency), predominately in rural areas. A holistic, innovative approach is needed to improve infant and young children feeding and advise mothers to maintain an adequate diet. abstract_id: PUBMED:28105078 The current situation for gastric cancer in Chile. Gastric cancer is a neoplasm with a high incidence and mortality rate in Chile where more than 3000 people die every year from this type of cancer. This study shows the clinical and epidemiological considerations of this disease, information about translational research on this pathology in Chile, the contribution of Chilean doctors to the development of gastric cancer management awareness and the general situation of gastric cancer in Chile. abstract_id: PUBMED:31859970 Worsening situation of tuberculosis in Chile Background: Recent data suggest an increase in tuberculosis (TB) incidence in Chile. Aim: To evaluate recent epidemiological trends, geographic extension and potential factors associated with TB reemergence in Chile. Material And Methods: Data analysis from official sources and trend analysis. Results: TB incidence rate increased from 12.3 (2014) to 14.7 (2017) per 100,000 inhabitants. Morbidity rates also increased in nine out of 15 regions. The proportion of TB cases in specific groups has also increased in the last six years: HIV/AIDS (68%), immigrants (118%), drug users/alcoholics (267%) and homeless people (370%). Several indicators of the national TB program performance have deteriorated including TB case detection, HIV co-infection study and contact tracing activities. Overall results indicate a higher than expected case-fatality ratio (&gt; 3%), high rates of loss from follow-up (&gt; 5%), and low percentage of cohort healing rate (&lt; 90%). This decline is associated with a Control Program with scarce human resources whose central budget decreased by 90% from 2008 to 2014. New molecular diagnostic tools and liquid media culture were only recently implemented. Conclusions: TB trends and overall program performance indicators have deteriorated in recent years in Chile and several factors appear to be involved. Multiple strategies will be required to rectify this situation. Answer: Chile's nutritional situation does indeed constitute a double burden, characterized by the coexistence of obesity and undernutrition, including micronutrient deficiencies, within the population. The country has experienced a rapid nutritional transition, which has resulted in very low rates of stunting, indicating a decrease in chronic undernutrition among children. However, there is a high prevalence of obesity across most population groups, with particularly high rates among adults, especially women, and an increasing trend with age (PUBMED:25411304). The double burden of malnutrition is a global issue often associated with the nutrition transition, where traditional diets are replaced by Westernized diets high in fat, salt, and sugar but low in nutritional density. This transition is driven by changes in dietary systems and the increased availability of processed and ultraprocessed foods (PUBMED:30799815). In rural western Kenya, a study found that the double burden of malnutrition had not yet been reached, with undernutrition still being common among schoolchildren. However, the situation in Chile differs, as the country is at a more advanced stage of the epidemiologic transition (PUBMED:33829604). Historical data from rural school children in Chile's Metropolitan Region showed an improvement in nutritional status over time, but also highlighted the presence of obesity, particularly among females, suggesting the need for preventive measures (PUBMED:9239285). In Uruguay, which shares some similarities with Chile in terms of nutritional challenges, there is evidence of a double burden of nutritional problems, with a significant prevalence of stunting in young children and high rates of obesity across different age groups (PUBMED:25411309). While the double burden of malnutrition is a complex issue with multiple contributing factors, it is clear from the available data that Chile is facing this challenge, with high rates of obesity coexisting with pockets of undernutrition and micronutrient deficiencies. This situation calls for comprehensive public health interventions that address both undernutrition and overnutrition to improve the overall nutritional status of the population (PUBMED:25411304).
Instruction: Does offering more support calls to smokers influence quit success? Abstracts: abstract_id: PUBMED:21464680 Does offering more support calls to smokers influence quit success? Background: Previous studies have found that offering additional callback counseling support to smokers calling a telephone quit line increases quit rates. However, what is less certain is the most cost-efficient protocol for offering such a service. Objective: This study compares the efficacy of offering 2 versus 4 counseling callbacks after an initial call from Medicaid/uninsured adult smokers contacting the New York State Smokers' Quit Line (NYSSQL). Outcomes compared are the 7- and 30-day nonsmoker prevalence rates measured at 3-month follow-up and the cost per quit. Design: A 2-group randomized trial was conducted. Setting And Participants: The study population included 1923 adult (18+ years) Medicaid/uninsured current smokers (10+ cigarettes per day) who called the NYSSQL between February and March 2009 seeking help to stop smoking. At the time of the study, the NYSSQL provided Medicaid/uninsured callers with up to 6 weeks of free nicotine medications and up to 4 counseling callbacks. Half the subjects were randomized to standard care with up to 4 counseling callbacks with the remaining subjects offered only 2 counseling callbacks. All participants were sent a minimum of a 2-week supply of nicotine replacement therapy, with some receiving up to 6 weeks. Participants were recontacted 3 months after enrollment in the study to assess smoking status. Main Outcome Measures: Quit rates, total counseling callbacks completed, reductions in cigarette consumption, and cost per quit measures. Results: There was not a significant difference between study groups in the number of callbacks completed. There was also no difference in 7- or 30-day nonsmoker prevalence rates measured after 3 months' follow-up or reported use of the free nicotine replacement therapy between those assigned to either the 2- or 4-callback protocols. The cost per quit was essentially the same in both groups (2 callbacks--$442 per quit vs 4 callbacks--$445 per quit). Conclusion: There was no advantage in terms of quit success or cost to offering up to 4 callbacks instead of 2 callbacks. abstract_id: PUBMED:34423746 Decisional Balance of Smoking and Planning to Quit Smoking: A Moderated Mediation Model among Chinese Male Smokers. Decisional balance in quitting smoking involves positive and negative attitudes toward smoking. This study explored the relationship between the decisional balance of smoking and planning to quit smoking, and examined the mediating role of willingness to quit smoking and the moderating role of social support. A questionnaire survey was conducted among 326 daily male smokers from China in 2016, including the decisional balance of smoking, willingness to quit smoking, social support, planning to quit, and nicotine dependence. The results showed: (1) willingness to quit smoking mediated the relationship between the decisional balance of smoking and planning to quit smoking; (2) the relationship between the decisional balance of smoking and planning to quit smoking was moderated by emotional support rather than instrumental support. Smokers with a higher level of emotional support for quitting smoking were more likely to progress to planning to quit smoking, while this relationship was not found among smokers with a lower level of emotional support. These findings suggested that smokers' willingness to quit smoking would be key to promoting their planning to quit. Meanwhile, the dynamic process of planning to quit smoking was moderated by emotional support from others. abstract_id: PUBMED:24837754 Predictors of successful and unsuccessful quit attempts among smokers motivated to quit. Introduction: Despite their positive motivation to quit, many smokers do not attempt to quit or relapse soon after their quit attempt. This study investigated the predictors of successful and unsuccessful quit attempts among smokers motivated to quit smoking. Methods: We conducted secondary data analysis among respondents motivated to quit within 6 months, randomized to the control group (N=570) of a Web-based smoking cessation intervention study. Using chi-square tests and ANOVA with Tukey post hoc comparisons, we investigated baseline differences by smoking status (successful quitter/relapse/persistent smoker) assessed after 6 weeks (N=214). To identify independent predictors of smoking status, multivariate multinomial logistic regression analyses were conducted. Results: Successful quitters at 6-week follow-up (26%) had reported significantly higher baseline levels of self-efficacy than relapsers (45%) and persistent smokers (29%). Furthermore, both successful quitters and relapsers had reported a significantly higher baseline intention to quit than persistent smokers and successful quitters had reported significantly more preparatory planning at baseline than persistent smokers. Results from regression analyses showed that smokers' baseline intention to quit positively predicted quit attempts reported after 6 weeks, while self-efficacy positively predicted quit attempt success. Conclusions: Different factors appear to play a role in predicting quit attempts and their success. Whereas intention to quit only appeared to play a role in predicting quit attempts, self-efficacy was the main factor predicting quit attempt success. More research is needed to determine the role of preparatory planning and plan enactment and to investigate whether these findings can be replicated on the long term. abstract_id: PUBMED:30016736 The moderating effect of perceived social support on the relation between heaviness of smoking and quit attempts among adult homeless smokers. Background: Over 70% of homeless adults smoke cigarettes. Despite the desire to quit, this group rarely receives the external support to make or maintain a successful quit attempt (SQA; intentional quit attempt lasting &gt;24 h). The Heaviness of Smoking Index (HSI) is a cigarette dependence measure that independently predicts SQAs among domiciled adults. For homeless adults, social support may be a way to buffer the impact of cigarette dependence on SQAs. Methods: The association of the HSI and past-year SQAs, and the potential moderating role of social support, was examined among 445 homeless smokers (Mage = 43.2 + 11.8, 65% male, 57.5% white). Support was measured by the International Support Evaluation List (ISEL-12) and its 3 subscales: tangible, belonging, &amp; appraisal support. Results: The HSI was negatively correlated with SQAs (r=-.283, p &lt; .01) and in a regression model controlling for age, sex, and race/ethnicity, appraisal support significantly moderated this relationship (p &lt; .05). The HSI was significantly related to SQAs across low, moderate, and high levels of appraisal support [mean, +1 SD; low (β=-.657, p &lt; .001), medium (β=-.457, p &lt; .001), and high (β=-.258, p &lt; .05)]. Neither the ISEL-12 total nor the other subscales were moderators. Conclusion: The perceived availability of someone to talk to about one's problems appeared to attenuate the strength of the inverse relationship between the heaviness of smoking and SQAs. Fostering appraisal support for homeless smokers through group treatment may reduce the impact of cigarette dependence on making quit attempts. Social support coupled with the increased availability of empirically-supported cessation aids may improve dismal quit rates among homeless adults. abstract_id: PUBMED:31398854 The Association of Peer Smoking Behavior and Social Support with Quit Success in Employees Who Participated in a Smoking Cessation Intervention at the Workplace. The current study investigated whether quit success among employees who participated in a smoking cessation intervention at the workplace was associated with social support from, and the smoking behavior of, people in their environment. Tobacco-smoking employees (n = 604) from 61 companies participated in a workplace group smoking cessation program. Participants completed questionnaires assessing social support from, and the smoking behavior of, people in their social environment. They were also tested for biochemically validated continuous abstinence directly after finishing the training and after 12 months. The data were analyzed using mixed-effects logistic regression analyses. Social support from colleagues was positively associated with 12-month quit success (odds ratio (OR) = 1.85, 95% confidence interval (CI) = 1.14-3.00, p = 0.013). Support from a partner was positively associated with short-term quit success (OR = 2.01, 95% CI = 1.23-3.30, p = 0.006). Having a higher proportion of smokers in the social environment was negatively associated with long-term abstinence (OR = 0.81, 95% CI = 0.71-0.92, p = 0.002). Compared to having a non-smoking partner, long-term quit success was negatively associated with having no partner (OR = 0.48, 95% CI = 0.26-0.88, p &lt; 0.019), with having a partner who smokes (OR = 0.40, 95% CI = 0.24-0.66, p &lt; 0.001), and with having a partner who used to smoke (OR = 0.47, 95% CI = 0.26-0.86, p = 0.014). In conclusion, people in a smoker's social environment, particularly colleagues, were strongly associated with quit success. The workplace may, therefore, be a favorable setting for smoking cessation interventions. abstract_id: PUBMED:30476896 Perceived addiction to smoking and associations with motivation to stop, quit attempts and quitting success: A prospective study of English smokers. Aims: Some argue that perceived addiction to smoking (PAS) might undermine motivation to stop. We examined the association of PAS with motivation to stop in a population sample and assessed its association with past and future quit attempts and future quit success. Method: 12,700 smokers in England were surveyed between September 2009-March 2012 as part of the Smoking Toolkit Study. 2796 smokers were followed up after 6 months. PAS was assessed at baseline by a single self-report item. The outcome variables were ratings of motivation to stop and reports of past-year quit attempts at baseline, and quit attempts in the past 6 months and smoking status at follow-up. Baseline covariates were sex, age, social grade and daily cigarette consumption. Results: In adjusted analyses, PAS was positively associated with at least some degree of motivation to stop versus no motivation (ORs = 1.97-2.96, all p's &lt; 0.001). PAS was also positively associated with past-year quit attempts (OR = 1.43, 95% CI = 1.32-1.55, p &lt; 0.001), but not with future quit attempts (OR = 1.17, 95% CI = 0.99-1.39, p = 0.064) or quit success (OR = 1.04, 95% CI = 0.73-1.47, p = 0.83). Conclusion: In smokers in England, perceived addiction to smoking is positively associated with motivation to stop and having recently made a quit attempt but is not clearly associated with future quit attempts or success. These findings provide no grounds for believing that increasing smokers' perceived addiction through promotion of stop-smoking support has undermined motivation to stop. abstract_id: PUBMED:32690733 Are population trends in high-risk alcohol consumption in smokers associated with trends in quit attempts and quit success? A time-series analysis. Objectives: Monthly changes in the prevalence of high-risk drinking and smoking in England appear to be positively correlated. This study aimed to assess how far monthly changes in high-risk drinking were specifically associated with attempts to stop smoking and the success of quit attempts. Design: Data were used from the Alcohol and Smoking Toolkit Studies between April 2014 and June 2018. These involve monthly household face-to-face surveys of representative samples of ~1800 adults. Setting: England. Participants: Data were aggregated on 17 560 past-year smokers over the study period. Primary And Secondary Outcome Measures: Autoregressive integrated moving average with exogenous input (ARIMAX) modelling was used to assess the association over time between monthly prevalence of high-risk drinking among smokers and (a) prevalence of attempts to quit smoking and (b) prevalence of successful quit attempts in those attempting to quit. Bayes factors (BF) were calculated to compare the null hypothesis with the hypothesis of an effect sufficiently large (β=0.6) to explain the established association between overall prevalence in smoking and high-risk drinking. Results: No statistically significant associations were found between monthly changes in prevalence of high-risk drinking among smokers and attempts to quit smoking (β=0.156, 95% CI -0.079 to 0.391, p=0.194) or quit success (β=0.066, 95% CI -0.524 to 0.655, p=0.827). BF indicated that the data were insensitive but suggested there is weak evidence for the null hypothesis in the case of both quit attempts (BF=0.80) and quit success (BF=0.53). Conclusions: Monthly changes in prevalence of high-risk alcohol consumption in England are not clearly associated with changes in quit attempt or quit success rates. abstract_id: PUBMED:36129734 Are smoking restrictions at public venues and psychosocial beliefs associated with intentions to quit smoking among smokers in Malaysia? The Malaysian government reinforced smoking restrictions at public venues to protect nonsmokers from secondhand smoke (SHS) exposure. This study examined whether smokers' reports about smoking restrictions and psychosocial beliefs were associated with quit intentions among Malaysian smokers. Data from 1047 cigarette smokers (103 females and 944 males) aged 18 and older from the 2020 International Tobacco Control (ITC) Malaysia Wave 1 Survey were analyzed with bivariate and multivariable logistic regression models. Most Malaysian smokers (85.2%) reported having quit intentions. Smoking was completely restricted in 34.8% of the nighttime venues, 85.3% of air-conditioned (AC) food and beverage (F&amp;B) venues (restaurants, food courts, coffee shops), 87.3% of non-AC F&amp;B, and 69.4% of indoor workplaces. Smokers who visited nighttime entertainment venues where smoking was fully restricted were less likely to have quit intentions. There was no significant association found with quit intentions for smokers who visited AC and non-AC F&amp;B venues and indoor workplaces where smoking was fully restricted. All five psychosocial beliefs assessed, age, and education were positively associated with quit intentions. Malaysian smokers are interested in quitting and psychosocial beliefs were positively associated with quit intentions. There is a need for the Malaysian government to implement and reinforce comprehensive smoking restrictions in all public venues and indoor workplaces to protect nonsmokers from SHS exposure and to encourage smokers to think about quitting, which may influence their quit intentions. abstract_id: PUBMED:29161166 Helping Smokers with Severe Mental Illness Who Do Not Want to Quit. Background: People with a severe and persistent mental illness are far more likely to smoke than others. While a large portion would like to quit, they are less likely to make quit attempts and succeed. Objective: This study used an Randomized Controlled Trial (RCT) to test an intervention designed to increase engagement in cessation treatment, quit attempts, and quitting in smokers who did not want to quit in the next 30 days. It also compared these smokers with those who were motivated to quit in the next 30 days. Methods: Participants (N = 222), were smokers with significant mental illness receiving intensive outpatient care from Wisconsin Community Support Programs who were not interested in quitting in the next 30 days. They were randomly assigned to either an intervention group or an attention control group. The intervention, administered during four weekly sessions, included a motivational element, components designed to prepare the smoker for a quit attempt, and pre-quit nicotine patch. Additionally, 48 smokers motivated to quit in the next 30 days served as a comparison group. Results: Compared to control participants, smokers receiving the intervention were more likely to be abstinent at the three month follow-up (biochemically verified, intent to treat, 8.5% vs. 1.0%, respectively, p = .01). They were also more likely to accept four more quitting preparation sessions (intent to treat, 50.8% vs 29.2%, respectively, p &lt; .001) but were not more likely to call a telephone tobacco quit line. Conclusion/Importance: Brief motivational interventions increased engagement in cessation treatment and abstinence among smokers with signification mental illness. abstract_id: PUBMED:28623848 Very brief physician advice and supplemental proactive telephone calls to promote smoking reduction and cessation in Chinese male smokers with no intention to quit: a randomized trial. Background And Aims: There is inconsistent evidence that behavioural support to promote smoking reduction is effective at increasing smoking cessation. We examined the effectiveness of brief physician advice together with four very brief telephone calls in promoting smoking cessation among Chinese men via reduction compared with equivalent advice on diet and exercise. Design: A two-group pragmatic randomized controlled trial. Participants And Setting: Male patients attending the Endocrinology and Acupuncture out-patient clinics of a general hospital in Beijing, China. Intervention And Comparators: Physicians advised participants allocated to the smoking-reduction intervention (SRI, n = 181) group to reduce smoking to at least half of their current consumption within 1 month at baseline. At follow-up, a telephone counsellor repeated this advice if the participant had not reduced their cigarette consumption. Participants who had reduced consumption were encouraged to quit smoking. Physicians gave participants in the exercise and diet advice (EDA, n = 188) control group brief advice about physical activity and healthy diet at baseline, and a telephone counsellor reinforced this at each follow-up interview. Both groups had one face-to-face interview at baseline plus five telephone interviews and interventions (approximately 1 minute each) at 1 week and 1-, 3-, 6- and 12-month follow-up. Measurements: The primary outcome was self-reported 6-month prolonged abstinence rate at 12-month follow-up interview. Findings: By intention-to-treat, the self-reported 6-month prolonged abstinence rate at 12-month follow-up in the SRI groups (19 quitters, 15.7%) was higher, but not significantly, than the EDA control group (10 quitters, 7.8%), and the adjusted odds ratio (OR) and 95% confidence interval (CI) was 2.26 (0.97-5.26), P = 0.062. The self-reported 7-day point prevalence quit rate (secondary outcome) in the SRI group was significantly higher than the control group at each follow-up interview (at 12-month follow-up: 13.3 versus 6.9%, OR (95% CI) = 2.09 (1.01, 4.34), P = 0.049). Conclusions: A very brief, proactive and low-cost smoking-reduction intervention without medications for Chinese male smokers with no intention to quit appears to increase smoking abstinence. Answer: The available evidence from the abstracts suggests that the influence of additional support calls on quit success among smokers is not straightforward and may depend on various factors, including the type of support and the smoker's context. One study found that offering more callback counseling support to smokers calling a telephone quit line did not significantly increase quit rates. Specifically, there was no advantage in terms of quit success or cost to offering up to 4 callbacks instead of 2 callbacks to Medicaid/uninsured adult smokers seeking help to stop smoking through the New York State Smokers' Quit Line (PUBMED:21464680). However, other studies highlight the importance of different types of support in the quitting process. For instance, emotional support was found to moderate the relationship between decisional balance of smoking and planning to quit smoking among Chinese male smokers, suggesting that smokers with higher levels of emotional support were more likely to progress to planning to quit smoking (PUBMED:34423746). Similarly, perceived social support was shown to moderate the relationship between the heaviness of smoking and quit attempts among homeless smokers, indicating that appraisal support could attenuate the impact of cigarette dependence on making quit attempts (PUBMED:30016736). Moreover, social support from colleagues and partners was associated with quit success in employees participating in a workplace smoking cessation intervention, with the workplace being identified as a favorable setting for such interventions (PUBMED:31398854). In contrast, perceived addiction to smoking was positively associated with motivation to stop and having recently made a quit attempt but was not clearly associated with future quit attempts or success (PUBMED:30476896). In summary, while offering more support calls alone may not necessarily influence quit success (PUBMED:21464680), the type and source of support, such as emotional and social support from colleagues, partners, and others, can play a significant role in motivating smokers to plan and attempt to quit, and may contribute to successful quitting (PUBMED:34423746, PUBMED:30016736, PUBMED:31398854).
Instruction: Does every US smoker bear the same cigarette tax? Abstracts: abstract_id: PUBMED:24861973 Does every US smoker bear the same cigarette tax? Aims: To evaluate state cigarette excise tax pass-through rates for selected price-minimizing strategies. Design: Multivariate regression analysis of current smokers from a stratified, national, dual-frame telephone survey. Setting: United States. Participants: A total of 16 542 adult current smokers aged 18 years or older. Measurements: Cigarette per pack prices paid with and without coupons were obtained for pack versus carton purchase, use of generic brands versus premium brands, and purchase from Indian reservations versus outside Indian reservations. Findings: The average per pack prices paid differed substantially by price-minimizing strategy. Smokers who used any type of price-minimizing strategies paid substantially less than those who did not use these strategies (P &lt; 0.05). Premium brand users who purchased by pack in places outside Indian reservations paid the entire amount of the excise tax, together with an additional premium of 7-10 cents per pack for every $1 increase in excise tax (pass-through rate of 1.07-1.10, P &lt; 0.05). In contrast, carton purchasers, generic brand users or those who were likely to make their purchases on Indian reservations paid only 30-83 cents per pack for every $1 tax increase (pass-through rate of 0.30-0.83, P &lt; 0.05). Conclusions: Many smokers in the United States are able to avoid the full impact of state excise tax on cost of smoking by buying cartons, using generic brands and buying from Indian reservations. abstract_id: PUBMED:35457762 Trends and Determinants of Cigarette Tax Increases in Japan: The Role of Revenue Targeting. Cigarette prices in Japan are lower than those in most other high-income countries. A more striking fact is that cigarette tax revenues have been kept almost flat at just over two trillion JPY (Japanese yen; 18.2 billion US dollars) over more than three decades, despite steadily declining cigarette sales and seemingly weakening pressure from stakeholders with a vested interest in the tobacco industry. We attempted to examine trends and determinants of cigarette tax increases in Japan. In particular, we hypothesized that the Japanese finance ministry adjusts cigarette taxes to meet a revenue target. Under this hypothesis, we searched for the most plausible amount of the minimum target of tax revenue that corresponds to cigarette tax increases over the past 37 years (1985-2021) using public data on cigarette sales and taxes. The results revealed that two trillion JPY was the minimal revenue target that could plausibly explain the increase in cigarette tax. In addition, the timing and magnitude of cigarette tax increases have been successfully set to maintain stable tax revenues. A key determinant of cigarette tax increases in Japan has been hard revenue targets, rather than public health concerns. abstract_id: PUBMED:33681021 Has tax reforms in India been effective in tobacco control: Evidences on affordability of cigarette after introduction of Goods and Service Tax. Background: Taxes are the most cost-effective mechanism to deter tobacco consumption. However, the tobacco tax system has not been favorable in India. India introduced Goods and Service Tax (GST) in 2017 to make the tax system uniform. This paper seeks to examine the role of the newly rolled out tax system on cigarette prices and affordability and hence consumption. Methods: We used secondary data from different government publications and conducted simple statistical analysis - to present price changes, affordability of cigarette in pre and post GST regime. Affordability was estimated comparing per capita income with price index of cigarette. Results: The findings suggest that the tax structure has not been simplified with multiple taxes imposed based upon the length of cigarettes. The relative WPI of cigarettes is increasing suggesting higher increase in cigarette prices than general price. Affordability though declined in the initial two years as compared to WPI and per capita income, it remained neutral in the post GST regime. Conclusions: The findings suggest that tax reform should aim at influencing affordability adequately so that it deters consumption of cigarette. abstract_id: PUBMED:27840571 New Evidence on the Price Effects of Cigarette Tax Competition. Multiple studies have shown that cigarette taxes are more than fully passed through to cigarette prices and that access to a nearby state with a lower cigarette tax also reduces local cigarette prices. We study two other sources of tax competition: nearby Native American reservations and online sales. Using quarterly data on local cigarette prices from 1976-2003, we show that the opening of a Native American casino within 25 miles of a city center is associated with a $0.016-$0.027 lower per-pack price, while a 50 percentage point increase in internet penetration is associated with a $0.22-$0.25 per-pack price reduction. These effects are not observed for other local prices for which there is no potential tax savings. Our results further our understanding of how tax competition affects local cigarette prices and provide context to studies linking Native American reservations and internet penetration to cigarette smuggling. abstract_id: PUBMED:32097751 Price Elasticity of cigarette smoking demand in the Philippines after the 2012 Sin Tax Reform Act. The Philippine tobacco excise tax reform law passed in 2012 drastically increased cigarette prices which were historically low. A pack of 20 cigarettes costing nine cents (US Dollar) or less was taxed five cents in 2011. When the reform took effect in 2013, each pack was taxed 24 cents which is almost five times the 2011 rate. Alongside the increase in tax is a decline in the prevalence of tobacco use from 28.3% in 2009 to 23.8% in 2015. Seven years since the reform took effect, policymakers are still debating whether the tax introduced was high enough to significantly reduce smoking prevalence. This study estimated the total price elasticity of cigarette demand using regression analyses on the pooled Philippine 2009 and 2015 Global Adult Tobacco Survey data with the excise tax as an instrumental variable. Information from both tax regimes provided the variation in cigarette prices that allowed for the estimation of the price elasticity of smoking participation and intensity. Age, sex, urban residence, educational attainment, employment status, wealth quintile, and media exposure were used as control variables. Results confirm that cigarette demand is inelastic, given that total cigarette price elasticity of demand ranges from -0.56 to -1.10 which means that for every 10% price increase, total cigarette demand declines by 5.6% to 11.0%. This study also provides total price elasticities for different subpopulations. Future studies can use these elasticity estimates to forecast smoking prevalence and provide policy recommendations. abstract_id: PUBMED:29975860 A simulation impact evaluation of a cigarette excise tax increase on licit and illicit cigarette consumption and tax revenue in 36 European countries. Objectives: To assess the impact of a simulated 10% tax-induced cigarette price increase on licit and illicit consumption and tax revenues in 36 European countries. Methods: Employing panel data for licit and illicit cigarette consumption, fixed effects regression models were applied for different income clusters. Results: Total cigarette consumption dropped by about 3.1% as a result of the simulated tax-induced price increase. Annual illicit cigarette consumption increased by 1.52%, (95% confidence interval: 0.21, 2.83), while annual licit cigarette consumption decreased by 4.61% (95% confidence interval: -6.51, -2.72) in the observed 36 European countries. With total consumption decreasing by about 8%, the Czech Republic, Latvia, Lithuania, Poland and Slovakia were affected the most by the price hike. More specifically, licit consumption in these countries decreased by 18.43% (95% confidence interval: -19.91, -16.95) while illicit use increased by 10.99% (95% confidence interval: 6.01, 15.96). Moreover, the overall annual tobacco tax revenue increased by US$14.69 billion in the simulation. Conclusion: Results of the study suggest that European policy makers continue to implement tobacco taxation policies to control smoking prevalence and national health care expenditures. At the same time, efforts to kerb contraband activities along EU Eastern borders should be intensified. abstract_id: PUBMED:24227541 Cigarette tax avoidance and evasion: findings from the International Tobacco Control Policy Evaluation (ITC) Project. Background: Decades of research have produced overwhelming evidence that tobacco taxes reduce tobacco use and increase government tax revenue. The magnitude and effectiveness of taxes in reducing tobacco use provide an incentive for tobacco users, manufacturers and others, most notably criminal networks, to devise ways to avoid or evade tobacco taxes. Consequently, tobacco tax avoidance and tax evasion can reduce the public health and fiscal benefit of tobacco taxes. Objectives: First, this study aims to document, using data from the International Tobacco Control Policy Evaluation Project (ITC), levels and trends in cigarette users' tax avoidance and tax evasion behaviour in a sample of 16 low-, middle- and high-income countries. Second, this study explores factors associated with cigarette tax avoidance and evasion. Methods: We used data from ITC surveys conducted in 16 countries to estimate the extent and type of cigarette tax avoidance/evasion between countries and across time. We used self-reported information about the source of a smoker's last purchase of cigarettes or self-reported packaging information, or similar information gathered by the interviewers during face-to-face interviews to measure tax avoidance/evasion behaviours. We used generalised estimating equations to explore individual-level factors that may affect the likelihood of cigarette tax avoidance or evasion in Canada, the USA, the UK and France. Findings: We found prevalence estimates of cigarette tax avoidance/evasion vary substantially between countries and across time. In Canada, France and the UK, more than 10% of smokers reported last purchasing cigarettes from low or untaxed sources, while in Malaysia some prevalence estimates suggested substantial cigarette tax avoidance/evasion. We also found important associations between household income and education and the likelihood to engage in tax avoidance/evasion. These associations, however, varied both in direction and magnitude across countries. abstract_id: PUBMED:32432189 Have higher cigarette taxes in the United States discouraged smoking? A review of data from 1999-2013. Introduction: Cigarette smoking continues to be a leading cause of preventable morbidity and premature death in the United States. This study examined the impact of federal and state cigarette excise taxes on adult smoking between 1999 and 2013. Methods: Data came from the National Health Interview Survey, Behavioural Risk Factor Surveillance System and Tax Burden on Tobacco. Analyses were done from 1999-2013, 2002-6 and 2009-13. Associations between cigarette taxes, prices and smoking were examined in several states based on cigarette tax: Missouri and Virginia (low tax), Florida, Nebraska and Nevada (median tax) and New York and Rhode Island (high tax). Results: Smoking declined nationally from 22.8% (1999) to 19.0% (2013) with rates falling from 25.7% to 20.5% in men and 21.5% to 15.3% in women. Annual cigarette consumption (in millions) declined from 432,758 to 280,534 and per capita consumption from 1,621 cigarettes (1999) to 894 cigarettes (2013). Smoking declined across age, gender, race/ethnicity, education and poverty level in 2009-13 compared to 2002-6 with large reductions in states with higher cigarette taxes. Negative correlations between cigarette tax and smoking, and positive correlations between cigarette tax and price, were seen. Conclusions: Higher cigarette taxes appear to have had a negative impact on smoking in the US. Our data suggest that states with higher cigarette taxes have lower smoking rates than states with lower taxes. Tax measures are however implemented as part of a comprehensive tobacco control package and further research is needed to assess the relative contribution of cigarette tax on smoking reductions in the states examined. abstract_id: PUBMED:28219975 Changes in cigarette expenditure minimising strategies before and after a cigarette tax increase. Background: Smokers use cigarette expenditure minimising strategies (CEMS) to alleviate the effect of tax increases on their cigarette expenses. We examined changes in smokers' CEMS use before and after a 2013 Minnesota $1.75 cigarette tax increase. Methods: Data were from representative samples of smokers who participated in the Minnesota Adult Tobacco Survey 2010 (n=948) and 2014 (n=1229). Participants indicated CEMS used in the past year from a list. Weighted multiple logistic regressions were used to examine changes in prevalence of each CEMS use over time adjusting for demographics and cigarette consumption. Characteristics associated with CEMS use in 2014 were examined. Results: Between 2010 and 2014, more smokers tried to save money on cigarettes by rolling their own cigarettes (from 19% to 29%), using other tobacco products (from 13% to 25%), and buying cigarettes from cheaper places (from 48% to 55%). Yet, fewer smokers used coupons/promotions (from 63% to 50%) and bought cigarettes by the carton (from 39% to 32%). These changes varied somewhat by race/ethnicity and education, for example, more smokers with &lt;high school education used discount brands over time than more educated smokers. CEMS use in 2014 varied by demographics, for example, smokers with lower education were more likely than those with higher education to purchase discount brands, roll their own cigarettes, use coupons/promotions and cut back on smoking (p&lt;0.05). Conclusions: Socially disadvantaged smokers were most likely to use CEMS and continue smoking after a cigarette tax increase. Regulations that would reduce CEMS use could boost the effectiveness of cigarette tax increases. abstract_id: PUBMED:32403253 The Impact of Cigarette Excise Tax Increases on Regular Drinking Behavior: Evidence from China. (1) Background: Many studies have shown that increasing taxation on cigarettes does play a role in tobacco control, but few studies have focused on whether increasing cigarette excise taxes significantly affects alcohol consumption. In this article, we aim to examine the effects of China's 2015 increase in the cigarette excise tax on residents' regular drinking behavior. (2) Methods: Using survey data from China Family Panel Studies (CFPS), we performed a panel logit regression analysis to model the relationship between the cigarette excise tax and regular drinking behavior. The Propensity Score Matching with Difference-in-Differences (PSM-DID) approach was adopted to determine the extent to which the cigarette excise tax affected residents' drinking behavior. To test whether the cigarette excise tax could change regular drinking behavior by decreasing daily smoking quantity, we used an interaction term model. (3) Results: China's 2015 increase in the cigarette excise tax had a significant negative effect on the probability of regular alcohol consumption among smokers, and the cigarette excise tax worked by reducing the average daily smoking of smokers. We also found that the regular drinking behavior of male smokers was more deeply affected by the increased cigarette excise tax than females. (4) Conclusions: Our research results not only give a deeper understanding of the impact of the cigarette excise tax, but also provide an important reference with which to guide future decisions concerning excise taxes imposed on cigarettes. Answer: No, every US smoker does not bear the same cigarette tax. The average per pack prices paid by smokers differ substantially based on the price-minimizing strategies they use. Smokers who employ any type of price-minimizing strategies, such as buying cartons, using generic brands, or purchasing from Indian reservations, pay substantially less than those who do not use these strategies. Specifically, premium brand users who purchase by pack outside Indian reservations pay the entire amount of the excise tax, along with an additional premium of 7-10 cents per pack for every $1 increase in excise tax (pass-through rate of 1.07-1.10). In contrast, carton purchasers, generic brand users, or those likely to make their purchases on Indian reservations pay only 30-83 cents per pack for every $1 tax increase (pass-through rate of 0.30-0.83) (PUBMED:24861973).
Instruction: Post-traumatic growth in parents after a child's admission to intensive care: maybe Nietzsche was right? Abstracts: abstract_id: PUBMED:19225754 Post-traumatic growth in parents after a child's admission to intensive care: maybe Nietzsche was right? Objective: The aim of this prospective study was to establish the degree to which parents report post-traumatic growth after the intensive care treatment of their child. Design: Prospective cross-sectional cohort study. Setting: Paediatric Intensive Care Unit (PICU). Subjects: A total of 50 parents of children, admitted to PICU for &gt;12 h. Measurements And Results: Parents provided stress ratings as their child was discharged from PICU and, 4 months later, completed postal questionnaires rating their anxiety, depression, post-traumatic stress and post-traumatic growth. As much as 44 parents (88%) indicated on the Posttraumatic Growth Inventory (PTGI) [1] that they had experienced a positive change to a great degree as a result of their experiences in PICU. Parents of children who were ventilated (P = 0.024) reported statistically higher post-traumatic growth as did parents of older children (P = 0.032). PTGI scores were positively correlated with post-traumatic stress scores at 4 months (P = 0.021), but on closer inspection this relationship was found to be curvilinear. Conclusions: Post-traumatic growth emerged as a salient concept for this population. It was more strongly associated with moderate levels of post-traumatic stress, than high or low levels. abstract_id: PUBMED:35526209 A Systematic Review of the Factors Associated with Post-Traumatic Growth in Parents Following Admission of Their Child to the Intensive Care Unit. This systematic review aims to identify the demographic, clinical and psychological factors associated with post-traumatic growth (PTG) in parents following their child's admission to the intensive care unit (ICU). Papers published up to September 2021 were identified following a search of electronic databases (PubMed, Medline, Web of Science, PsycINFO, CINAHL, PTSDpubs and EMBASE). Studies were included if they involved a sample of parents whose children were previously admitted to ICU and reported correlational data. 1777 papers were reviewed. Fourteen studies were eligible for inclusion; four were deemed to be of good methodological quality, two were poor, and the remaining eight studies were fair. Factors associated with PTG were identified. Mothers, and parents of older children, experienced greater PTG. Parents who perceived their child's illness as more severe had greater PTG. Strong associations were uncovered between PTG and post-traumatic stress, psychological well-being and coping. PTG is commonly experienced by this population. Psychological factors are more commonly associated with PTG in comparison with demographic and clinical factors, suggesting that parents' subjective ICU experience may be greater associated with PTG than the objective reality. abstract_id: PUBMED:34612728 Post-Traumatic Growth in Parents following Their Child's Death in a Pediatric Intensive Care Unit. Background: Although bereaved parents suffer greatly, some may experience positive change referred to as post-traumatic growth. Objective: Explore the extent to which parents perceive post-traumatic growth after their child's death in a pediatric intensive care unit (PICU), and associated factors. Design: Longitudinal parent survey conducted 6 and 13 months after a child's death. Surveys included the Post-traumatic Growth Inventory-Short Form (PTGI-SF), a 10-item measure with range of 0-50 where higher scores indicate more post-traumatic growth. Surveys also included the Inventory of Complicated Grief (ICG), the Patient Health Questionnaire-8 (PHQ-8) for depression, the Short Post-Traumatic Stress Disorder Rating Interview (SPRINT), a single item on perceived overall health, and sociodemographics. Setting/Subjects: One hundred fifty-seven parents of 104 children who died in 1 of 8 PICUs affiliated with the U.S. Collaborative Pediatric Critical Care Research Network. Results: Of participating parents, 62.4% were female, 71.6% White, 82.7% married, and 89.2% had at least a high school education. Mean PTGI-SF scores were 27.5 ± 12.52 (range 5-50) at 6 months and 28.6 ± 11.52 (range 2-49) at 13 months (p = 0.181). On multivariate modeling, higher education (compared with those not completing high school) and higher 6-month ICG scores (reflecting more complicated grief symptoms) were associated with lower 13-month PTGI-SF scores (p = 0.005 and 0.033, respectively). Conclusion: Parents bereaved by their child's PICU death perceive a moderate degree of post-traumatic growth in the first 13 months after the death however variability is wide. Education level and complicated grief symptoms may influence parents' perception of post-traumatic growth. abstract_id: PUBMED:36245746 Parent post-traumatic growth after a child's critical illness. Objective: Post-traumatic growth is the experience of a positive change after a traumatic event. Our objective is to characterize the factors associated with post-traumatic growth in parents after a child's pediatric intensive care unit (PICU) admission. Study Design: A cross-sectional survey study examining post-traumatic growth and select independent variables in parents 1 year after a child's ≥72 h PICU admission for an acute illness or injury. The study was completed in parents of children discharge alive from a tertiary care PICU from January 1, 2017 to December 31, 2017. A mixed-effects linear regression model was built to evaluate the association of post-traumatic stress, anxiety, depression, resiliency, family function, and child function with post-traumatic growth. Results: Eighty-two parents of 52 children discharged alive in 2017 completed the survey. Fifty-two percent were ≥35 years and 64.3% were mothers. Median age of their children was 2.8 years (IQR 0.5-11.3) with a median hospital stay of 12 Days (IQR 6-20). Moderate-to-high levels of post-traumatic growth occurred in 67.1% of parents. Increased hospital length of stay (β Coeff 0.85; p = 0.004, 95% CI 0.27, 1.43) and parent post-traumatic stress symptoms (β Coeff 1.04; p = 0.006, 95% CI 0.29, 1.78) were associated with increased post-traumatic growth, and increased parent depression symptoms (β Coeff -1.96; p = 0.015; 95% CI -3.54, -0.38) with decreased post-traumatic growth. Conclusion: Longer child hospital stays and increased parent post-traumatic stress symptoms were associated with increased post-traumatic growth, while increased depression was associated with less post-traumatic growth. The impact of future PICU parent psychosocial interventions on parents may be best assessed using a dual outcome focused on both reducing negative mental health symptoms while concurrently promoting skills to facilitate parent adaptation and post-traumatic growth. abstract_id: PUBMED:28882361 Relation between parental psychopathology and posttraumatic growth after a child's admission to intensive care: Two faces of the same coin? Objectives: Confronted with the potentially traumatic experience of a child's admission to a paediatric intensive care unit, parents may experience psychopathological post-trauma symptoms as well as posttraumatic growth. The aim of this cross-sectional study was to explore the relation between psychopathology symptoms, namely, posttraumatic stress disorder), anxiety and depression, as well as post traumatic growth in parents following their child's hospitalisation in a paediatric intensive care unit. Methods: Six months after their child's discharge, 143 parents completed the questionnaire, which assessed post traumatic growth (Posttraumatic Growth Inventory), post traumatic stress disorder (Davidson Trauma Scale), depression and anxiety (Hospital Anxiety and Depression Scale). Results: Of the 143 parents, 23.1% reported symptoms of post traumatic stress disorder, 21% reported symptoms of moderate to severe anxiety, 9.1% reported symptoms of moderate to severe depression and 37.1% reported at least a medium degree of post traumatic growth. There was a moderate, direct association between post traumatic stress disorder, depression and anxiety with post traumatic growth. Higher scores in anxiety, depression and post traumatic stress disorder were associated with higher levels of post traumatic growth, contradicting the notion of an inverted U-shaped relationship between psychopathology symptoms and post traumatic growth. Conclusions: Given that positive and negative outcomes after a child's critical admission tend to co-occur, it is surmised that parents who indicate post traumatic growth do not deny the difficulties. While not negating the negative impact on the mental health of a parent with a child admitted to intensive care, including the assessment of post traumatic growth as an outcome following this event has important implications for research and clinical practice. abstract_id: PUBMED:32208588 Is post-traumatic growth possible in the parents of former patients of neonatal intensive care units? Introduction: The birth of a sick child, as well as the infant's subsequent hospitalization in an neonatal intensive care unit (NICU), is undoubtedly stressful for the parents of the infant. Most studies conducted in groups of parents of such children focus on the assessment of the negative changes in their functioning due to such stress. The authors were interested in positive changes in the psychological functioning of parents that may occur after traumatic experiences. These changes are referred to as post-traumatic growth (PTG). Objective: The aim of this study was to examine whether parents experience post-traumatic growth and to determine the predictors of PTG in fathers and mothers, depending on the coping strategy adopted. Material And Methods: The study involved 82 parents, whose children were previously hospitalized in neonatal intensive care unit. The methods used included the following standardized psychological tests: the Post-traumatic Growth Inventory, the Impact of Event Scale-Revised, and the COPE Inventory. Socio-demographic and medical data were also collected. Results: Analysis of the data proved that the illness and hospitalization of a child are significantly associated with the occurrence of post-traumatic growth in parents. PTG in mothers is higher than in fathers. Predictors of PTG in fathers include the use of strategies aimed at seeking emotional support and positive reinterpretation and growth, while in the group of mothers, seeking emotional support, religious coping and planning were the coping strategies used. Conclusions: Research on post-traumatic growth should be expanded. Knowledge of the predictors of positive growth in a difficult situation can contribute to the widespread implementation of primary and secondary prevention of post-traumatic stress symptoms as well as increase positive changes in individuals who have experienced traumatic events. abstract_id: PUBMED:27539892 Post-traumatic growth in parents after infants' neonatal intensive care unit hospitalisation. Aims And Objectives: To determine the incidence and severity of post-traumatic growth in a group of parents of children hospitalised in the intensive care unit in the past. Background: A premature birth or a birth with life-threatening conditions is a traumatic event for the parents and may lead to a number of changes, some of which are positive, known as post-traumatic growth. Method: The survey covered 106 parents of 67 infants aged 3-12 months. An original questionnaire and standardised research tools were used in the study: Impact Event Scale - Revised, Perceived Stress Scale, COPE Inventory: Positive Reinterpretation and Growth, Coping Inventory for Stressful Situations, Post-traumatic Growth Inventory and Parent and Infant Characteristic Questionnaire. Results: Due to a stepwise backward variables selection, we found three main factors that explain post-traumatic growth: post-traumatic stress symptoms, positive reinterpretation and growth and dichotomic variable infants' survival. This model explained 29% of the post-traumatic growth variation. Similar models that were considered separately for mothers and fathers showed no significantly better properties. Conclusion: Post-traumatic growth was related to a lesser extent to sociodemographic variables or the stressor itself, and related to a far greater extent to psychological factors. Relevance To Clinical Practice: Our study highlights the fact that post-traumatic growth in the parents of neonates hospitalised in the neonatal intensive care units remains under-evaluated. abstract_id: PUBMED:25959486 Parents' experiences of their child's admission to paediatric intensive care. Background: Admission of a child to the paediatric intensive care unit (PICU) may be one of the most stressful events for parents because the outcome is often uncertain. So how do parents cope, and how can we as nurses help them? Aim: To explore the lived experiences of parents whose children have been admitted to a PICU. Methods: Using Heidegger's school of interpretative phenomenology, six unstructured interviews were conducted. These were transcribed and analysed following interpretative phenomenological analysis. Participants were chosen through purposive sampling. Findings: Each participant had different emergent themes. Themes included trauma, responsibility, anxiety about where the child is, post-traumatic stress symptoms and transfer to the ward. Conclusion: The lived experience of a parent is fraught with varying different emotions, with the beginning of the journey and the ending of the PICU admission causing the most anxiety. abstract_id: PUBMED:38030534 Post-traumatic stress and health-related quality of life after admission to paediatric intensive care: Longitudinal associations in mother-child dyads. Objective: Elevated post-traumatic stress symptoms (PTSS) and reduced health-related quality of life (HRQoL) are commonly experienced in both children and their parent's following admission to the paediatric intensive care unit (PICU). Previous research has demonstrated a conflict in the directionality between PTSS and HRQoL in children and their parents. Our study sought to explore the interconnection and transmission between PTSS and HRQoL in children and their mothers following an admission to the PICU. Design: Prospective longitudinal design. Setting: Two tertiary care PICUs in Brisbane, Australia. Patients: Two hundred eighty-two mother-child dyads of children aged 2-16 years admitted to the PICU for &gt;8 h. Interventions: None. Measurement And Main Results: Four waves of dyadic data (n = 282 mother-child dyads) over a 12-month period from the PICU post-traumatic stress study were used. Cross-lagged panel modelling was used to examine the link between maternal-rated PTSS and HRQoL across children and their mothers. Actor-partner interdependence modelling was then used to examine the interconnections between mothers and children over time. In the dyadic model, partner effects were only present from mother to their child (i.e., higher maternal PTSS was predictive of higher child PTSS at subsequent time points). Higher maternal PTSS predicted lower maternal mental HRQoL but not lower child psychosocial HRQoL. Actor effects were also present with lower child psychosocial HRQoL, predicting higher child PTSS at subsequent time points. Findings indicated that a unidirectional transmission process from mother to child may be present after a child's life-threatening illness. Conclusions: This study provides evidence for a long-term negative influence of maternal PTSS on child PTSS in families who have experienced a paediatric critical illness or injury. This highlights the important role of maternal wellbeing in children's mental health outcomes following PICU admission. Further research needs to explore the temporal and dyadic relationships of PTSS and HRQoL. abstract_id: PUBMED:37950321 Post-traumatic growth and influencing factors among parents of premature infants: a cross-sectional study. Background: Post-traumatic growth is a positive psychological change that may aid recovery in individuals experiencing trauma. Owing to the lack of research in the area of parental care for premature infants, we decided to explore the levels and factors influencing post-traumatic growth among parents of premature infants in neonatal intensive care units. We believe that these findings will help reassess existing care practices so that healthcare providers can promptly identify negative emotions and take necessary measures to help develop the potential to enhance post-traumatic growth. Methods: A cross-sectional survey was conducted using convenience sampling between February and September 2022. Data were analysed using independent sample t-tests and one-way analysis of variance (ANOVA). Bivariate correlations were analysed using the Pearson's or Spearman's method, and related factors were analysed using multiple linear regression. We followed the SRQR checklist throughout the study period. Results: A total of 217 patients were effectively treated, with a recovery rate of 98.64%. Univariate analysis showed that the length of hospital stay, presence of only one child, parents' age, marital status, education level, working status, and per capita monthly familial income were influencing factors. Bivariate analysis showed that post-traumatic growth was moderately and positively correlated with perceived social support, rumination, and family resilience. Multiple linear regression showed that purposeful contemplation, family resilience, education, family support, age, and marital status entered into the regression equation and together accounted for 47.4% of the total variation. Conclusions: It is necessary to pay attention to post-traumatic growth and familial stability in these families, provide aid in building a good support system, and encourage parents to mobilise their family and favourable factors to increase post-traumatic growth levels. Answer: The concept of post-traumatic growth (PTG) suggests that individuals can experience positive changes as a result of struggling with major life crises or traumatic events, which aligns with Nietzsche's famous adage, "What does not kill me, makes me stronger." Research on parents who have had a child admitted to intensive care supports the notion that PTG can occur in such stressful circumstances. A study by Colville et al. (PUBMED:19225754) found that a significant number of parents (88%) reported experiencing positive change to a great degree after their child's treatment in a Pediatric Intensive Care Unit (PICU). This growth was more pronounced in parents of children who were ventilated and parents of older children. Interestingly, PTG was more strongly associated with moderate levels of post-traumatic stress rather than high or low levels, suggesting a complex relationship between stress and growth. A systematic review (PUBMED:35526209) identified factors associated with PTG in parents following their child's ICU admission. Mothers and parents of older children experienced greater PTG, and a strong perception of the child's illness severity was also linked to greater PTG. Psychological factors, including well-being and coping, had stronger associations with PTG than demographic and clinical factors, indicating that the subjective experience of the ICU may be more influential in PTG than the objective reality. Further studies (PUBMED:34612728, PUBMED:36245746, PUBMED:28882361, PUBMED:32208588, PUBMED:27539892) have consistently shown that PTG is a common experience among parents after a child's critical illness or NICU admission. Factors such as increased hospital length of stay, post-traumatic stress symptoms, coping strategies, and psychological support have been associated with increased PTG. Conversely, higher levels of parental depression were associated with decreased PTG (PUBMED:36245746). Research also indicates that PTG does not negate the negative impact on mental health but may co-occur with psychopathological symptoms (PUBMED:28882361). This suggests that parents who report PTG are not in denial about the difficulties they face. In conclusion, the evidence supports the idea that parents can experience PTG following their child's admission to intensive care, which is in line with Nietzsche's perspective on growth through adversity. The findings highlight the importance of considering both the negative and positive outcomes of traumatic events and suggest that interventions should aim to support parents in ways that facilitate both the reduction of negative mental health symptoms and the promotion of adaptation and growth (PUBMED:36245746).