Instruction: Is laparoscopic evaluation of digestive cancers legitimate? Abstracts: abstract_id: PUBMED:9805796 Is laparoscopic evaluation of digestive cancers legitimate? A prospective study of 109 cases Unlabelled: The aim of this study is to evaluate, after laparoscopic exploration or treatment of digestive cancers, the incidence of port site metastases and the incidence of unnecessary laparotomy in advanced disease. Patients: 109 patients were included in this study: 91 had localized digestive cancers and 18 had disseminated disease with hepatic and/or peritoneal metastases. Methods: All the patients underwent laparoscopy and laparoscopic ultrasonography under general anesthesia prior to a definitive decision on therapeutic management. Depending on the circumstances, histological or cytological biopsies were performed. The trocar puncture sites were examined clinically and, if required, by ultrasonography, monthly throughout the course of the disease. Chemotherapy was initiated in 60.5 per cent of patients. Results: 35 patients (32.1 per cent) had only a laparoscopic examination, with a mean survival of 4.8 months; 22.9 per cent of patients had laparoscopic treatment of their lesion and 44.9 per cent had conventional treatment. Laparoscopic exploration allowed the detection of 52 lesions that had not been identified by conventional imaging techniques and thereby avoided 35 laparotomies (32.1 per cent). Tumor invasion of the peritoneum was present in 46 per cent of cases, most notably in the cases of pancreatico-biliary cancers. The overall mean survival was 32.7 months, while it was 9.8 months in cases of palliative treatment. In total, 436 trocars were used. There was one case (0.9 per cent of patients, or 0.2 per cent of ports) of a metastasis occurring at the site of the trocar following treatment by right-sided laparoscopic colectomy in a patient with disseminated cancer. Two factors seem to be involved: peritoneal spread of the tumor and manipulation of the tumor. Conclusions: Laparoscopic exploration for digestive cancers is a legitimate technique. It allows the detection of lesions that are not identified by conventional imaging techniques, permits a more accurate assessment of the resectability of a tumor and reduces the number of unnecessary laparotomies. The incidence of metastases at the site of the trocar is low and is closely linked to the presence of disseminated disease and manipulation of the tumor. abstract_id: PUBMED:7575125 Comparative evaluation of contact ultrasonography and transcystic cholangiography during laparoscopic cholecystectomy: a prospective study. Background: The role of intraoperative cholangiography (IOC) during laparoscopic cholecystectomy (LC) is controversial. While many advocate its routine use, others argue for a selective approach. Recent reports showed laparoscopic contact ultrasonography (LCU) as a viable alternative to IOC. However, no prospective data were available to compare the accuracy, efficacy, and safety of the two diagnostic procedures. Objective: To evaluate the benefits and disadvantages of LCU and IOC during LC. Methods: Seventy-eight patients who underwent LC at Pisa (Italy) and Dundee (Scotland) university hospitals were entered in a prospective data registry. Details of operative technique and results of LCU and IOC were analyzed by reviewing videotape recordings of each procedure. Results: Laparoscopic cholecystectomy was achieved in 73 patients, with five requiring conversion to the open procedure.
The success rate of IOC was 90% (64/71). Performance of IOC demanded more than twice the time needed for LCU. Eleven percent (8/71) of cholangiograms were abnormal, with a false-positive rate of 1% (1/71). Laparoscopic contact ultrasonography detected all four instances of unsuspected ductal stones but none of the three cases of anomalous biliary anatomy. Clinically relevant incidental findings were picked up by LCU in six patients. Conclusions: Laparoscopic contact ultrasonography proved to be extremely accurate in the detection of ductal stones but less reliable in the disclosure of anomalous biliary anatomy. The essential role of IOC in providing a clear spatial display of the biliary tract was confirmed. Since the two procedures are complementary, their combined use is advisable in difficult LC to avoid retained common bile duct stones and prevent iatrogenic complications. abstract_id: PUBMED:37383140 Application of laparoscopic surgery in gallbladder carcinoma. Gallbladder carcinoma (GC) is a rare type of cancer of the digestive system, with an incidence that varies by region. Surgery plays a primary role in the comprehensive treatment of GC and is the only known cure. Compared with traditional open surgery, laparoscopic surgery has the advantages of convenient operation and a magnified field of view. Laparoscopic surgery has been successful in many fields, including gastrointestinal medicine and gynecology. The gallbladder was one of the first organs to be treated by laparoscopic surgery, and laparoscopic cholecystectomy has become the gold standard surgical treatment for benign gallbladder diseases. However, the safety and feasibility of laparoscopic surgery for patients with GC remain controversial. Over the past several decades, research has focused on laparoscopic surgery for GC. The disadvantages of laparoscopic surgery include a high incidence of gallbladder perforation, possible port site metastasis, and potential tumor seeding. The advantages of laparoscopic surgery include less intraoperative blood loss, shorter postoperative hospital stay, and fewer complications. Nevertheless, studies have provided contrasting conclusions over time. In general, recent research has tended to support laparoscopic surgery. However, the application of laparoscopic surgery in GC is still in the exploratory stage. Here, we provide an overview of previous studies, with the aim of introducing the application of laparoscopy in GC. abstract_id: PUBMED:9371056 Laparoscopic surgery for cancer patients. Laparoscopy is an effective tool for diagnosis and staging of malignancies. Laparoscopic resection of abdominal tumors has been performed rarely, with two exceptions: laparoscopic adrenalectomy and laparoscopic resection of colorectal cancer. One of the best applications of minimally invasive surgery is the use of laparoscopic techniques for palliation of abdominal cancer. Requiring thorough training and preparation of surgeons and mandating their strict credentialing will reduce the risk of complications from laparoscopic surgery. abstract_id: PUBMED:30307417 Early outcomes of open, laparoscopic and robot-assisted pancreatoduodenectomy Aim: To study immediate results of open, laparoscopic and robot-assisted pancreatoduodenectomy for malignancies. Material And Methods: There were 158 patients with cancer of the biliopancreatoduodenal area. Open procedures were performed in 118 cases, laparoscopic in 17, and robot-assisted pancreatoduodenectomy in 23.
Results: After 'standard' pancreatoduodenectomy, 31 complications (62.0%) were registered; after laparoscopic surgery, 12 (24.0%); and after robot-assisted surgery, 7 (14.0%). No correlation between surgical approach and the probability of complications was found (correlation coefficient 0.10491); however, significant differences in the incidence of complications after the various surgical approaches were observed (χ2=6.8832; df=0.9679; p<0.05). Conclusion: Laparoscopic and robot-assisted pancreatoduodenectomy were not followed by increased early postoperative morbidity. Moreover, the minimally invasive approach was associated with improved outcomes. abstract_id: PUBMED:18225733 Evaluation of a four-year experience with laparoscopic surgery in Abidjan, Ivory Coast Laparoscopic surgery is now a widely accepted modality for gastrointestinal surgery. In this report the authors evaluate their four-year experience with laparoscopic surgery in a total of 494 patients in Abidjan, Ivory Coast. Cholecystectomy and appendectomy were the main indications in this series. However, laparoscopic surgery has a wide range of applications in Black Africa, not only for elective indications such as evaluation of chronic abdominal pain and digestive cancer but also for emergency indications such as peritonitis and abdominal trauma. abstract_id: PUBMED:8740676 Port site metastasis. An unresolved problem in laparoscopic surgery. A review. After the explosive success of laparoscopic cholecystectomy, great interest has been shown in the laparoscopic treatment of digestive malignancies. Laparoscopy has been proposed for diagnosis and staging, and as a technical option aiming to cure or palliate. But this optimism has been tempered by doubt about the potential disseminatory role of laparoscopy after the observation of a large number of port site seeding tumors. Since the first laparoscopic cholecystectomy, more than 100 port site metastases have been reported, without a clear explanation for these findings. Port site recurrences have been observed after gynaecologic procedures, after laparoscopic cholecystectomy in which an unsuspected gallbladder cancer was found, and after laparoscopic operations indicated for oncological treatment of digestive tumors, mainly colorectal cancer. Other cases have been reported after thoracoscopic resection of oesophageal cancer or urologic cancer, and even after staging laparoscopy associated with sampling. Possible mechanisms for port site cell implantation include direct implantation in the wound during unprotected and forced tissue retrieval or by contaminated instruments during tumor dissection, the effect of gas turbulence in long laparoscopic procedures, and embolization of exfoliated cells during tumor dissection or hematogenous dissemination. Probably, a multifactorial mechanism is responsible, in which the key factors could be a long operative procedure, high-pressure pneumoperitoneum, tumoral manipulation during dissection, and forced extraction of unprotected specimens. Prophylactic measures proposed to avoid this disastrous complication are the use of protective bags for tissue retrieval, peritoneal lavage with heparin in order to avoid adhesion of free cells, or lavage with cytocidal solutions. abstract_id: PUBMED:24803799 Laparoscopic surgery for benign and malign diseases of the digestive system: indications, limitations, and evidence. The laparoscopic technique was introduced in gastrointestinal surgery in the mid 1980s. Since then, the development of this technique has been extraordinary.
Triggered by technical innovations (stapling devices and coagulation/dissecting devices), nowadays virtually any type of gastrointestinal resection has been performed successfully laparoscopically and can be offered laparoscopically, depending on the patient's condition. This summary gives an overview of 30 years of laparoscopic surgery with a focus on today's indications and evidence. Main indications remain the more common procedures, e.g., appendectomy, cholecystectomy, bariatric procedures, and colorectal resections. For all these indications, the laparoscopic approach has become the gold standard, with less perioperative morbidity. Regarding oncological outcome, there have been several high-quality randomized controlled trials that demonstrated equivalence between laparoscopic and open colorectal resections. Less common procedures like esophagectomy, oncological gastrectomy, and liver and pancreatic resections can be performed successfully as well by an experienced surgeon. However, the evidence for these special indications is poor and a general recommendation cannot be given. In conclusion, laparoscopic surgery has revolutionized the field of gastrointestinal surgery by reducing perioperative morbidity without disregarding surgical principles, especially in oncological surgery. abstract_id: PUBMED:26902090 Learning Curve for Laparoscopic Pancreaticoduodenectomy: a CUSUM Analysis. Background: Laparoscopic pancreaticoduodenectomy (LPD), an advanced minimally invasive technique, has demonstrated advantages over open pancreaticoduodenectomy (OPD). However, this complex procedure requires a relatively long training period to ensure technical proficiency. This study was therefore designed to analyze the learning curve for LPD. Methods: From October 2010 to September 2015, 63 standard pancreaticoduodenectomy procedures were scheduled to be performed laparoscopically by a single surgeon at the Department of Pancreatic Surgery, West China Hospital, Sichuan University, China. After applying the inclusion and exclusion criteria, a total of 57 patients were included in the study. Data for all the patients, including preoperative, intraoperative, and postoperative variables, were prospectively collected and analyzed. The learning curve for LPD was evaluated using both cumulative sum (CUSUM) and risk-adjusted CUSUM (RA-CUSUM) methods. All of the variables among the learning curve phases were compared. Results: Based on the CUSUM and the RA-CUSUM analyses, the learning curve for LPD was grouped into three phases: phase I was the initial learning period (cases 1-11), phase II represented the technical competence period (cases 12-38), and phase III was regarded as the challenging period (cases 39-57). The operative time, intraoperative blood loss, and postoperative ICU demand significantly decreased with the learning curve. More lymph nodes were collected after the initial learning period. There were no significant differences in terms of postoperative complications or the 30-day mortality among the three phases. More challenging cases were encountered in phase III. Conclusions: According to this study, the learning curve for LPD consisted of three phases. Conservatively, to attain technical competence for performing LPD, a minimum of 40 cases are required for laparoscopic surgeons with a degree of laparoscopic experience. abstract_id: PUBMED:21029461 Laparoscopic and open resection for colorectal cancer: an evaluation of cellular immunity.
Background: Colorectal cancer is one of the most frequent malignant tumors of the digestive tract, with high morbidity and mortality all over the world. Despite recent promising clinical results, less information is available regarding the perioperative immunological effects of laparoscopic surgery compared with open surgery. This study aimed to compare the cellular immune responses of patients who underwent laparoscopic (LCR) and open resections (OCR) for colorectal cancer. Methods: Between Mar 2009 and Sep 2009, 35 patients with colorectal carcinoma underwent LCR by a laparoscopic surgeon. These patients were compared with 33 cases who underwent conventional OCR by a colorectal surgeon. Clinical data about the patients were collected prospectively. Comparison of the operative details and postoperative outcomes between laparoscopic and open resection was performed. Peripheral venous blood samples from these 68 patients were taken prior to surgery as well as on postoperative days (POD) 1, 4 and 7. Cell counts of total white blood cells, neutrophils, lymphocyte subpopulations, and natural killer (NK) cells, as well as CRP, were determined by a blood counting instrument, flow cytometry and a hematology analyzer. Results: There was no difference in the age, gender and tumor status between the two groups. The operating time was a little longer in the laparoscopic group (P > 0.05), but the blood loss was less (P = 0.039). Patients with laparoscopic resection had earlier return of bowel function and earlier resumption of diet as well as shorter median hospital stay (P < 0.001). Compared with the OCR group, cell numbers of total lymphocytes, CD4+ T cells and CD8+ T cells were significantly higher in the LCR group (P < 0.05) on POD 4, while there was no difference in the CD45RO+ T or NK cell numbers between the two groups. Cellular immune responses were similar between the two groups on POD 1 and POD 7. Conclusions: Laparoscopic colorectal resection involves less surgical stress and offers short-term advantages compared with open resection. The cellular immune response appears to be less affected by laparoscopic colorectal resection than by open resection. Answer: Yes, laparoscopic evaluation of digestive cancers is considered a legitimate technique. The study by PUBMED:9805796 found that laparoscopic exploration for digestive cancers allows the detection of lesions not identified by conventional imaging techniques, permits a more accurate assessment of the resectability of a tumor, and reduces the number of unnecessary laparotomies. The incidence of metastases at the site of the trocar is low and is closely linked to the presence of disseminated disease and manipulation of the tumor. Additionally, other studies have supported the use of laparoscopic surgery for digestive cancers. PUBMED:37383140 discusses the application of laparoscopic surgery in gallbladder carcinoma, highlighting its advantages such as less intraoperative blood loss, shorter postoperative hospital stay, and fewer complications, although it also mentions the high incidence of gallbladder perforation, possible port site metastasis, and potential tumor seeding as disadvantages. PUBMED:9371056 emphasizes that laparoscopy is an effective tool for diagnosis and staging of malignancies, and PUBMED:30307417 reports that laparoscopic and robot-assisted pancreatoduodenectomy show improved early outcomes compared to open procedures, with no increase in early postoperative morbidity.
PUBMED:24803799 provides an overview of 30 years of laparoscopic surgery, stating that it has revolutionized gastrointestinal surgery by reducing perioperative morbidity without compromising surgical principles, especially in oncological surgery. PUBMED:21029461 also suggests that laparoscopic colorectal resection is associated with less surgical stress and short-term advantages compared to open resection, with the cellular immune response appearing to be less affected by laparoscopic resection. In conclusion, the legitimacy of laparoscopic evaluation and treatment of digestive cancers is supported by its ability to accurately detect and assess tumors, reduce unnecessary surgeries, and potentially offer better postoperative outcomes with less impact on the patient's immune system.
Instruction: Need-based resource allocation: different need indicators, different results? Abstracts: abstract_id: PUBMED:19622159 Need-based resource allocation: different need indicators, different results? Background: A key policy objective in most publicly financed health care systems is to allocate resources according to need. Many jurisdictions implement this policy objective through need-based allocation models. To date, no gold standard exists for selecting need indicators. In the absence of a gold standard, the sensitivity of results to the choice of need indicators is of concern. The primary objective of this study was to assess the consistency and plausibility of estimates of per capita relative need for health services across Canadian provinces based on different need indicators. Methods: Using the 2000/2001 Canadian Community Health Survey, we estimated relative per capita need for general practitioner, specialist, and hospital services by province using two approaches that incorporated a different set of need indicators: (1) demographics (age and sex), and (2) demographics, socioeconomic status, and health status. For both approaches, we first fitted regression models to estimate standard utilization of each of three types of health services by indicators of need. We defined the standard as average levels of utilization by need indicators in the national sample. Subsequently, we estimated expected per capita utilization of each type of health service in each province. We compared these estimates of per capita relative need with premature mortality in each province to check their face validity. Results: Both approaches suggested that expected relative per capita need for the three services varies across provinces. Different approaches, however, yielded different and inconsistent results. Moreover, provincial per capita relative need for the three health services did not always indicate the same direction of need as that suggested by premature mortality in each province. In particular, the two approaches suggested Newfoundland had less need than the Canadian average for all three services, but it had the highest premature mortality in Canada. Conclusion: Substantial differences in need for health care may exist across Canadian provinces, but the direction and magnitude of differences depend on the need indicators used. Allocations from models using survey data lacked face validity for some provinces. These results underscore the need to better understand the biases that may result from the use of survey data for resource allocation. abstract_id: PUBMED:35260956 Quality Indicators for High-Need Patients: a Systematic Review. Background: Healthcare systems are increasingly implementing programs for high-need patients, who often have multiple chronic conditions and complex social situations. Little, however, is known about quality indicators that might guide healthcare organizations and providers in improving care for high-need patients. We sought to conduct a systematic review to identify potential quality indicators for high-need patients. Methods: This systematic review (CRD42020215917) searched PubMed, CINAHL, and EMBASE; the guideline clearing houses ECRI and GIN; and Google Scholar. We included publications suggesting, evaluating, and utilizing indicators to assess quality of care for high-need patients. Critical appraisal of the indicators addressed the development process, endorsement and adoption, and characteristics, such as feasibility.
We standardized indicators by patient population subgroups to facilitate comparisons across different indicator groups. Results: The search identified 6964 citations. Of these, 1382 publications were obtained as full text, and 53 studies met inclusion criteria. We identified over 1700 quality indicators across studies. Quality indicator characteristics varied widely. The scope of the selected indicators ranged from detailed criteria (e.g., "annual eye exam") to very broad categories (e.g., "care coordination"). Some publications suggested disease condition-specific indicators (e.g., diabetes), some used condition-independent criteria (e.g., "documentation of the medication list in the medical record available to all care agencies"), and some publications used a mixture of indicator types. Discussion: We identified and evaluated existing quality indicators for a complex, heterogeneous patient group. Although some quality indicators were not disease-specific, we found very few that accounted for social determinants of health and behavioral factors. More research is needed to develop quality indicators that address patient risk factors. abstract_id: PUBMED:25244959 Examining the need assessment process by identifying the need profiles of elderly care recipients in the Ten-year Long-Term Care Project (TLTCP) of Taiwan. Objective: To deal with the increasing long-term care (LTC) needs of elderly people in Taiwan, the government launched the Ten-year Long-term Care Project (TLTCP) in 2007, and through the care management system, care plans for those in need were distributed and implemented by care managers according to the single assessment process. Given the emphasis on linking the right need assessment to the care plan, this study aimed to explore the need profiles of LTC recipients with regard to their health indicators to serve as a validity check on the identified dependency levels and care plans in the current care management system. Design: A model based on latent class analysis (LCA) was used for dealing with the issue of health heterogeneity. LCA provides an empirical method that examines the interrelationships among health indicators and characterizes the underlying set of mutually exclusive latent classes that account for the observed indicators. The analysis included a total of 2901 elderly care recipients in the LTC dataset from a southern city, 1 of the 5 major metropolitan areas in Taiwan. The identified dependency levels of the samples and their care plans in need assessment were compared and discussed. Results: Four need profiles were identified in the LTC dataset. Apart from the low-dependency (LD; 32.95%) and moderate-dependency (MD; 17.48%) groups, 2 groups were identified among the high-dependency levels: the severe physical and psychological dependency (SPP; 26.37%) group and the comorbidities and severe dependency (CSD; 23.20%) group, which together were broadly identified as high dependency (HD) by care managers in the LTC dataset. In addition, the CSD group currently costs more for their care plans on average in LTC services (NT$277,081.15, approximately 9200 USD) than the SPP group (NT$244,084.21) and the other groups. Conclusion: Need assessment is a key to success in care management in LTC.
The results of this study showed the importance of focusing on multifaceted indicators in need assessments, especially mental and social health indicators, by improving the unified assessment process to sensitively detect those with various needs and then link them to the right care plan. abstract_id: PUBMED:34243784 A review of appropriate indicators for need-based financial resource allocation in health systems. Background: Optimal, need-based, and equitable allocation of financial resources is one of the most important concerns of health systems worldwide. Fulfilling this goal requires considering various criteria when allocating resources. The present study was conducted to identify the need indicators used to allocate health resources in different countries worldwide. Methods: A systematic review was conducted on all published articles and reports on the need-based allocation of health financial resources in the English language from 1990 to 2020 in databases including PubMed, Cochrane, and Scopus, as well as in Persian-language databases including Magiran and SID, and the Google and Google Scholar search engines. After performing different stages of screening, appropriate studies were identified and their information was extracted independently by two people and then checked by a third person. The extracted data were finally analyzed by the content analysis method using MAXQDA 10 software. Result: This search yielded 823 studies, of which 29 were included for the final review. The findings indicated that many need-based resource allocation formulas attempt to deal with health care needs using some weighting methods for individuals. In this regard, the most commonly used indicators were as follows: age, gender, socio-economic status or deprivation, ethnicity, standardized mortality ratio (SMR), modified health indicators (disease consequences, self-assessed health, and disability), geographical area/place of residence (rural versus urban), cross-boundary flows, cost of services, and donations. Conclusion: The indicators used in allocating the health systems' financial resources in each country should be designed to be simple and transparent and in accordance with the moral norms of that society. Moreover, they should be a good representation of the health needs of people in different geographical areas of that country. In addition, their related data should be available to an acceptable extent. abstract_id: PUBMED:32127233 Information need, learning need and educational need, definitions and measurements: A systematic review. Background: There are various definitions and tools for Information Need (IN), Learning Need (LN) and Educational Need (EN), which are used interchangeably in patient education. Objective: To clarify the definitions of IN, LN and EN, and to compare the item generation methods and dimensions of the available tools so that they can be used appropriately. Methods: We searched the PubMed/Scopus/Embase/Science Direct databases from 1960 to 2019. Two reviewers selected studies and extracted data independently. Results: We identified 22 tools, comprising 14 IN, 5 LN and 3 EN tools. The Patient Learning Needs Scale (PLNS) was the only general tool. The content of the tools includes anatomy, physiology, diagnostic tests, symptoms, treatments, medications, diet, activity and self-care. No difference was found between EN/LN/IN tools in terms of item generation and dimensions according to their concept definitions.
Seven tools assessed all 7 domains of patient education components. Conclusion: EN, LN and IN are different concepts, but their use in the tools is not based on their definitions, and they have been used interchangeably. IN and LN tools were more complete and comprehensive. Practical Implications: The findings of this review can help researchers and clinicians to use EN, LN and IN tools more appropriately. abstract_id: PUBMED:38319884 Rethinking students' psychological need states: The unique role of need unfulfilment to understanding ill-being in academic settings. Prior research has shown that students face various stressors which can affect their psychological health. The present study examines the role of students' psychological need states in explaining their burnout and dropout intentions. More precisely, relying on recent findings from Self-Determination Theory research, we examined whether students' psychological need unfulfilment could help explain their ill-being over and above need satisfaction and frustration. To this end, we also tested the validity of a tripartite instrument for assessing these need states in academic settings (Psychological Need States in Education-Scale [PNSE-S]). A study was conducted among two samples of high school (N = 473; Sample 1) and college (N = 1143; Sample 2) students. Results supported the construct validity of the 35-item PNSE-S in both samples by showing that students' relatedness, autonomy, and competence unfulfilment can be modelled as distinct need states alongside the frustration and satisfaction of those three needs. Moreover, these different need states displayed a well-differentiated pattern of associations with various facets of student burnout and with dropout intentions. Results also showed the critical role of psychological need unfulfilment in explaining students' ill-being. abstract_id: PUBMED:29016252 Nursing Facility and Home and Community Based Service Need Criteria in the United States. Over the past two decades states have expanded the delivery of long term care from institutions to the home and community. Although state Medicaid need criteria for the home and community based services (HCBS) waiver program must be equivalent to nursing facility (NF) criteria, other screening procedures for these services are not specified. This study examined variations in need criteria, methods for determining need, and assessment forms for NF and HCBS across the states, and identified whether these procedures were more stringent for HCBS than for NFs. The NF minimum need criteria were most often (31 states) based on a combination of nursing, medical, functional, or other psychosocial criteria; 10 states used nursing criteria, and 10 states used limitations in functional activities criteria. The method used for determining need for NF was based on explicit criteria in 25 states and point systems in seven states, while 19 states used guidelines that allowed for judgment by those conducting the assessment. The average length of a NF assessment form was six and one-half pages, which was four pages shorter than the average HCBS form. Within some states the HCBS waivers have more stringent need criteria, more comprehensive need determination, and longer assessment forms than NFs. Wide variability in screening procedures creates potential inequities in access to LTC services across the states. Within states, more restrictive screening procedures for HCBS than for NF may limit access to community based long term care.
abstract_id: PUBMED:34900021 Utilizing microblogs for optimized real-time resource allocation in post-disaster scenarios. In the aftermath of a disaster event, it is of utmost importance to ensure efficient allocation of emergency resources (e.g. food, water, shelter, medicines) to locations where the resources are needed (need-locations). There are several challenges in achieving this goal, including the identification of resource-needs and resource-availabilities in real time, and deciding a policy for allocating the available resources from where they are available (availability-locations) to the need-locations. In recent years, social media, and especially microblogging sites such as Twitter, have emerged as important sources of real-time information on disasters. There have been some attempts to identify resource-needs and resource-availabilities from microblogging sites. However, there has not been much work on having a policy for optimized and real-time resource allocation based on the information obtained from microblogs. Specifically, the allocation of critical resources must be done in an optimal way by understanding the utility of emergency resources at various need-locations at a given point of time. This paper attempts to develop such a utility-driven model for optimized resource allocation in a post-disaster scenario, based on information extracted from microblogs in real time. Experiments show that the proposed model achieves much better allocation of resources than baseline models: the allocation by the proposed model is not only more efficient in terms of quickly bringing down resource-deficits at various need-locations, but also fairer in distributing the available resources among the various need-locations. abstract_id: PUBMED:31001493 Agreement and association between normative and subjective orthodontic treatment need using the Index of Orthodontic Treatment Need. Aims: To determine the association and level of agreement between young adults' perception of orthodontic treatment need (subjective need) and the orthodontists' assessment of treatment need (normative objective need). Methodology: For this study, 670 students [280 males and 390 females; mean age (standard deviation) of 15.32 (1.81) years] were selected from public and private schools in different demographic areas of Jeddah city, Saudi Arabia, and divided into two age groups (12-15 years and 16-19 years). All the participants were examined, and the Index of Orthodontic Treatment Need (IOTN) components [dental health component (DHC) and aesthetic component (AC)] were recorded. Results: Kappa statistics showed a statistically significant but fair agreement between clinician AC (CAC) and student AC (SAC) assessments in both age groups (κ = 0.343 and 0.334, respectively; P < 0.001), whereas Spearman's correlation coefficient showed a statistically significant but moderate association (r = 0.487 and 0.517, respectively; P < 0.001). The degrees of agreement were 76.4% and 76.7% at the no-need and mild-need levels of treatment, respectively. There was a statistically significant but weak association between the subjective and normative needs (SAC and IOTN-DHC) in both age groups (r = 0.336 and 0.360, respectively; P < 0.001). However, the degrees of agreement were 58.9% and 61.5% at the no-need and mild-need levels of treatment, respectively.
Conclusion: A significant but weak positive association was found between the normative and subjective orthodontic treatment needs, indicating a lack of understanding of the nature of malocclusion and its consequences. Thus, promoting further knowledge and awareness of malocclusion is indicated. abstract_id: PUBMED:36766987 Validating the German Short Basic Psychological Need Satisfaction and Frustration Scale in Individuals with Depression. Satisfaction and frustration of the needs for autonomy, competence, and relatedness, as assessed with the 24-item Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), have been found to be crucial indicators of individuals' psychological health. To increase the usability of this scale within a clinical and health services research context, we aimed to validate a German short version (12 items) of this scale in individuals with depression, including the examination of the relations from need frustration and need satisfaction to ill-being and quality of life (QOL). This cross-sectional study involved 344 adults diagnosed with depression (Mage (SD) = 47.5 years (11.1); 71.8% females). Confirmatory factor analyses indicated that the short version of the BPNSFS was not only reliable, but also fitted a six-factor structure (i.e., satisfaction/frustration × type of need). Subsequent structural equation modeling showed that need frustration related positively to indicators of ill-being and negatively to QOL. Surprisingly, need satisfaction did not predict differences in ill-being or QOL. The short form of the BPNSFS represents a practical instrument to measure need satisfaction and frustration in people with depression. Further, the results support recent evidence on the particular importance of need frustration in the prediction of psychopathology. Answer: Yes, different need indicators can lead to different results in need-based resource allocation. A study assessing the consistency and plausibility of estimates of per capita relative need for health services across Canadian provinces found that the direction and magnitude of differences in need depend on the need indicators used. Different approaches, incorporating demographics alone or combined with socioeconomic and health status, yielded inconsistent results, and allocations from models using survey data lacked face validity for some provinces (PUBMED:19622159). Similarly, a systematic review aimed at identifying potential quality indicators for high-need patients revealed a wide variety of indicators, ranging from detailed criteria to broad categories, and few accounted for social determinants of health and behavioral factors (PUBMED:35260956). This suggests that the choice of indicators can significantly influence the assessment of quality of care for high-need patients. In Taiwan, a study examining the need assessment process in elderly care recipients found that focusing on multifaceted indicators, especially mental and social health indicators, is important for sensitively detecting various needs and linking them to the appropriate care plan (PUBMED:25244959). A review of appropriate indicators for need-based financial resource allocation in health systems highlighted that the indicators used should be simple, transparent, and representative of the health needs of people in different geographical areas, and their related data should be available to an acceptable extent (PUBMED:34243784).
These studies collectively indicate that the selection of need indicators is critical and can significantly impact the outcomes of resource allocation, quality of care assessments, and the effectiveness of care management systems. The choice of indicators must be carefully considered to ensure that they accurately represent the needs of the population and lead to equitable and effective allocation of resources.
Instruction: Are we measuring the right intraocular pressure after excimer laser photorefractive laser keratoplasty in myopia? Abstracts: abstract_id: PUBMED:7609376 Are we measuring the right intraocular pressure after excimer laser photorefractive laser keratoplasty in myopia? Background: It is possible that the traditional method to determine the intraocular pressure after excimer-laser PRK is inaccurate. Measuring the pressure in the temporal part of the cornea might give the true values. Methods: Intraocular pressure was measured with a Goldmann Applanation Tonometer and with the Tonopen, before and after PRK for myopia, in the central and in the temporal parts of the cornea. The paired Student t-test was used for statistical analysis. Results: The results of central and temporal measurements before treatment were identical with both instruments. After PRK, central values were 2 to 3 mm Hg lower than temporal values when measured with a Goldmann Tonometer, and about 2 mm Hg lower when measured with the Tonopen. The differences were highly significant (p < 0.0001 and p = 0.004, respectively). Conclusions: The intraocular pressure measured in the usual manner after excimer-laser PRK is lower than the temporally measured pressure. These differences could be caused by absence of Bowman's membrane, thinning of the cornea and/or changes in its topography. abstract_id: PUBMED:7530099 Excimer laser in situ keratomileusis and photorefractive keratectomy for correction of high myopia. Background: The purpose of this research was to study the visual outcome of excimer laser photorefractive keratectomy and laser in situ keratomileusis (LASIK) for the correction of moderate and high myopia. Methods: Twenty partially-sighted eyes of 20 patients were divided into two groups, LASIK and photorefractive keratectomy. Ten eyes underwent LASIK and the other 10 photorefractive keratectomy. Follow up was at 1, 3, 6, and 12 months. The LASIK technique included a nasally based, 150 microns thick, 8.0 x 9.0 mm diameter, truncated, disc-shaped corneal flap created with a microkeratome, and ablation of the stroma with a 193-nanometer ArF excimer laser. The flap was returned to its original position and held in place by apposition. The photorefractive keratectomy technique included mechanical removal of the epithelium and ablation of the stroma with a 193-nanometer ArF excimer laser. Results: LASIK series: One eye had a ruptured globe during the second postoperative month and was excluded from the study. The preoperative spherical equivalent refraction ranged from -10.62 to -25.87 diopters (D). The attempted correction ranged from -8.00 to -16.00 D. Postoperative refraction and corneal topography stabilized between 4 and 12 weeks. Spectacle-corrected visual acuity was within 1 Snellen line of the preoperative value in all eyes. The refraction in six eyes (66.6%) was within +/- 1.00 D of the intended correction, and in eight eyes (88.8%) was within +/- 2.00 D at 12 months. The mean attempted correction (11.40 +/- 2.60 D) was close to the mean achieved correction at 12 months (11.96 +/- 3.10 D). The mean postoperative refractive astigmatism (1.50 +/- 0.97; range, 0.25 to 3.50 D) was close to the preoperative astigmatism (1.70 +/- 1.15; range, 0 to 3.75 D). Endothelial cell density at 12 months showed an average cell loss of 8.67%. All eyes showed a clear interface. Photorefractive keratectomy series: The preoperative spherical equivalent refraction ranged from -10.75 to -23.12 D.
The attempted correction ranged from -8.80 to -17.60 D. Postoperative refraction showed regression throughout the follow-up period, and corneal topography did not stabilize. Spectacle-corrected visual acuity was within 1 Snellen line in eight eyes. Two eyes lost 2 and 3 Snellen lines. One eye was within +/- 1.00 D, and three eyes (30%) were within +/- 2.00 D of the intended correction at 12 months. The achieved correction mean (7.17 +/- 5.29 D) was 61% of the attempted mean (11.72 +/- 2.81 D) at 12 months. The postoperative refractive astigmatism (1.80 +/- 0.95; range, 0.50 to 4.00 D) was very close to the preoperative (1.90 +/- 1.33; range, 0 to 5.00 D). Endothelial cell density showed an average of 10.56% cell loss at 12 months. The mean haze at 12 months was 1.2 (0 to 4 scale). Conclusion: LASIK, although more complicated because of the use of a microkeratome, was more effective than photorefractive keratectomy in higher myopes. LASIK created less corneal haze. The refraction was more stable with LASIK in the correction of high myopia. Its predictability was three times that of PRK. abstract_id: PUBMED:10445189 Initial results of photorefractive keratectomy and laser in situ keratomileusis performed by a single surgeon. Purpose: To determine the safety and efficacy of photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK) early in a surgeon's experience. Setting: Ophthalmology Department, John A. Moran Eye Center, Salt Lake City, Utah, USA. Methods: Between October 1995 and April 1997, a retrospective analysis was performed of 200 eyes in 128 patients who had PRK or LASIK with a follow-up of at least 3 months. The mean age was 38.1 years +/- 10.4 (SD) in the PRK group and 42.3 +/- 10.1 years in the LASIK group. There were 36 women and 30 men in the former and 32 women and 30 men in the latter. Photorefractive keratectomy, PRK/astigmatic keratotomy (AK), LASIK, or LASIK/AK was performed with the Summit Omnimed excimer laser. Preoperative evaluation included uncorrected visual acuity (UCVA), best spectacle-corrected visual acuity (BSCVA), cycloplegic refraction, tonometry, and computerized videokeratography. At 1, 3, and 6 months, UCVA, BSCVA, mean spherical equivalent (SE), astigmatism, corneal haze, subjective vision (day, night, glare, and halo), and complications were measured. Results: Mean SE was 0.16 +/- 0.82 diopter (D) in the PRK group and -0.09 +/- 0.63 D in the LASIK group at 1 month (P < .05), -0.18 +/- 0.66 D and -0.16 +/- 0.58 D, respectively, at 3 months, and -0.33 +/- 0.73 D and -0.09 +/- 0.62 D, respectively, at 6 months. Achieved correction was similar in the 2 groups. Mean BSCVA was 22.8 +/- 6.0 in the PRK group and 22.2 +/- 5.3 in the LASIK group at 1 month, 21.1 +/- 6.0 and 21.1 +/- 3.1, respectively, at 3 months, and 19.8 +/- 3.5 and 22.9 +/- 5.2, respectively, at 6 months (P < .005). Mean surface regularity index was 0.68 +/- 0.40 in the PRK group and 0.86 +/- 0.41 in the LASIK group at 1 month, 0.54 +/- 0.26 and 0.72 +/- 0.37, respectively, at 3 months (P < .05), and 0.49 +/- 0.27 and 0.84 +/- 0.42, respectively, at 6 months. Mean surface asymmetry index was 0.71 +/- 0.43 in the PRK group and 0.55 +/- 0.21 in the LASIK group at 1 month (P < .05), 0.53 +/- 0.24 and 0.51 +/- 0.23, respectively, at 3 months, and 0.46 +/- 0.19 and 0.64 +/- 0.54, respectively, at 6 months. The subjective parameters between the groups were significantly different only at 6 months; patients in the PRK group reported better day and night vision. 
Conclusion: With careful preparation and proper training, PRK and LASIK appear to be safe and efficacious even during the surgeon's early learning phases. abstract_id: PUBMED:10445714 Laser in situ keratomileusis for myopic anisometropia in children. Purpose: To evaluate the visual and refractive results of laser in situ keratomileusis (LASIK) in pediatric patients with myopic anisometropia and amblyopia, and to assess the predictability, safety, and efficacy of the procedure in children. Methods: Fourteen patients aged 7 to 12 years with myopic anisometropia and amblyopia had LASIK in the more myopic eye (14 eyes) using the Chiron Automatic Corneal Shaper and the Chiron-Technolas Keracor 116 excimer laser. Preoperative spherical equivalent manifest refraction ranged from -4.62 to -12.50 D (mean, -7.87 D) and spectacle-corrected visual acuity ranged from 20/40 to 20/100 (median, 20/50). All patients completed a minimum follow-up of 12 months. Results: One year after LASIK, spherical equivalent manifest refraction ranged from 0 to -1.50 D (mean, -0.55 D). Spectacle-corrected visual acuity improved in all eyes (range, 20/20 to 20/40; median, 20/25). Six eyes (42.9%) had a postoperative spectacle-corrected visual acuity of 20/20. Uncorrected visual acuity was 20/40 or better in 10 eyes (71.4%) (range, 20/20 to 20/70; median, 20/30). Uncorrected visual acuity exceeded preoperative spectacle-corrected visual acuity in all eyes by an average of 2 lines. There were no significant complications. Conclusion: LASIK was effective for correction of myopic anisometropia in this small group of children and reversed refractive amblyopia. LASIK in these children was safe, predictable, and provided good visual results. abstract_id: PUBMED:9159682 Laser in situ keratomileusis to correct high myopia. Purpose: To evaluate the effectiveness, predictability, and safety of laser in situ keratomileusis (LASIK) in 143 eyes with myopia from 8.00 to 20.00 diopters (D). Setting: Alicante Institute of Ophthalmology, University of Alicante School of Medicine, Alicante, Spain. Methods: This prospective study comprised 143 eyes (94 patients) that had LASIK with the Chiron Automated Corneal Shaper and the VISX 20/20 excimer laser using a multizone profile and a sutureless hinged corneal flap technique. Results: Uncorrected visual acuity was 20/40 or better in 45.0% of eyes 3 months postoperatively and in 46.4% at 6 months. Best corrected visual acuity (BCVA) improved by 0.07 at 3 and 6 months and was stable after 3 months. Mean spherical equivalent was -13.19 diopters (D) +/- 2.89 (SD) preoperatively and +0.51 +/- 1.63 D at 3 months and +0.18 +/- 1.66 D at 6 months postoperatively. At 3 months, spherical equivalent was within 1.00 D of emmetropia in 57.5% of all eyes, 71.0% of eyes with a baseline refraction from -8.00 to -11.99 D (n = 59), 44.4% with a baseline refraction from -12.00 to -15.99 D (n = 54), and 53.0% of eyes with a baseline refraction from -16.00 to -20.00 D (n = 30). The respective 6 month percentages were 60.0, 72.4, 46.0, and 50.0%. The regression of effect was similar in all groups (approximately 0.50 D) between 1 and 3 months, although the high myopia group had further regression. Significant corneal steepening and an increase in corneal thickness occurred between 1 and 3 months. Flap thickness was always less than predicted with both the 130 microns plates, and achieved laser ablation was deeper than programmed.
The relationships between postoperative refraction and preoperative keratometry, and between postoperative refraction and the difference in achieved versus programmed ablation, were significant. Complications at 6 months included epithelial ingrowth, corneal flap melting, decentered ablation, and irregular astigmatism with loss of BCVA, although none was vision threatening. Conclusion: In this study, LASIK was effective and predictable in the correction of high myopia but was more accurate for myopia up to 12.00 D. Current surgical algorithms must be modified to improve predictability in higher corrections. Longer follow-up is necessary to evaluate the long-term incidence of vision-threatening complications. abstract_id: PUBMED:22932325 Predictability of the corneal flap creation with the femtosecond laser in LASIK Objective: To observe the predictability of corneal flap creation with the FEMTO LDV femtosecond laser and to perform a preliminary analysis of the factors correlated with flap thickness and diameter. Methods: This was a case series. 260 eyes of 130 consecutive patients were treated with the FEMTO LDV. The eyes were assigned to two groups according to intended flap thickness, 110 µm (208 eyes) and 90 µm (52 eyes). Intended flap diameter varied from 8.5 to 9.5 mm. Differences between achieved and intended flap diameter, and between achieved and intended flap thickness, were analyzed. The data were analyzed with SPSS to derive a multiple stepwise regression formula that could express their quantitative relationship. Results: The 90 µm flap group had an average flap thickness of (95.12 ± 7.65) µm, while for the 110 µm group the average flap thickness was (104.81 ± 3.09) µm. The difference between right and left eyes was not statistically significant (t(110 µm) = -1.223, t(90 µm) = -1.343, P > 0.05). Corneal flap thickness was inversely correlated with flap diameter (r(110 µm) = -0.143, r(90 µm) = -0.315, P < 0.05), but was not related to preoperative patient age, corneal thickness, keratometric value K or intraocular pressure (r(110 µm) = -0.160, 0.054, -0.011, -0.363; r(90 µm) = 0.024, 0.074, -0.212, -0.434, all P > 0.05). Corneal flap diameter was positively correlated with preoperative corneal keratometric value K and thickness (P < 0.001, P < 0.05). Multiple stepwise regression analysis showed flap diameter was an influencing factor for flap thickness. Preoperative corneal keratometric value K and thickness were influencing factors for flap diameter. Conclusions: LASIK flap creation with the FEMTO LDV laser has relatively good predictability. Flap diameter is an influencing factor for flap thickness. abstract_id: PUBMED:36769806 Clinical Evaluation of the Efficacy of Femtosecond Laser-Assisted Anterior Lamellar Keratoplasty. Background: To evaluate the clinical outcome, efficacy, and safety of femtosecond laser-assisted anterior lamellar keratoplasty. Material And Methods: In this prospective study, 21 males and 10 females aged from 15 to 62 years (mean 38.5) with different pathologies of the anterior corneal layers were enrolled for anterior lamellar keratoplasty using the VisuMax femtosecond laser (Carl Zeiss, Germany). All patients were examined for uncorrected (UCVA) and best-corrected (BCVA) distance and near visual acuity, astigmatism, endothelial cell density, corneal thickness, and intraocular pressure. These examinations were performed before transplantation, at hospital discharge, and after 3 and 12 months. The mean follow-up time was 65.36 ± 28.54 months.
Results: A statistically significant improvement of both UCVA and BCVA for distance and near vision was registered. BCVA improved from 0.11 preoperatively to 0.168 (p = 0.03), 0.267 (p < 0.01), and 0.472 (p < 0.01) at hospital discharge, three months, and 12 months, respectively. There were no statistically significant differences between preoperative and postoperative values of astigmatism, intraocular pressure, endothelial cell density, and corneal pachymetry. Six patients (19%) had a graft failure, with a rate of 33% at 26 months. Conclusions: Femtosecond laser-assisted anterior lamellar keratoplasty is a safe and effective surgical method, providing satisfactory graft survival rates. abstract_id: PUBMED:12899478 Interface fluid after laser in situ keratomileusis. Purpose: To describe a case of interface fluid after laser in situ keratomileusis (LASIK) and review the literature on this rare complication after LASIK. Methods: We present a case report and literature review. Articles for this review were chosen from electronic database and manual literature searches. MEDLINE searches were made from 1990 to April 2002, using the key words "interface fluid" and "LASIK." Results: A 40-year-old man had uneventful LASIK for residual refractive error from previous penetrating keratoplasty in his right eye. Diffuse lamellar keratitis began 1 day postoperatively. Topical corticosteroids were administered. Six weeks after LASIK, a layer of interface fluid developed. Intraocular pressure was 9 mmHg when measured centrally by Goldmann applanation tonometry and 30 mmHg by Tono-pen tonometry. The interface fluid resolved with antiglaucoma agents and corticosteroids combined with cyclosporine. Conclusions: This case, along with other reported cases, demonstrates the clinical features of interface fluid after LASIK. abstract_id: PUBMED:10404381 Interface fluid associated with diffuse lamellar keratitis and epithelial ingrowth after laser in situ keratomileusis. We report a case in which diffuse interface keratitis began 1 week after bilateral uneventful laser in situ keratomileusis (LASIK). A layer of fluid in the interface with epithelial ingrowth was noted in the left eye 20 days postoperatively. The same complication occurred in the right eye 5 months after LASIK. Dry-eye syndrome and steroid-induced intraocular pressure elevation occurred in this patient with pre-existing ocular hypertension. A long course of interface inflammation was resistant to topical steroids. Surgical removal of the epithelial ingrowth and drainage of the fluid, combined with medical treatment, resulted in resolution of the inflammation. The cytopathologic examination of the fluid showed epithelial cells without signs of inflammation. The clinical features of this case represent a new complication of LASIK. abstract_id: PUBMED:8784623 Experience during the learning curve of laser in situ keratomileusis. Purpose: To identify pitfalls and offer hints on achieving a successful outcome during the early laser in situ keratomileusis (LASIK) learning process. Setting: Gimbel Eye Centre, Calgary, Alberta, Canada. Methods: This was a retrospective review of the preoperative planning, surgical procedure, intraoperative and postoperative problems, and early postoperative anatomic and refractive results in the first 73 eyes that had LASIK. Results: Intraoperative complications included failure of the keratome to make a cut, excessively thin cap, repositioning difficulty, and inadequate intraocular pressure elevation.
Early postoperative complications included excessive central and peripheral wrinkling of the cap, peripheral lipid deposits, and central edema of the cap. One month postoperatively, mean spherical equivalent refraction was -0.90 diopters (D) (range +1.75 to -6.00 D), and 45 eyes had a best corrected visual acuity between 20/15 and 20/40. Conclusion: Our retrospective review of the problems experienced during the early LASIK learning process should help novice lamellar refractive surgeons avoid such problems and shorten the learning curve. Answer: After excimer laser photorefractive keratectomy (PRK) for myopia, it is possible that the traditional method to determine intraocular pressure (IOP) is inaccurate. A study measured IOP with a Goldmann Applanation Tonometer and with the Tonopen, both before and after PRK, in the central and temporal parts of the cornea. The results indicated that after PRK, central IOP values were 2 to 3 mm Hg lower than temporal values when measured with a Goldmann Tonometer, and about 2 mm Hg lower when measured with the Tonopen. These differences were statistically significant (PUBMED:7609376). The study concluded that the IOP measured in the usual manner after excimer-laser PRK is lower than the temporally measured pressure. The discrepancies could be due to the absence of Bowman's membrane, thinning of the cornea, and/or changes in corneal topography following the procedure (PUBMED:7609376). This suggests that traditional central measurements may not provide the true IOP values after PRK for myopia, and that measuring the pressure in the temporal part of the cornea might yield more accurate results.
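For readers who want to see the shape of the comparison behind that answer: each eye is measured at two corneal locations, so the analysis is a paired one, testing the per-eye temporal-central difference against zero. Below is a minimal Python sketch of that computation; the IOP readings are invented for illustration and are not the study's data (PUBMED:7609376 reports only summary differences).

# Illustrative only: hypothetical paired post-PRK IOP readings (mmHg),
# central vs. temporal Goldmann applanation, for 8 made-up eyes.
from scipy import stats

central = [10.5, 11.0, 9.5, 12.0, 10.0, 11.5, 9.0, 10.5]
temporal = [13.0, 13.5, 12.0, 14.5, 12.5, 13.0, 11.5, 13.5]

# Per-eye difference (temporal - central); the study reported central
# readings running about 2-3 mmHg below temporal ones after ablation.
diffs = [t - c for c, t in zip(central, temporal)]
mean_diff = sum(diffs) / len(diffs)

# Paired t-test, since the same eyes are measured at both locations.
t_stat, p_value = stats.ttest_rel(temporal, central)
print(f"mean difference = {mean_diff:.2f} mmHg, t = {t_stat:.2f}, p = {p_value:.4f}")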
Instruction: Do adolescents appreciate the risks of smoking? Abstracts: abstract_id: PUBMED:11429301 Do adolescents appreciate the risks of smoking? Evidence from a national survey. Purpose: To evaluate whether adolescents understand the risks of smoking when they decide to start. Estimates of objective risks that can be compared with epidemiologic evidence suggest that adolescents overstate the risks. Ratings of personal risk suggest the opposite. Methods: A nationally representative telephone survey of 300 14- to 22-year-old nonsmokers and 300 14- to 22-year-old smokers was conducted. Respondents estimated both objective and personal risks of smoking, and smokers reported their plans to quit. Objective estimates were compared with both epidemiologic evidence and personal ratings of risk. Regression procedures were used to assess relationships between different estimates of risk and between risk estimates and plans to quit. Results: Two of the three objective estimates of risk revealed high proportions of misunderstanding. Over 40% of smokers and 25% of nonsmokers underestimated, or did not know, the likelihood of smoking-related death, and over 40% did not know, or underestimated, the number of years of life lost owing to smoking. Although young people overestimated lung cancer risk relative to objective data, these estimates are inflated by underestimation of the fatality of lung cancer and by overlap with other illnesses not included in objective risk measures. Young smokers exhibited optimism about personal risks of smoking regardless of their perceptions of objective risk. Both objective and personal measures of risk predicted plans to quit. Conclusions: Because perceptions of both personal and objective risks are related to plans to quit, antismoking messages should include evidence about risk, particularly to the individual smoker. abstract_id: PUBMED:15466070 Adolescents' beliefs about the risks involved in smoking "light" cigarettes. Background: Light cigarettes have been marketed by the tobacco industry as being a healthier smoking choice, a safe alternative to cessation, and a first step toward quitting smoking altogether. Research, however, has failed to show a reduction in smoking-related health risks, an increase in rates of smoking cessation, a decrease in the amount of carbon monoxide or tar released, or a reduction in the rates of cardiovascular disease or lung cancer associated with light cigarette use, compared with regular cigarette use. Nevertheless, more than one-half of adolescent smokers in the United States smoke light cigarettes. This study is the first to investigate adolescents' perception of the risks associated with smoking light cigarettes, as well as adolescents' attitudes and knowledge about the delivery of tar and nicotine, health risks, social effects, addiction potential, and ease of cessation with light cigarettes, compared with regular cigarettes. Design: Participants were 267 adolescents (mean age: 14.0 years) who completed a self-administered questionnaire during class time. After reading scenarios in which they imagined that they smoked regular or light cigarettes, participants estimated the chances that they would personally experience 7 smoking-related health risks and 3 addiction risks. Participants also responded to 14 items concerning their attitudes and knowledge about light cigarettes versus regular cigarettes. 
Results: Participants thought that they would be significantly less likely to get lung cancer, have a heart attack, die from a smoking-related disease, get a bad cough, have trouble breathing, and get wrinkles when smoking light cigarettes, compared with regular cigarettes, for the rest of their lives. Furthermore, when participants were asked how long it would take to become addicted to the 2 cigarette types, they thought it would take significantly longer to become addicted to light versus regular cigarettes. Adolescents also thought that their chances of being able to quit smoking were higher with light versus regular cigarettes. Similarly, when participants were asked how easy it would be to quit smoking the 2 cigarette types, they thought it would be significantly easier for them to quit smoking light cigarettes than regular cigarettes. Adolescents agreed or strongly agreed that regular cigarettes deliver more tar than light cigarettes and that light cigarettes deliver less nicotine than regular cigarettes. Conclusions: Overall, the results of this study show that adolescents hold misperceptions in both their personal risk estimates and their general attitudes about the health risks, addictive properties, and ease of cessation associated with light cigarettes. With a variety of light and ultralight cigarettes on the market, adolescents are led to think that there is a progression of safety levels to choose from when deciding which cigarettes to smoke. This illusion of control over health outcomes contributes to an underestimation of risks associated with smoking light cigarettes and supports these misperceptions. These results are of concern, given evidence suggesting that, if adolescents think they are less vulnerable to smoking-related health risks (ie, lung cancer), then they are more likely to initiate smoking. Furthermore, there is evidence that adolescents are not fully aware of the addictive nature of cigarettes and therefore think that they can experiment with smoking during adolescence without becoming addicted or experiencing any health consequences. The data presented here support concerns regarding smoking addiction; adolescents might be even more inclined to smoke light cigarettes to delay addiction. Without correct information about light cigarettes, adolescents are unable to make informed decisions about their smoking behaviors. The findings presented here strongly suggest that health care practitioners need to talk to their adolescent clients not only about the overall risks of smoking but also about the specific risks associated with smoking light cigarettes and other tobacco varieties, including the potential for addiction and long-term health consequences. Information shared with adolescents about light cigarettes, both individually by health care practitioners and at the population level via counter-advertising campaigns, may be successful in changing current misperceptions, and ultimately light cigarette smoking patterns, among youth. abstract_id: PUBMED:15313096 Perceived risks and benefits of smoking: differences among adolescents with different smoking experiences and intentions. Background: Explanations of adolescent smoking often make reference to adolescents' beliefs that they are invulnerable to harm. However, empirical examination of whether adolescents do acknowledge risks has been limited. Further, few studies have considered perceived benefits in adolescents' behavioral decisions.
This study examined perceived smoking-related physical and social risks and benefits between adolescents who have vs. have not smoked and do vs. do not intend to smoke. Methods: Three hundred and ninety-five students (mean age = 14.0) completed a survey concerning their smoking experiences, intentions, and perceived risks and benefits of smoking. Results: Adolescent smokers and those who intend to smoke estimated their chance of experiencing a smoking-related negative outcome as less likely than did nonsmokers and non-intenders. Smokers and intenders also reported the chance of addiction as less likely than did others. In contrast, adolescent smokers and intenders perceived the chance of experiencing a smoking-related benefit as more likely than did nonsmokers and non-intenders. Conclusions: The data suggest that rather than solely focusing on health risks as a way to deter adolescent smoking, the role of perceived social risks and benefits in adolescents' smoking may be an additional critical focus for intervention. In addition, efforts should be made to increase adolescents' awareness of the addictive nature of cigarettes. abstract_id: PUBMED:26187717 Smoking Cessation in Adolescents: targeted approaches that work. Smoking cessation in adolescents can be considered in a developmental context to enable the clinician to individualise the appropriate assessment and management of the young person they are seeing, whether in a primary or tertiary care setting. Adolescence is a time of rapid neurocognitive and hormonal change, with these factors affected by personality and behavioural factors as well as family, cultural and psychosocial context. Adolescents are uniquely vulnerable to smoking initiation and nicotine addiction throughout these years. Increased awareness of the risks of smoking and using opportunities to assess and intervene regarding smoking cessation are integral to clinical practice for all clinicians seeing young people. This review will discuss the demographics of adolescent smoking, risk factors, assessing smoking and nicotine addiction, the importance of brief interventions, and the evidence base for appropriate interventions, particularly in high-risk groups, and will emphasise innovative training for health professionals in adolescent smoking cessation. abstract_id: PUBMED:27060737 Adolescents' knowledge and opinions about smoking: a qualitative study from the Jhaukhel-Duwakot Health Demographic Surveillance Site, Bhaktapur District, Nepal. Background: The use of tobacco products among adolescents in Southeast Asia represents a major public health burden. Two out of ten adolescents attending school are tobacco users and several factors influence them to initiate tobacco use. Most studies related to tobacco use are quantitative, whereas qualitative studies exploring adolescents' smoking behavior and their views, knowledge and experiences are scarce. Objective: To gain a deep understanding of Nepalese adolescents' knowledge and opinions about smoking and reasons for smoking initiation. Subjects: Adolescents from four secondary schools in the Bhaktapur district, Nepal. Methods: Eight focus-group discussions were conducted with 71 adolescents aged 13-16 years and from grades 8-10. Data were analyzed using manifest qualitative content analysis. Results: The participants knew that smoking represents health risks as well as socio-economic risks, but few described the addictive nature of tobacco and health risks related to passive smoking.
Most participants related smoking initiation to the smoking behavior of peers and family members, but easy accessibility to cigarettes, ineffective rules and regulations, and exposure to passive smoking also created environments for smoking. Some expressed confidence to resist peer pressure and refuse to start smoking, but also expressed the need for prevention strategies in schools and for governmental initiatives, such as more strict implementation of tobacco control and regulations to prevent and reduce smoking. Conclusion: Curbing the tobacco epidemic in Nepal requires healthy public policies and multifaceted interventions to address the knowledge gap on health consequences associated with smoking among adolescents, teachers and parents/adults. abstract_id: PUBMED:27103413 A Longitudinal Study of Adolescents' Optimistic Bias about Risks and Benefits of Cigarette Smoking. Objectives: Optimistic bias, a perception that one's own risks are lower than others', can help explain why adolescents smoke cigarettes despite knowing their risks. We examined the extent and changes over time of adolescents' optimistic bias for various smoking-related perceptions of risks and benefits on the aggregate and individual level. Methods: Longitudinal study (6 measurements over 3 years) of 395 adolescents (mean age 14 years, SD=0.4, at baseline) who rated the chance of occurrence of 19 short- and long-term health risks, social risks, addiction, and benefits related to cigarette smoking for self and comparable others. Results: Optimistic bias was consistently found only for addiction (83% of comparisons; 37%-60% of adolescents). Addiction-related optimistic bias decreased significantly with time for "still be smoking in 5 years" (β = -2.44, p < .001) and for "become addicted" (β = -1.71, p < .001). This reduction resulted from a greater decrease in perceived risks for others rather than from an increase in the adolescent's own perceived risk. For other risks and benefits, adolescents were either realistic or pessimistically biased. Conclusions: Smoking-related optimistic bias in adolescents was not as prevalent as past studies showed. Anti-smoking interventions targeting adolescents should emphasize the risk of addiction and personal relevance of addiction. abstract_id: PUBMED:19106420 Perceptions of smoking-related risks and benefits as predictors of adolescent smoking initiation. Objectives: The predictive value of perceptions of smoking-related risks and benefits with regard to adolescent smoking initiation has not been adequately established. We used prospective, longitudinal data to directly test whether smoking-related perceptions predict smoking initiation among adolescents. Methods: We administered surveys assessing perceptions of smoking-related risks and benefits to 395 high school students, beginning at the start of their ninth-grade year. We conducted follow-up assessments every 6 months until the end of 10th grade, obtaining 4 waves of data. Results: Adolescents who held the lowest perceptions of long-term smoking-related risks were 3.64 times more likely to start smoking than were adolescents who held the highest perceptions of risk. Adolescents who held the lowest perceptions of short-term smoking-related risks were 2.68 times more likely to initiate. Adolescents who held the highest perceptions of smoking-related benefits were 3.31 times more likely to initiate.
Conclusions: Findings from this study provide one of the first sets of empirical evidence to show that smoking initiation is directly related to smoking-related perceptions of risks and benefits. Thus, efforts to reduce adolescent smoking should continue to communicate the health risks of smoking and counteract perceptions of benefits associated with smoking. abstract_id: PUBMED:38148780 "I could do almost nothing without digital technology": a qualitative exploration of adolescents' perception of the risks and challenges of digital technology. Background: The fast development of digital technology and of its use at even younger ages is significantly shaping the current generation of adolescents. This is leading to an almost unlimited accessibility that provides a large number of opportunities, but also to many challenges that adolescents have to face. The aim of our study was to explore the perceptions adolescents have of the risks of digital technology. Methods: We conducted online semi-structured interviews as a part of the international Health Behaviour in School-aged Children study. The sample consisted of 15 Slovak adolescents (mean age: 15.33; 20% boys). To analyse our data, we used consensual qualitative research and thematic analysis. Findings: Our findings confirmed that adolescents are aware of the risks associated with the use of digital technology. Regarding the specific types of perceived risks, we identified four main themes: 1. dependence on the functionality of technology; 2. problematic control; 3. vulnerability in the virtual environment; 4. health risks. Adolescents thus want technology that is functional, safe and does not endanger their health. Conclusion: Despite the fact that adolescents know of the risks they may experience due to digital technology, they still use it. Preventive strategies should focus on functionality, safety and healthiness; furthermore, they should support the constant development of adolescents' digital awareness and raise their awareness about effective and non-threatening use of technology. abstract_id: PUBMED:36118554 Social determinants of smoking among school adolescents in Beijing, China. Introduction: Understanding the social determinants that influence adolescent smoking behavior has a meaningful impact on adolescent health. Few studies have simultaneously analyzed the impact of teacher smoking and peer smoking on adolescent smoking. Therefore, the present study aims to investigate the impact of teacher smoking, peer smoking, and other social factors on adolescent smoking. Methods: The participants were drawn from schools in Beijing, China, in 2011, 2013, and 2015, using a multi-stage random group sampling method. The number of schools selected for each year was 160. The study participants were 57240 adolescents aged 8-19 years. The generalized linear model with a binomial distribution and logarithmic link function was used to estimate the influence of social determinants on adolescent smoking behavior. Results: The results show that both teacher smoking and peer smoking were significantly associated with adolescent smoking. Compared to adolescents whose teachers did not smoke, the prevalence ratio (PR) was 1.28 for adolescents with one teacher who smoked and 1.34 for adolescents with two or more teachers who smoked (95% CI: 1.16-1.41/1.23-1.46, p<0.001/0.001, respectively).
Compared to adolescents whose peers did not smoke, the prevalence ratio (PR) was 3.73 for adolescents with one peer who smoked and 8.52 for adolescents with two or more peers who smoked (95% CI: 3.20-4.35/7.48-9.69, p<0.001/0.001, respectively). Conclusions: Teacher smoking and peer smoking are significant social determinants of adolescent smoking. Prevention programs should concurrently target peer groups, school settings, and individual students. abstract_id: PUBMED:30566804 The Tobacco Smoking among Rural Adolescents The article presents the results of a study of the prevalence of tobacco smoking among school adolescents in rural comprehensive schools of the Kaliningrad oblast, obtained within the framework of a large-scale monitoring study of psychoactive substance use by rural school adolescents aged 15-17 years in Russia in 2016-2017. Multistage sampling was applied to select the adolescent cohort. The sample included 1085 adolescents: 531 boys and 554 girls. The analysis demonstrated that the prevalence of tobacco smoking among rural adolescents in the Kaliningrad oblast was 16.8 among boys and 9.2 among girls per 100 adolescents of the corresponding gender and age. Per 100 adolescents aged 15-17 years, 15.8 boys and 8.3 girls smoked every day. In comparison with data from 2013, the rate of everyday smoking decreased by 37% among boys and by 62% among girls. Of adolescents aged 15-17 years, 57.4% of boys and 67.5% of girls had never tried smoking. Among those who had tried, 39.9% of boys and 28.3% of girls continued to smoke. The main motives for tobacco smoking were "motives of dependence"; these motives were mentioned more often by boys (p>0.05). Girls smoked tobacco "for company" 4.3 times more often than boys (p<0.05). The factors affecting "development" and "non-development" of tobacco smoking were established. Indices characterizing the adolescent's family, sources of information about tobacco smoking, academic performance, use of free time, and friends' attitudes toward tobacco smoking were also analyzed. The results demonstrated that, in the hierarchy of factors promoting the development of tobacco smoking in adolescence, first place is taken by the influence of smoking friends, conditioned by the psychological characteristics of adolescence. Adolescents who smoke have rather high (74%) awareness of the harm of smoking; however, since this information is acquired from scattered and often incidental sources, it does not deter them from smoking. Answer: Adolescents have a complex relationship with the risks of smoking, and the evidence suggests that their appreciation of these risks is not straightforward. Some studies indicate that adolescents may overstate the risks of smoking in certain aspects, while underestimating them in others. For instance, a national survey found that adolescents overestimated the risk of lung cancer from smoking but underestimated the fatality of lung cancer and the number of years of life lost due to smoking. Despite these overestimations, young smokers still exhibited optimism about their personal risks of smoking, regardless of their perceptions of objective risk (PUBMED:11429301). Research also shows that adolescents hold significant misperceptions about the health risks, addictive properties, and ease of cessation associated with light cigarettes, believing them to be less harmful and easier to quit than regular cigarettes (PUBMED:15466070).
This illusion of control over health outcomes contributes to an underestimation of the risks associated with smoking light cigarettes. Moreover, adolescents who have smoked or intend to smoke perceive the chance of experiencing smoking-related negative outcomes and addiction as less likely than those who have not smoked or do not intend to smoke. They also perceive the chance of experiencing smoking-related benefits as more likely (PUBMED:15313096). This suggests that adolescents may not fully appreciate the risks of smoking due to a focus on perceived benefits. A longitudinal study found that optimistic bias about smoking-related risks was not as prevalent as previously thought, with the exception of addiction risks. This bias decreased over time, indicating that adolescents may become more realistic about the risks as they age (PUBMED:27103413). Overall, while adolescents are aware of some risks associated with smoking, their understanding is often incomplete or skewed by misperceptions and optimistic biases. These findings highlight the need for targeted health messages and interventions that accurately convey the risks of smoking to adolescents (PUBMED:11429301, PUBMED:15466070, PUBMED:15313096, PUBMED:27103413).
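Several of the studies cited in this answer (notably PUBMED:19106420) quantify the link between perceptions and initiation with logistic regression, reporting results as relative odds between perception groups. The Python sketch below shows the general form of such an analysis; the data are simulated and every variable name and coefficient is illustrative, not drawn from the published studies.

# Illustrative sketch of a perception-to-initiation logistic regression;
# all data below are simulated for demonstration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 395  # cohort size borrowed from the study; everything else is invented

# Hypothetical 0-100 scores for perceived long-term risk and perceived benefit.
risk = rng.uniform(0, 100, n)
benefit = rng.uniform(0, 100, n)
true_logit = -1.0 - 0.03 * (risk - 50) + 0.02 * (benefit - 50)
initiated = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([risk, benefit]))
fit = sm.Logit(initiated, X).fit(disp=False)

# Exponentiated coefficients are odds ratios per 1-point score change; the
# published figures (e.g., 3.64) instead compare extreme perception groups.
print(np.exp(fit.params))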
Instruction: Is anatomic complexity associated with renal tumor growth kinetics under active surveillance? Abstracts: abstract_id: PUBMED:25778696 Is anatomic complexity associated with renal tumor growth kinetics under active surveillance? Introduction: Linear growth rate (LGR) is the most commonly employed trigger for definitive intervention in patients with renal masses managed with an initial period of active surveillance (AS). Using our institutional cohort, we explored the association between tumor anatomic complexity at presentation and LGR in patients managed with AS. Methods And Materials: Enhancing renal masses managed expectantly for at least 6 months were included for analysis. The association between Nephrometry Score and LGR was assessed using generalized estimating equations, adjusting for age, Charlson score, race, sex, and initial tumor size. Results: Overall, 346 patients (401 masses) met the inclusion criteria (18% ≥ cT1b), with a median follow-up of 37 months (range: 6-169). Of these, 44% of patients progressed to definitive intervention after a median of 27 months (range: 6-130). On comparing patients managed expectantly to those requiring intervention, no difference was seen in median tumor size at presentation (2.2 vs. 2.2 cm), whereas significant differences in median age (74 vs. 65 y, P < 0.001), Charlson comorbidity score (3 vs. 2, P<0.001), and average LGR (0.23 vs. 0.49 cm/y, P < 0.001) were observed between groups. Following adjustment, for each 1-point increase in Nephrometry Score sum, the average tumor LGR increased by 0.037 cm/y (P = 0.002). Of the entire cohort, 6 patients (1.7%) showed progression to metastatic disease. Conclusions: The demonstrated association between anatomic tumor complexity at presentation and LGR of clinical stage 1 renal masses under AS may afford a clinically useful cue to tailor individual patient radiographic surveillance schedules and warrants further evaluation. abstract_id: PUBMED:28411913 Active Surveillance for the Small Renal Mass: Growth Kinetics and Oncologic Outcomes. Active surveillance for small renal masses (SRMs) is an accepted management strategy for patients with prohibitive surgical risk. Emerging prospectively collected data support the concept that a period of initial active surveillance in an adherent patient population with well-defined criteria for delayed intervention is safe. This article summarizes the literature describing growth kinetics of SRMs managed initially with observation and oncologic outcomes for patients managed with active surveillance. Existing clinical tools to determine and contextualize competing risks to mortality are explored. Finally, current prospective clinical trials with defined eligibility criteria, surveillance schema, and triggers for delayed intervention are highlighted. abstract_id: PUBMED:28365130 Predictors of growth kinetics and outcomes in small renal masses (SRM ≤4 cm in size): Tayside Active Surveillance Cohort (TASC) Study. Objective: To determine outcomes of small renal masses (≤4 cm) on active surveillance and explore factors which can influence their growth. Patients And Methods: Two hundred twenty-six patients between January 2007 and December 2014 were analysed using a cross-linked healthcare-data methodology and independent review. Cancer-specific and non-specific survival were the primary outcomes. Growth kinetics, factors influencing growth, and need for interventions were secondary outcomes.
Results: 101 (64.4%) solid and 4 (5.9%) cystic SRMs showed growth. 43 SRMs (19.02%) required treatment intervention. Seven patients (7/158; 4.4%) died due to renal cancer at a median follow-up of 21.7 (SD 10.6, range 6-42) months, all in the solid category. Independent review of serial radiological imaging of these seven cases showed that two patients had subtle metastatic disease at the initial presentation, and 5 of the 7 did not adhere to the recommended imaging regimen. 33 patients (33/158; 20.8%) died of other causes, including non-renal cancers (14/158; 8.8%). Multivariate analyses showed that lower eGFR at baseline, co-morbidities and tumour location were independently associated with growth in size. Conclusions: A higher cancer-specific mortality was seen in the present study compared to the reported literature. Independent critical review of imaging of cases with poor outcome underscored the importance of adherence to a robust protocol, including follow-up. Comorbid conditions had a significant impact on growth and overall survival of patients with SRMs. abstract_id: PUBMED:34286596 Association of Clear Cell Likelihood Score on MRI and Growth Kinetics of Small Solid Renal Masses on Active Surveillance. BACKGROUND. The lack of validated imaging markers to characterize biologic aggressiveness of small renal masses (SRMs), defined as those categorized as cT1a and 4 cm and smaller, hinders medical decision-making among available initial management strategies. OBJECTIVE. The purpose of this article was to explore the association of the clear cell likelihood score (ccLS) on MRI with growth rates and progression of SRMs. METHODS. This retrospective study included consecutive SRMs assigned a ccLS on clinical MRI examinations performed between June 2016 and November 2019 at an academic tertiary-care medical center or its affiliated safety net hospital system. The ccLS reports the likelihood that the SRM represents clear cell renal cell carcinoma (ccRCC) from 1 (very unlikely) to 5 (very likely). The ccLS was extracted from clinical reports. Tumor size measurements were extracted from available prior and follow-up cross-sectional imaging examinations, through June 2020. Serial tumor size measurements were fit to linear and exponential growth curves. Estimated growth rates were grouped by the assigned ccLS. Tumor progression was defined by development of large size (> 4 cm in at least two consecutive measurements) and/or rapid growth (doubling of volume within 1 year). Differences among ccLS groups were evaluated using Kruskal-Wallis tests. Correlations between ccLS and growth rate were evaluated by Spearman correlation (ρ). RESULTS. Growth rates of 386 SRMs (100 ccLS 1-2, 75 ccLS 3, and 211 ccLS 4-5) from 339 patients (median age, 65 years; 198 men, 141 women) were analyzed. Median follow-up was 1.2 years. The ccLS was correlated with growth rates by size (ρ = 0.19; p < .001; ccLS 4-5, 9%/year; ccLS 1-2, 5%/year; p < .001) and by volume (ρ = 0.14; p = .006; ccLS 4-5, 29%/year; ccLS 1-2, 16%/year; p < .001). Disease progression (observed in 49 SRMs) was not significantly associated with ccLS group (p = .61). Two patients (0.6%) developed metastases during active surveillance: one ccLS 1 was a type 2 papillary renal cell carcinoma and one ccLS 4 was ccRCC. CONCLUSION. Growth is associated with ccLS in SRMs, with higher ccLS correlating with faster growth. CLINICAL IMPACT. SRMs with lower ccLS may be considered for active surveillance, whereas SRMs with higher ccLS may warrant earlier intervention.
The noninvasive ccLS derived from MRI correlates with growth rate of SRMs and may help guide personalized management. abstract_id: PUBMED:28723504 Active Surveillance for Small Renal Masses: When Less is More. Context: A marked increase in incidentally detected small renal masses (SRMs) has occurred over the past decade. Active surveillance (AS) has emerged as an initial management option for these patients. Objective: (1) To determine selection criteria, assess appropriate imaging modalities and surveillance frequencies, and define triggers for delayed intervention (DI) for patients on AS. (2) To describe oncologic outcomes for patients on AS protocols. Evidence Acquisition: The PubMed database was queried for English language articles using the keywords "surveillance" and "renal mass" or "renal cell carcinoma" or "kidney cancer." The level of evidence, sample size, study design, and relevance to the review were considered as inclusion criteria. Evidence Synthesis: A total of 69 manuscripts were included in the review. Selection criteria at initial evaluation for patients interested in AS include patient-related factors (eg, age, baseline renal function, other comorbidities), tumor-related factors (size, complexity, history of growth, possible renal mass biopsy), and patient preferences (illness uncertainty, quality of life). Cross-sectional imaging is the preferred initial imaging modality. Surveillance imaging should be performed at frequent intervals (3-4 mo) up front; imaging frequency can be reduced over time if favorable growth kinetics are demonstrated. Delayed intervention (DI) should be considered for rapid tumor growth (eg, >0.5 cm/yr), an increase in maximum tumor diameter >3-4 cm, malignant renal mass biopsy results, development of symptoms, or patient preferences. Oncologic outcomes in well-controlled studies demonstrate a metastatic rate of 1-2%. Most patients who undergo DI remain eligible for nephron-sparing approaches; oncologic outcomes are not compromised by DI strategies. Conclusions: A period of initial AS is safe for most patients with SRMs. Management decisions should focus on a thorough assessment of risk-benefit trade-offs, judiciously integrating patient-related factors, tumor-related factors, and patient preferences. Patient Summary: A period of initial active surveillance for kidney masses of ≤4 cm in diameter is safe in most patients. Frequent imaging and follow-up are necessary to determine if the tumor grows. If delayed intervention becomes necessary, cancer outcomes are not compromised by the initial choice of active surveillance when patients adhere to close follow-up regimens. abstract_id: PUBMED:32891507 Results of active surveillance for sporadic renal angiomyolipomas greater than 4 cm: A pledge for active surveillance Introduction: Active surveillance (AS) of sporadic renal angiomyolipomas (AML) is under-utilised because of an old dogma fearing a life-threatening retroperitoneal hemorrhage when tumour size exceeds 4 cm. The objective of this study was to report the outcome of AS in patients with sporadic AML greater than 4 cm. Methods: The results of AS in 35 patients managed for sporadic renal AML greater than 4 cm were analysed. During AS, tumour growth, occurrence of new symptoms and/or complications, discontinuation of AS protocol, reason for discontinuation as well as subsequent treatment options were reported.
Results: Within a median follow-up of 36 months, 16 (46 %) patients discontinued AS at the end of the study period (mean follow-up 55±66, median 36 months). Patients who discontinued AS were more symptomatic at diagnosis but had similar age, mean tumour size and sex ratio. Active treatment-free survival was 66 % at 5 years. Retroperitoneal hemorrhage was reported in 3 (8.5 %) patients. None of these bleedings required transfusion or monitoring in an intensive care unit. Other reasons for discontinuation were pain (37 %), patient preference (19 %), changes in the radiological appearance of the tumour (19 %), and hematuria (6 %). Conclusion: This study showed that AS in AML-bearing patients was feasible even in the setting of tumours larger than 4 cm. More than 50% of the patients were still on AS at 5 years. Discontinuation of AS was not related to bleeding complications in most cases. Level Of Proof: 3. abstract_id: PUBMED:34116937 Clinical and radiological findings for small renal masses under active surveillance. Objective: To analyze the experience performing active surveillance (AS) of small renal masses (SRMs) in our center and to correlate the evolution of SRMs under AS with clinical and radiological findings. Methods: Patients on AS for SRMs in our center between January 2012 and May 2020 were included. Growth rate (GR) per year was analyzed and correlated with radiographic features. Patients with growth kinetics higher than 5 mm/year during follow-up were offered active treatment. Results: 73 patients were included in AS: the mean age was 75.7 years, the mean initial tumour size 21.2 mm, and the mean growth rate 2.05 mm/year. Around 60 % had an ASA score of 3. The tumor size did not change over time in 43% of cases; in 4% we noticed a regression in size and in 52% of cases growth during follow-up (38% at 1-5 mm/year and 14% at more than 5 mm/year). Delayed active treatment was indicated in 16 cases (21%). Treatments applied were as follows: 2 radiofrequency ablations, 6 radical nephrectomies, and 8 partial nephrectomies. A weak correlation was found between initial size and growth rate (r = 0.38, P = 0.02). No significant association was detected regarding any of the analyzed radiological findings and GR. With a mean follow-up time of 33 months, none of the patients presented metastatic progression. Conclusion: Active surveillance is a feasible option for management of SRMs in selected patients without jeopardizing oncological safety. In our series, no clinical or radiological characteristics for predicting tumour growth were found. abstract_id: PUBMED:30105557 Imaging Protocols for Active Surveillance in Renal Cell Carcinoma. Purpose Of Review: To review the growth kinetics of small renal masses and available imaging modalities for mass characterization and surveillance, highlight current organizational recommendations for the active surveillance of small renal masses, and discuss the most recently reported oncological outcomes of patients as they relate to various surveillance imaging protocols and progression to delayed intervention. Recent Findings: Overall, organizational guideline recommendations are broad and lack specifics regarding timing and modality for follow-up imaging of small renal masses. Additionally, despite general consensus in the literature about certain criteria to trigger delayed intervention, there exist no formal guidelines.
Active surveillance of small renal masses is an acceptable management strategy for patients with prohibitive surgical risk; however, standardized imaging protocols for surveillance are lacking, as are randomized, prospective trials to evaluate the ideal follow-up protocol. abstract_id: PUBMED:29625756 Active Surveillance for Localized Renal Masses: Tumor Growth, Delayed Intervention Rates, and >5-yr Clinical Outcomes. Background: Active surveillance (AS) has gained acceptance as a management strategy for localized renal masses. Objective: To review our large single-center experience with AS. Design, Setting, And Participants: From 2000 to 2016, we identified 457 patients with 544 lesions managed with AS from our prospectively maintained kidney cancer database. A subset analysis was performed for patients with ≥5-yr follow-up without delayed intervention (DI). Outcome Measurements And Statistical Analysis: Linear growth rates (LGRs) were estimated using linear regression for the initial LGR (iLGR) AS interval and the entire AS period. Overall survival (OS) and cumulative incidence of DI were estimated with Kaplan-Meier methods utilizing iLGR groups, adjusting for covariates. DI was evaluated for association with OS in Cox models. Results And Limitations: Median follow-up was 67 mo (interquartile range [IQR] 41-94 mo) for surviving patients. Cumulative incidence of DI (n=153) after 1, 2, 3, 4, and 5 yr was 9%, 22%, 29%, 35%, and 42%, respectively. Median initial maximum tumor dimension was 2.1 cm (IQR 1.5-3.1 cm). Median iLGR and overall LGR were 1.9 (IQR 0-7) and 1.9 (IQR 0.3-4.2) mm/yr, respectively. Compared with the no growth group, low iLGR (hazard ratio [HR] 1.25, 95% confidence interval [CI] 0.82-1.91), moderate iLGR (HR 2.1, 95% CI 1.31-3.36), and high iLGR (HR 1.87, 95% CI 1.23-2.84) were associated with DI (p=0.003). The iLGR was not associated with OS (p=0.8). DI was not associated with OS (HR 1.34, 95% CI 0.79-2.29, p=0.3). Five-year cancer-specific mortality (CSM) was 1.2% (95% CI 0.4-2.8%). Of 99 patients on AS without DI for >5 yr, one patient metastasized. Conclusions: At >5 yr, AS±DI is a successful strategy in carefully managed patients. DI often occurs in the first 2-3 yr, becoming less likely over time. Rare metastasis and low CSM rates should reassure physicians that AS is safe in the intermediate to long term. Patient Summary: In this report, we looked at the outcomes of patients with kidney masses who elected to enroll in active surveillance rather than immediate surgery. We found that patients who need surgery are often identified early and those who remain on active surveillance become less likely to need surgery over time. We concluded that active surveillance with or without delayed surgery is a safe practice and that, when properly managed and followed, patients are unlikely to metastasize or die from kidney cancer. abstract_id: PUBMED:23616172 Active surveillance: concept for renal cell carcinoma? More than 80% of renal cell carcinomas are currently being detected incidentally by ultrasonic imaging and often affect elderly patients with substantial comorbidities. Surgical options cannot be applied in this cohort as routinely as in younger patients, as in some cases the perioperative risk factors may outweigh the risk of tumor-related death. In this context the concept of active surveillance for localized renal cell cancer in elderly patients is becoming increasingly important.
The aim of active surveillance is to avoid surgery-related adverse effects and to achieve a controlled observation of tumor behavior with an option of delayed intervention in cases of aggressive clinical tumor features. This review sheds light on the current status of this strategy. Answer: Yes, anatomic complexity is associated with renal tumor growth kinetics under active surveillance. A study exploring the association between tumor anatomic complexity at presentation and linear growth rate (LGR) in patients managed with active surveillance (AS) found that for each 1-point increase in Nephrometry Score sum, the average tumor LGR increased by 0.037 cm/year (P = 0.002) (PUBMED:25778696). This suggests that anatomic complexity, as measured by the Nephrometry Score, may be a clinically useful indicator to tailor individual patient radiographic surveillance schedules.
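For context on the growth-kinetics measurements these surveillance studies rely on: the linear growth rate is the slope of a regression of maximal tumor diameter on time, and the ccLS study (PUBMED:34286596) also fit exponential growth to capture relative, volume-like change. A minimal Python sketch follows; the scan dates and sizes are invented for illustration.

# Minimal sketch of estimating a linear growth rate (LGR, cm/yr) from
# serial tumor diameter measurements; the scan data are hypothetical.
from datetime import date
import numpy as np

scans = [
    (date(2018, 1, 10), 2.2),   # (scan date, maximal diameter in cm)
    (date(2018, 7, 22), 2.3),
    (date(2019, 2, 3), 2.5),
    (date(2019, 12, 15), 2.6),
]

t0 = scans[0][0]
years = np.array([(d - t0).days / 365.25 for d, _ in scans])
sizes = np.array([s for _, s in scans])

# Ordinary least-squares slope = linear growth rate in cm/yr.
slope, intercept = np.polyfit(years, sizes, 1)
print(f"LGR = {slope:.2f} cm/yr")

# An exponential fit, as in the ccLS study, regresses log(size) on time;
# its slope is the relative growth rate.
log_slope, _ = np.polyfit(years, np.log(sizes), 1)
print(f"relative growth = {100 * log_slope:.1f} %/yr")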
Instruction: Are routine postoperative radiographs necessary during the first year after posterior spinal fusion for idiopathic scoliosis? Abstracts: abstract_id: PUBMED:31202373 Do We Need Postoperative Chest Radiographs After Posterior Spinal Fusion for Adolescent Idiopathic Scoliosis? Study Design: The question was addressed in three ways: (1) a query of the Kids' Inpatient Database (KID) to obtain nationally representative data; (2) retrospective review of cases at a single institution; (3) survey of Scoliosis Research Society (SRS) spine surgeons. Objectives: Evaluate the rate of immediate postoperative pulmonary complications, risk factors, and relevant surgeon practice patterns, to determine the usefulness of routine postoperative chest radiographs after posterior spinal instrumentation and fusion (PSIF) for adolescent idiopathic scoliosis (AIS). Summary Of Background Data: Routine postoperative chest radiography after PSIF for AIS is performed in many institutions to evaluate for acute pulmonary complications, particularly pneumothorax (PTX). The incidence of pneumothorax and its effect on management are unknown. Methods: The frequency of PTX and surgical intervention were recorded. We evaluated associations between PTX and patient demographics or comorbidities, as well as survey respondent demographics and their practice patterns. Results: In the KID data sets, the risk of PTX after PSIF for AIS patients was 0.3% (30/9,036), with intervention required in 13.3% (4/30) of PTX-positive patients (0.04% of all cases). Review of cases at our institution revealed a PTX rate of 3.3% (8/244) by radiology report. No surgical intervention was required. Patients with PTX had, on average, an increased number of vertebrae fused (p = .012), a proximal thoracic scoliosis curve location (p = .009), and/or an intraoperative blood transfusion (p = .002). SRS respondents reported a PTX risk of 0.8% (87/11,318), and 32.2% (89/276) of respondents indicated routine use of postoperative chest radiographs. Of those, 46.1% (41/89) specified willingness to change practice patterns if provided evidence of low PTX rates. Conclusions: Pneumothorax is uncommon after PSIF for AIS. The need for intervention is even less common. Routine postoperative chest radiographs are of questionable value after PSIF for AIS. Level Of Evidence: Level III. abstract_id: PUBMED:24840654 Are routine postoperative radiographs necessary during the first year after posterior spinal fusion for idiopathic scoliosis? A retrospective cohort analysis of implant failure and surgery revision rates. Background: Radiographs are routinely obtained at postoperative visits during the first year after posterior spinal fusion (PSF) for idiopathic scoliosis (IS). The goal of this study was to determine how often radiographic findings change postoperative care. Methods: A total of 227 consecutive patients aged 10 to 21 years who underwent surgery for IS at our institution from 2004 to 2010 were identified. Charts were reviewed to determine the frequency of the following clinical symptoms during the first year after surgery: pain greater than expected, implant prominence, and sensory/motor disturbance. Radiographs were reviewed to identify implant failure and curve change. Logistic regression analysis was used to identify clinical symptoms associated with treatment deviation. Results: During the first year after surgery, an average of 6 (range, 2 to 12) radiographs were obtained from patients during an average of 3 (range, 2 to 10) follow-up visits.
Pain (14%) was the most common symptom. Neurologic symptoms (13%) and implant prominence (4%) were less common. Implant failure was identified in 4 subjects (2%), of which 3 required revision surgery. The incidence of revision surgery was 2.9/1000 radiographs (95% confidence interval, 0.6-8.3). Curve progression >5 degrees in the uninstrumented curve occurred in 2 patients (0.9%). Curve progression did not result in a change in treatment for any of the patients. Pain was the only clinical symptom associated with implant failure (P=0.0047). 169/227 patients did not have any symptoms and only one of these underwent revision surgery. The sensitivity of a clinical test, which uses the presence of pain to guide the need for radiographic evaluation and rule out implant failure, was 75%, specificity 87%, positive predictive value 10%, and negative predictive value 99.5%. Conclusions: After obtaining baseline postoperative radiographs, additional radiographs during the first year after surgery for IS may not be required in the absence of clinical symptoms. Reducing the number of radiographs taken during the first year after surgery for IS in patients without symptoms can reduce radiation exposure to patients and health care costs without affecting treatment. Level Of Evidence: Level II, Diagnostic Study. abstract_id: PUBMED:27927297 Is Routine Postoperative Radiologic Follow-up Justified in Adolescent Idiopathic Scoliosis? Study Design: Retrospective review. Objectives: To determine whether routine periodic radiographic examination is worthwhile in adolescent idiopathic scoliosis (AIS) patients undergoing instrumented fusion with third-generation implants. Summary Of Background Data: In common practice, patients who have undergone surgery for idiopathic scoliosis are followed up for a minimum of 2 years by clinical assessment and routine radiographic study at 3, 6, 12, and 24 months. The radiation related to these examinations is not negligible. To our knowledge, the use of routine radiographic follow-up after posterior spinal fusion for adolescent idiopathic scoliosis has not been evaluated. Methods: We retrospectively analyzed full-spine X-rays and clinical records from the first 2 postoperative years of 43 patients. We sought any clinical feature (eg, pain, deformity progression, material protrusion) justifying X-ray, and any relevant radiologic finding (eg, loss of correction, instrumentation loosening, junctional kyphosis). Results: Excluding the immediate postoperative films, 14.8% of X-rays were clinically justified, 8.3% were associated with a relevant finding, and 4.3% led to a therapy change. All patients with clinical deformity progression had a relevant X-ray finding. Pain was associated with a relevant finding in 23.5% of cases (positive predictive value, 0.1); 7.4% of films with no clinical justification showed a relevant finding (negative predictive value, 0.86). Lower Risser sign increased the risk of having a relevant radiographic finding (p < .05). Conclusions: With the current use of third-generation implants, routine biplanar postoperative X-rays at 3, 6, 12, and 24 months do not seem to be justified in AIS and should be avoided in mature, asymptomatic patients. abstract_id: PUBMED:24480957 Utility of early postoperative radiographs after posterior spinal fusion for adolescent idiopathic scoliosis. Study Design: Retrospective. 
Objective: Evaluate the ability of serial full-length spine radiographs to detect clinically significant implant-related (IR) and non-implant-related (NIR) radiographical abnormalities in the first 6 months after routine posterior spinal fusion for adolescent idiopathic scoliosis (AIS). Summary Of Background Data: Patients with AIS are exposed to repeated doses of ionizing radiation during the course of their treatment with potential consequences for their long-term health. Postoperative algorithms for AIS often involve frequent standing plain radiographs during the first 6 months after surgery to detect IR and NIR abnormalities that may impact a patient's clinical course. However, the actual clinical utility of such repeated spine radiographs has not been studied. Methods: Retrospective chart and radiographical review was conducted at a single institution for patients with AIS after posterior spinal fusion between 2007 and 2012. Radiographical abnormalities identified on full-length spine radiographs or additional imaging modalities in the first 6 postoperative months were grouped into IR or NIR findings. The findings were considered clinically significant if they resulted in a deviation from an anticipated postoperative course or additional interventions. Results: For 129 patients, 761 full-length spine radiographs were obtained in the first 6 postoperative months. Eight patients (11 radiographs) had IR or NIR abnormalities, with only 2 of these considered clinically significant. Seven of the remaining 121 were identified to have IR or NIR abnormalities using other imaging modalities, with 2 considered clinically significant. The sensitivity and specificity of a full-length spine radiograph for detecting a clinically significant abnormality were 50% and 95%, respectively. Conclusion: Routine full-length spine radiographs used with high frequency in the first 6 months after posterior spinal fusion rarely detected a radiographical abnormality that resulted in a meaningful change to a patient's clinical management. Blanket postoperative screening algorithms should be reconsidered to minimize patient radiation exposure. abstract_id: PUBMED:24048089 Do routine radiographs within the first two years following pediatric posterior spinal fusion prompt revision surgery? Study Design: This is a retrospective review of 265 consecutive patients with scoliosis treated by posterior spinal fusion (PSF) and 2990 postoperative radiographs at a single institution. Objective: To document implant-related complications on postoperative radiographs within the first 2 years after PSF and determine if these led to additional surgical procedures. Summary Of Background Data: To date, there is an absence of studies investigating the utility of radiographs after pediatric PSF surgery. Methods: The patients' average age was 14.9 years (8.2-21.8 yr). Positive radiographical findings included implant loosening, fracture, migration, or loss of fixation. We also noted which patients required additional surgery for a radiographical finding and/or clinical finding (neurological deficit, intractable back pain, fever, and elevated erythrocyte sedimentation rate, c-reactive protein, or white blood cell count). Results: At an average follow-up of 16.3 months (12-24), only 18 (0.60%) radiographs demonstrated a positive finding. Of these, only 5 (0.17%) radiographs (along with clinical presentation) prompted revision. No patient returned to the operating room purely from a radiographical finding.
The remaining 13 (0.37%) abnormal plain images were not associated with a clinical abnormality and were managed nonoperatively. The timing of the second operation ranged from 1 day to 23 months (average, 11 mo). Postoperative plain radiographs had a sensitivity of 26.5%, a specificity of 99.5%, and a positive predictive value of 25%. Conclusion: Only 0.60% of postoperative images (18/2990) demonstrated an implant-related complication, and in the absence of clinical indications, none of these patients required surgical intervention. Isolated postoperative radiographs did not lead to any change in management, and consideration should be given to reviewing current protocols for plain radiographs as a monitoring tool after PSF. abstract_id: PUBMED:28259262 Can Postoperative Radiographs Accurately Identify Screw Misplacements? Study Design: Retrospective case series. Objective: The objective of this study was to determine the accuracy of postoperative radiographs in assessing screw placement. Summary Of Background Data: Previously defined criteria are frequently employed to determine pedicle screw placement on intraoperative supine radiographs. Postoperatively, radiographs are typically used as a precursor to identify screws of concern, and a computed tomographic (CT) scan is typically ordered to confirm screw safety. Methods: First, available postoperative PA and lateral radiographs were reviewed by 6 independently blinded observers. Screw misplacement was assessed using previously defined criteria. A musculoskeletal radiologist assessed all CT scans for screw placement. Pedicle screw position was classified either as acceptable or misplaced. Misplacements were subclassified as medial, lateral, or anterior. Results: One hundred four patients with scoliosis or kyphosis who underwent posterior spinal fusion and had a postoperative CT scan available were included. In total, 2,034 thoracic and lumbar screws were evaluated. On CT scan, 1,772 screws were found to be acceptable, 142 were laterally misplaced, 30 medially, and 90 anteriorly. Of the 30 medially placed screws, 80% to 87% of screws were believed to be in positions other than medial, with a median of 73% (63% to 92%) of these screws presumed to be in normal position. Similarly, of the 142 screws placed laterally, 49% to 81% of screws were identified in positions other than lateral, with a median of 77% (59% to 96%) of these screws felt to be in normal position. Of the 90 anteriorly misplaced screws, 16% to 87% of screws were identified in positions other than anterior, with 72% (20% to 98%) identified as normal. The criteria produced a median 52% sensitivity, 70% specificity, and 68% accuracy across the 6 observers. Conclusion: Radiography is a poor diagnostic modality for observing screw position. Level Of Evidence: Level IV.
All patients underwent biplanar stereoradiography before hospital discharge, at 4 months postoperatively, and at last follow-up. Fifteen parameters (eight coronal and seven sagittal) reflecting correction and spinal alignment were measured and compared. The incidence of implant misplacement, whether or not it required surgical revision, was recorded. Results: In total, 100 patients were included. A significant difference was found for 12 out of the 15 (80%) parameters between the first erect radiograph and the 4-month follow-up visit, including the CVA and the SVA, which are commonly used to assess postoperative alignment. Clavicle, UIV and LIV tilts also decreased significantly at 4 months postoperatively. In contrast, no significant change occurred for the same parameters between the 4-month visit and latest follow-up. In nine patients, a pedicle screw was considered misplaced on the first radiograph, but all patients remained asymptomatic and no revision was performed. Conclusion: There is no need for additional immediate postoperative radiographs in AIS if an intraoperative radiograph has already been taken. This finding could help reduce radiation exposure in immature patients and should be further studied in other etiologies. abstract_id: PUBMED:21464768 Prediction of postoperative trunk imbalance after posterior spinal fusion with pedicle screw fixation for adolescent idiopathic scoliosis. The objective of this retrospective study was to explore which radiographic parameters, measured immediately after posterior spinal fusion with pedicle screw fixation for adolescent idiopathic scoliosis (AIS), best correlate with subjacent disc wedging at a minimum of 2-year follow-up. Sixty-four consecutive AIS patients who underwent posterior pedicle screw-only instrumentation were studied. Preoperative and postoperative radiographs were obtained to measure various parameters regarding global coronal, shoulder, sagittal, and regional balance. Specific correlation of these parameters to selected 2-year postoperative disc wedging and lowest-instrumented vertebra (LIV) tilt and translation were analyzed. The average lateral disc opening changed from 4.59±4.75 preoperatively to 1.46±2.82 at 2 weeks and 2.81±6.43 at 2 years postoperatively. Two-year postoperative lateral disc opening significantly correlated with the 2-week postoperative lateral disc opening, C7 plumbline relative to the posterior-superior corner of the first sacral vertebra distance, and LIV-center sacral vertical line (CSVL) distance (r=0.7433, P<0.0001). The two-year postoperative LIV tilt significantly correlated with the 2-week postoperative LIV tilt, T12-LIV lordosis, LIV-CSVL distance, and C7-CSVL distance (r=0.8879, P<0.0001). Two-year postoperative LIV-CSVL significantly correlated with 2-week postoperative LIV-CSVL distance and lateral disc opening (r=0.6104, P<0.0001). Two-year postoperative disc wedging, LIV tilt, and LIV translation occurred most often when disc wedging and LIV deviation or obliquity existed immediately postoperatively. This study identified a potential indicator for AIS repair. Preoperative surgical planning and intraoperative correction are important for avoiding subjacent regional imbalance after scoliosis fusion. abstract_id: PUBMED:28554809 Five-year outcomes of the First Distal Uninstrumented Vertebra after posterior fusion for Adolescent Idiopathic Scoliosis Lenke 1 or 2.
Background: Tilt of the First Distal Uninstrumented Vertebra (FDUV) reflects changes in the main curve and compensatory lumbar curve after posterior fusion to treat thoracic Adolescent Idiopathic Scoliosis (AIS). Hypothesis: FDUV tilt 5 years or more post-fusion depends chiefly on reduction of the main curve and on other factors such as selection of the last instrumented vertebra. Material And Method: A multicenter retrospective cohort of 182 patients with Lenke 1 or 2 AIS treated with posterior instrumentation and followed up for a mean of 8 years and a minimum of 5 years was studied. The patients were divided into two groups based on whether tilt of the upper endplate of the FDUV was ≤5° or >5° at last follow-up. Variables associated with tilt were identified by multiple logistic regression. Results: Six variables were significantly associated with FDUV tilt: percentage of correction at last follow-up, correction loss, lumbar modifier B, number of instrumented vertebrae, inclusion within the instrumentation of the distal neutral vertebra, and inclusion within the instrumentation of the lowest vertebra intersected by the central sacral vertical line. Discussion And Conclusion: The main variables associated with FDUV tilt ≤5° were a final correction percentage ≥60% and absence of correction loss between the postoperative period and last follow-up. Given the stable reduction provided by contemporary instrumentations, we recommend selective thoracic fusion of Lenke 1 or 2 AIS with lumbar modifiers A, B, and C. The lowest instrumented vertebra should be either the neutral vertebra or the vertebra intersected by the central sacral vertical line if it is distal to the neutral vertebra. Level Of Evidence Iv: Retrospective multicenter study. abstract_id: PUBMED:33442850 Preoperative supine traction radiographs often result in higher Lenke classifications than supine bending radiographs in adolescent idiopathic scoliosis. Study Design: Retrospective chart and radiographic review. Objective: The purpose of this study is to determine if both traction and side-bending radiographs yield the same Lenke classification. Supine side-bending radiographs are used to evaluate curve flexibility and assign Lenke classification in Adolescent Idiopathic Scoliosis (AIS). Supine traction radiographs are another tool used by treating surgeons to gauge flexibility and appropriate levels for spinal fusion in AIS. Methods: Retrospective chart and radiographic review were performed on AIS patients that underwent a posterior spinal fusion from 2008 to 2017. Cobb angles and Lenke classifications were determined on all upright posteroanterior (PA) spine radiographs, supine traction radiographs, and four supine bending radiographs. Statistical analyses using independent t-tests and chi-square tests, as appropriate, compared patients with and without discordant Lenke classifications, with the p value for statistical significance set at < 0.05. Results: 184 patients met inclusion criteria, 36 males and 148 females. The average Cobb angle for the proximal thoracic (PT) curve was 27.2°, main thoracic (MT) curve was 60.5°, and thoracolumbar/lumbar (TL/L) curve was 48.0°. Significantly less curve correction was found with supine traction radiographs compared with bending radiographs: PT (23.1° vs. 18.9°, p < 0.001), MT (38.9° vs. 37.9°, p = 0.015), and TL/L (25.9° vs. 18.0°, p < 0.001). Lenke Classification was found concordant in 151/184 (82.1%).
Traction views in the discordant Lenke classification group demonstrated less curve correction than those in the concordant group: PT (27.4° vs. 22.1°, p = 0.011), MT (45.3° vs. 37.5°, p < 0.001), and TL/L (29.3° vs. 25.1°, p = 0.019). Conclusion: Supine traction and supine bending radiographs provided a concordant Lenke classification 82.1% of the time. However, supine traction radiographs demonstrated less curve correction, yielded a higher Lenke classification, and underestimated the TL/L curve correction to a greater degree. A single supine traction film is not an adequate substitute for side-bending radiographs when determining Lenke classification in patients with Adolescent Idiopathic Scoliosis. Level Of Evidence: III. Answer: The necessity of routine postoperative radiographs during the first year after posterior spinal fusion for idiopathic scoliosis has been questioned in several studies. A retrospective cohort analysis of implant failure and surgery revision rates found that additional radiographs during the first year after surgery may not be required in the absence of clinical symptoms. The study suggested that reducing the number of radiographs could reduce radiation exposure and health care costs without affecting treatment, as pain was the only clinical symptom associated with implant failure (PUBMED:24840654). Another retrospective review concluded that routine biplanar postoperative X-rays at 3, 6, 12, and 24 months do not seem to be justified in AIS with third-generation implants, especially in mature, asymptomatic patients (PUBMED:27927297). Similarly, a study evaluating the utility of early postoperative radiographs found that routine full-length spine radiographs in the first 6 months after posterior spinal fusion rarely detected abnormalities that resulted in a meaningful change to a patient's clinical management, suggesting that blanket postoperative screening algorithms should be reconsidered to minimize patient radiation exposure (PUBMED:24480957). Furthermore, a retrospective review of 265 consecutive patients found that isolated postoperative radiographs did not lead to any change in management, and only a small percentage of images demonstrated an implant-related complication, none of which required surgical intervention in the absence of clinical indications (PUBMED:24048089). In contrast, a study that evaluated the relevance of early postoperative standing radiographs before hospital discharge in AIS found significant differences in spinal alignment parameters between the first erect radiograph and the 4-month follow-up visit, indicating that immediate postoperative radiographs may not be necessary if an intraoperative radiograph has already been taken (PUBMED:30972568). Overall, the evidence suggests that routine postoperative radiographs during the first year after posterior spinal fusion for idiopathic scoliosis may not be necessary in asymptomatic patients and that the practice could be modified to reduce unnecessary radiation exposure and health care costs without compromising patient care.
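The screening statistics quoted throughout this entry (for example, sensitivity 26.5%, specificity 99.5%, and positive predictive value 25% for postoperative radiographs in PUBMED:24048089) all derive from a 2x2 confusion table. Below is a minimal illustrative sketch in Python; the cell counts are hypothetical, chosen only because they reproduce those three reported percentages, since the abstract does not give the actual counts.

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard screening-test statistics from a 2x2 confusion table.

    Here "positive" means the radiograph was read as abnormal, and
    "true" means the finding was confirmed (e.g., by CT or clinical course).
    """
    return {
        "sensitivity": tp / (tp + fn),  # abnormal cases correctly flagged
        "specificity": tn / (tn + fp),  # normal cases correctly cleared
        "ppv": tp / (tp + fp),          # flagged images that are truly abnormal
        "npv": tn / (tn + fn),          # cleared images that are truly normal
    }

# Hypothetical counts: sensitivity = 9/34 = 26.5%, specificity = 5400/5427 = 99.5%,
# PPV = 9/36 = 25%, matching the percentages reported in the abstract.
print(screening_metrics(tp=9, fp=27, tn=5400, fn=25))
```

These toy numbers illustrate the pattern the answer above relies on: for a low-prevalence complication, even a highly specific test produces mostly false positives among its alarms, so isolated abnormal radiographs rarely change management.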
Instruction: The premaxillary space: a location for filler injection? Abstracts: abstract_id: PUBMED:24479675 The premaxillary space: a location for filler injection? Background: A four-layer fat structure is found in the anterior cheek region. The deepest of these fat layers is the premaxillary fat. Objective: To determine whether the premaxillary fat is located in a closed compartment and whether injection of substances affects the appearance of the face. Methods & Materials: Filler or arterial mass (0.5-0.8 mL) was injected in cadavers embalmed after Thiel, and gross dissections, horizontal and sagittal sections, and three-dimensional reconstructions with computed tomography were performed. Results: The premaxillary fat is located in a semiopen space, the premaxillary space, with mainly bony and muscular borders. The injected substances spread into laterally and superficially located tissues. Conclusion: Injection of substances into this space leads to discrete improvements in the appearance of the nasolabial groove. abstract_id: PUBMED:32390776 Premaxillary Deficiency: Techniques in Augmentation and Reconstruction. Progressive premaxillary retrusion is a common sequela of the facial aging process. In most cases, this manifests with central maxillary recession. Central maxillary insufficiency is also commonly encountered within certain ethnic communities, or in cleft lip nasal deformity, and may represent a challenge for the plastic and reconstructive surgeon attempting correction in the setting of facial contouring, rhinoplasty, or reconstruction following oncologic resection or trauma. Aesthetically, premaxillary retrusion may be coincident with an acute nasolabial angle and ptotic nasal tip. Minor deformities may be addressed with use of either alloplastic implants, autogenous tissue, lipotransfer, or injectable filler. Larger composite defects may require reconstruction with implementation of free tissue transfer. Herein, we describe techniques that aim to augment or reconstruct the premaxillary region in the context of nasal deformity, osseous resorption, or composite maxillofacial defects. abstract_id: PUBMED:33713373 Comparison of hyaluronic acid filler ejection pressure with injection force for safe filler injection. Background: Owing to the increase in the number of medical procedures performed every year, the frequency of filler injection-related complications has also increased. Although slow, gentle injections with low pressure are usually considered to be safe, the differences in ejection pressure during a filler injection remain to be determined. This study aimed to identify the optimal pressure during filler injections and evaluate its capacity to overcome the arterial blood pressure and reflux the filler material. Methods: Twelve combinations of four hyaluronic acid (HA) fillers with different rheological properties and three needles of different diameters were assessed to determine the force exerted by the injection model. The ejection forces corresponding to varying injection forces were measured and HA filler ejection pressures were calculated. Results: The highest and lowest injection forces were achieved using 30- and 25-G needles, respectively. In accordance with the expected ejection force, high ejection pressure was achieved by administering the HA filler under a high injection force. Irrespective of the injection force, the ejection pressure was likely to be higher than the vascular pressure at the time of entry into the vessel, rendering the injection dangerous.
Conclusion: During filler injection, penetration of blood vessels and intravascular injection can be avoided by approaching the target area gently using a cannula or needle. abstract_id: PUBMED:31077308 Premaxillary Injection for Perioral Rejuvenation and Lip Enhancement. Background: This is the first study to evaluate the effect of premaxillary filler injection on nasal tip projection, upper lip projection, and upper lip vermilion height. Objectives: The primary objective of this study was to analyze the change in nasal tip projection (measured by the Goode ratio) and the change in upper lip projection (measured by the Z angle) following premaxillary hyaluronic acid injection. A secondary objective was to measure the change in upper lip vermilion height. We hypothesized that treated subjects will show an increase in nasal tip projection, upper lip projection, and upper lip vermilion height. Methods: Twenty volunteer patients with signs of perioral aging or poor upper lip projection were enrolled in this prospective cohort study and underwent premaxillary hyaluronic acid filler injection between November 2017 and June 2018. Nasal tip projection, upper lip projection, and upper lip vermilion height were assessed from baseline and posttreatment photographs based on the Goode ratio, Z angle, and lip vermilion height ratio, respectively. Results: No significant change was noted between pre- and posttreatment Goode ratio measurements (P = 0.841). There was a significant decrease in Z angle and therefore significant increase in upper lip projection with treatment (P < 0.001). The lip vermilion height ratio demonstrated a trend of increased upper lip vermilion height but this did not achieve statistical significance (P = 0.561). Conclusions: Premaxillary filler treatment resulted in a significant increase in upper lip projection. Premaxillary filler injection when performed in a safe manner is a valuable treatment option for perioral rejuvenation. Level Of Evidence: 4. abstract_id: PUBMED:36880741 Anatomical proposal for hyaluronic acid filler injection in subzygomatic arch depression: A dual-plane injection technique. A subzygomatic arch depression creates a bulky face outline. To smooth these depressions and correct facial contours, hyaluronic acid filler injection methods are frequently used. However, the complexity of the subzygomatic region makes it difficult for practitioners to effectively add volume to the region. The conventional single-layer injection has limitations, including insufficient volume addition and unwanted undulations and spreading. The anatomical factors were reviewed with ultrasonography, three-dimensional photogrammetric analysis, and cadaver dissection. In this anatomical study, a more precisely demarcated dual-plane injection technique was proposed on the basis of present anatomical knowledge. This study presents novel anatomical findings related to hyaluronic acid filler injection in the subzygomatic arch depression. abstract_id: PUBMED:30966492 Thermal Conductivity of Aluminosilicate- and Aluminum Oxide-Filled Thermosets for Injection Molding: Effect of Filler Content, Filler Size and Filler Geometry. In this study, epoxy molding compounds (EMCs) with aluminosilicate (AlS) and aluminum oxide (AlO) as fillers were fabricated with a twin-screw extruder (TSE) and shaped into plate samples using injection molding. AlS and AlO, electrically insulating mineral materials, were used as fillers to improve the thermal conductivity (λc) of the composites.
Composites with different filler particle sizes, filler contents and filler geometry were fabricated and the influence of these variables on the λc was studied. The λc of composites was measured with the hot-disk method. The distribution of fillers in composites was observed using scanning electron microscopy (SEM). Using the Lewis-Nielsen equation (a standard form is written out after this entry), experimental values of λc were compared with those predicted. The predicted results fit the experimental values well. The results showed that λc increases significantly when the filler content of the composites exceeds approximately 50 vol%. abstract_id: PUBMED:32884186 HA Filler Injection and Skin Quality-Literature Minireview and Injection Techniques. Aging of the face produces many changes in the structure and integrity of the skin and other anatomical aspects, the three major signs of which are volume loss, sagginess, and skin quality change. Surgical or nonsurgical procedures, including thread lifting, radiofrequency, and filler injection, make it possible to slow or even reverse these mechanisms. Filler injections have become particularly popular in recent years, owing to their convenience, efficacy, and long-lasting results. In this minireview, the author outlines the basic mechanisms behind facial aging, discusses current literature on each aspect of facial aging, and offers injection protocol recommendations based on past literature and clinical experience. abstract_id: PUBMED:30005788 Injection Rhinoplasty Using Filler. Rhinoplasty is a commonly performed cosmetic surgery in Asia. Rhinoplasty using filler is preferred because it has fewer side effects and a shorter downtime. The part of the external nose between the skin and the bone or cartilages consists of 4 layers. To prevent vascular compromise, the injection should be placed into the deep fatty layer, preventing embolization. Filler is usually injected in the order of radix, rhinion, tip, and the supratip area. To minimize asymmetry, the surgeon should always mark the midline on the nasal bridge and perform the procedure without deviating from it. abstract_id: PUBMED:37251111 Acute diplopia after glabellar hyaluronic acid filler injection. Purpose: Blindness is a well-known complication of filler injection in the glabellar region. Acute diplopia from filler injection without vision loss is a rare complication that typically results in clinical ophthalmoplegia, which can have permanent sequelae. Here, we describe a patient who presented with acute diplopia with grossly intact full extraocular motility after glabellar hyaluronic acid filler injection that resolved after 1 month. Observations: A previously healthy 43-year-old woman underwent her first hyaluronic acid injection in the glabella and developed immediate binocular double vision with severe pain and skin mottling above her right eyebrow and central forehead. Hyaluronidase injections, nitroglycerin paste, and aspirin were immediately administered. On exam, there was significant skin mottling over the glabella, extending to the forehead and nose, with a small incomitant horizontal and vertical misalignment. No changes to her vision were observed and extraocular motility was grossly full. The rest of her exam was unremarkable. Over the course of the following month, the patient's diplopia resolved, but she developed skin necrosis and scarring. Conclusions And Importance: Proper knowledge of facial and periocular anatomy is critical for practitioners to safely perform filler injections and manage potential complications.
Patients should be counseled about the potential rare risks of these elective procedures. abstract_id: PUBMED:32951305 Case Report: The role of intradermal micro-droplet hyaluronic acid injection in the correction of filler nodules complication. Filler injection has been widely used as a non-surgical intervention for skin rejuvenation and augmentation for the past decade. Hyaluronic acid (HA) is the most commonly used material and has a high safety profile. Various mechanisms, including the promotion of collagen synthesis, are postulated to underlie the efficacy of this powerful skin rejuvenation. As its use has grown, more complications have been seen. Among the non-vascular complications from filler injection, the most frequent one is nodule formation, which can have early or late onset even with HA injection. One cause of nodule formation is poor injection technique, leading to visible skin beads and lumps. The current treatments of these filler side effects mainly aim to remove or reduce excessive or displaced filler product. To the best of our knowledge, there has been no report of using intradermal micro-droplet HA injection to correct filler nodules and simultaneously revitalize skin quality. This novel treatment technique could be a safe alternative for correcting nodule formation. Answer: Yes, the premaxillary space is a location for filler injection. According to a study, the premaxillary fat is located in a semiopen space with mainly bony and muscular borders, and injecting substances into this space can lead to discrete improvements in the appearance of the nasolabial groove (PUBMED:24479675). This area is significant in facial aesthetics and aging, as progressive premaxillary retrusion is a common sequela of the facial aging process, and various techniques, including the use of fillers, can be employed to augment or reconstruct the premaxillary region (PUBMED:32390776). Furthermore, premaxillary filler injection has been evaluated for its effect on nasal tip projection, upper lip projection, and upper lip vermilion height, with significant increases in upper lip projection observed following hyaluronic acid filler injection in this area (PUBMED:31077308). This suggests that the premaxillary space is not only a viable location for filler injection but also an effective one for addressing specific aesthetic concerns related to facial aging and contouring.
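As an aside on one abstract included above, PUBMED:30966492 compares measured composite thermal conductivity against predictions of the Lewis-Nielsen equation without writing the model out. A commonly cited form of that model is given below for reference; this particular form is an assumption, since the abstract itself does not specify which variant the authors used.

```latex
\frac{\lambda_c}{\lambda_m} = \frac{1 + A B \phi}{1 - B \psi \phi},
\qquad
B = \frac{\lambda_f/\lambda_m - 1}{\lambda_f/\lambda_m + A},
\qquad
\psi = 1 + \left(\frac{1 - \phi_m}{\phi_m^2}\right)\phi
```

Here λc, λm, and λf are the thermal conductivities of the composite, matrix, and filler, φ is the filler volume fraction, A is a shape factor, and φm is the maximum packing fraction. The ψ term makes the predicted λc rise steeply as φ approaches φm, which is consistent with the abstract's observation that λc increases significantly above roughly 50 vol% filler.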
Instruction: Anatomy of a supergroup: does a criterion of normal perimetric performance generate a supernormal population? Abstracts: abstract_id: PUBMED:14578433 Anatomy of a supergroup: does a criterion of normal perimetric performance generate a supernormal population? Purpose: To interpret individual results from automated perimeters, a normative database must be developed. Typically, a set of criteria determines those subjects that may be included in the database. This study examined whether a criterion of normal performance on an established perimeter generates a subgroup with supernormal perimetric performance. Methods: The right-eye perimetric results of 100 subjects were analyzed. Subjects had visual acuities of 6/12 or better, no history of eye disease, and normal slit lamp biomicroscopic and ophthalmoscopic examinations. Subjects performed test-retest visual field examinations on a Humphrey Field Analyzer (HFA) 24-2 test (Zeiss Humphrey Systems, Dublin, CA), and on a custom frequency-doubling (FD) perimeter with targets spaced in the same 24-2 pattern. Results: Test-retest correlations (Spearman rank correlation coefficients, rs) for mean defect (MD) and pattern SD (PSD) were 0.65 and 0.40 (HFA), and 0.82 and 0.39 (FD perimeter). Three subjects with HFA MDs in the lower 5% had similarly low MDs on retest, whereas no subject was common between the test and retest for the lower 5% of HFA PSD. Correlations between the HFA and FD test results were 0.41 (MD) and 0.05 (PSD). Based on these correlations, the bias introduced into perimetric probability limits was determined using Monte Carlo simulations. Conclusions: Although a criterion of a normal MD may produce a subpopulation with supernormal perimetric performance, a criterion of a normal PSD is less likely to do so. Also, a criterion on one test type is less likely to create a supernormal group on a different test type. The bias introduced into perimetric probability limits is small. abstract_id: PUBMED:32506269 Normal to supernormal 30-Hz flicker ERGs predict visual function and prognosis in central retinal artery occlusion. Purpose: To determine whether normal to supernormal 30-Hz flicker ERGs are associated with visual function and prognosis in patients with central retinal artery occlusion (CRAO). Methods: A total of 133 patients diagnosed with unilateral acute non-arteritic CRAO were examined with electroretinography (ERG). Thirty-hertz flicker ERG amplitudes were analyzed according to the severity of CRAO, and their correlation with visual function and prognosis was assessed. All patients were categorized into one of three groups according to the flicker ERG amplitude ratios between affected and fellow eyes: ≥ 110%, supernormal; 90-110%, normal; and < 90%, subnormal. Results: Thirty-hertz flicker ERG parameters differed significantly according to the severity of retinal ischemia, i.e., between incomplete versus complete CRAO: mean amplitude (55.89 ± 24.38 > 45.41 ± 23.47 μV, P = 0.029), amplitude ratio (76.1 ± 23.1% > 58.7 ± 25.7%, P < 0.001), and proportion of normal to supernormal flicker ERGs (30.5% > 12.4%, P < .001). One-month and final follow-up visual function and prognosis were better in the normal to supernormal flicker ERG group: 1 month BCVA (1.55 ± 0.65 > 2.05 ± 0.61 logMAR, P < .001), final BCVA (1.32 ± 0.79 > 1.97 ± 0.68 logMAR, P < .001), 1-month VA improvement (- 0.50 ± 0.47 > - 0.10 ± 0.29 logMAR, P < .001), and final VA improvement (- 0.74 ± 0.61 > - 0.18 ± 0.37 logMAR, P < .001).
Conclusions: Normal to supernormal 30-Hz flicker ERGs in eyes with CRAO are associated with milder retinal ischemia and with better visual function and outcomes than subnormal ERGs. abstract_id: PUBMED:31638046 Structural evaluation of preperimetric and perimetric glaucoma. Purpose: To evaluate the diagnostic ability of the macular ganglion cell layer-inner plexiform layer (GCL-IPL) for the detection of preperimetric glaucoma (PPG) and perimetric glaucoma, and to compare it with the peripapillary RNFL. Methods: Three hundred and thirty-seven eyes of 190 patients were enrolled (127 normals, 70 PPG, 140 perimetric glaucoma). Each patient underwent detailed ocular evaluation, standard automated perimetry, and spectral domain optical coherence tomography. Diagnostic abilities of GCL-IPL and RNFL parameters were determined. Data were compared using one-way analysis of variance, Pearson's Chi-square test, and area under the curve (AUC). Results: After adjusting for age, gender, and signal strength, all GCL-IPL and RNFL parameters except mean thickness and disc area differed significantly. Among GCL-IPL thicknesses, inferotemporal had the highest AUC (0.865) for classifying perimetric glaucoma from normals, inferior (0.746) for PPG from normals, and inferotemporal (0.750) for perimetric glaucoma from PPG. When using RNFL, inferior thickness had the highest AUC (0.922) in discriminating POAG from normal, while the same parameter had a lower AUC (0.813) in discriminating PPG from normal. The average thickness had maximum AUC (0.775) for discriminating POAG from PPG. For discriminating perimetric glaucoma from normals, inferotemporal GCL-IPL was the strongest GCL-IPL parameter (sensitivity 81.43%, specificity 77.96%), slightly below inferior RNFL thickness (sensitivity 87.85%, specificity 84.26%). The same parameters were sensitive in discriminating perimetric glaucoma from PPG (87.14% and 92.85%, respectively). However, their specificities were poor (56.43% for both). Conclusion: RNFL had better diagnostic ability than GCL-IPL for detecting PPG and perimetric glaucoma. However, the difference was small and may not be clinically relevant. abstract_id: PUBMED:35410511 Supernormal foveal photoreceptor density in Alport syndrome: A case report. Purpose: To investigate foveal photoreceptor configuration in Alport syndrome, a rare inherited disease characterized by Collagen IV dysfunction. Methods: Adaptive optics scanning laser ophthalmoscope (AOSLO) in vivo imaging of the foveal center and quantitative analysis of cone photoreceptor topography in a 17-year-old male patient with Alport syndrome presenting with absence of a foveal avascular zone (FAZ) and foveal hypoplasia in both eyes. Results: Cone density analysis based on AOSLO images revealed an unusual linear cone topography profile displaying supernormal densities within the fovea (z-scores up to + 3.57 and + 2.97 in right and left eyes, respectively). Conclusion: Foveal hypoplasia has previously been associated with normal or reduced cone density. Our observation is the first case of disease-related supernormal cone density within the foveola, shedding light upon the role of Collagen IV in foveal maturation.
Purpose: The aim of this study was to compare SD-OCT parameters between disc suspects and "pre-perimetric" glaucomatous discs classified on disc photos. Methods: Disc photos of suspicious discs with normal Humphrey visual fields (HVF) were graded as normal or pre-perimetric glaucomatous based on the consensus of three masked glaucoma specialists. RNFL and GCL-IPL maps of SD-OCT (Cirrus OCT) of these eyes were studied. Quantitative RNFL parameters were compared. Both groups were also compared with respect to parameters being classified as abnormal (at the 1% level), and the patterns of the GCL-IPL and NFL maps were assessed qualitatively and classified as normal or pre-perimetric glaucomatous by a masked glaucoma specialist. Results: The average and inferior RNFL thicknesses were decreased in pre-perimetric glaucomatous eyes compared to normal eyes (p < 0.01). The average, minimal, inferotemporal and inferior sector GCL-IPL thicknesses were decreased in pre-perimetric glaucomatous eyes (all P < 0.002). The highest AUC was for the inferior RNFL thickness (0.771), followed by average RNFL thickness (0.757). The sensitivity and specificity for any one abnormal RNFL parameter were 71.9% and 59.7%, and for GCL-IPL parameters 70% and 69.1%. The positive (PLR) and negative likelihood ratios (NLR) were 1.78 and 0.47 for RNFL and 2.26 and 0.43 for GCL-IPL parameters. For the qualitative assessment of RNFL and GCL-IPL maps, the sensitivity, specificity, PLR and NLR were 75%, 77.2%, 3.29, and 0.32, respectively. Conclusion: Pre-perimetric disc suspects had greater OCT changes compared to normal disc suspects. Qualitative assessment of RNFL and GCL-IPL maps had the highest discriminatory ability. abstract_id: PUBMED:34945121 The Diagnostic Value of Pulsar Perimetry, Optical Coherence Tomography, and Optical Coherence Tomography Angiography in Pre-Perimetric and Perimetric Glaucoma. The purpose of this article is to investigate the diagnostic value of Pulsar perimetry (PP), optical coherence tomography (OCT), and optical coherence tomography angiography (OCTA) in pre-perimetric glaucoma (PPG) and perimetric glaucoma (PG). This retrospective cross-sectional study included 202 eyes (145 eyes in the control group, 40 eyes in the PPG group, and 17 eyes in the PG group) from 105 subjects. The results were analyzed by paired t-tests and the Wilcoxon signed-rank test. The area under the curve (AUC), sensitivity, and specificity were used to evaluate the diagnostic accuracy. Pearson correlation was used to investigate the relationships of each parameter. The most sensitive parameters for differentiating the control group from the PPG group by using Pulsar, OCT, and OCTA were square loss variance of PP (AUC = 0.673, p < 0.001), superior ganglion cell complex thickness (AUC = 0.860, p < 0.001), and superior-hemi retina thickness (AUC = 0.817, p < 0.001). In the PG group, the most sensitive parameters were mean defect of PP (AUC = 0.885, p < 0.001), whole image of ganglion cell complex thickness (AUC = 0.847, p < 0.001), and perifoveal retina thickness (AUC = 0.833, p < 0.001). The mean defect of PP was significantly correlated with vascular parameters (radial peripapillary capillary (RPC), p = 0.008; vessel density of macular superficial vascular complex (VDms), p = 0.001; vessel density of macular deep vascular complex (VDmd), p = 0.002).
In conclusion, structural measurements using OCT were more sensitive than vascular measurements of OCTA and functional measurements of PP for PPG, while PP was more sensitive than the structural and vascular measurements for PG. The mean defect of PP was also shown to be highly correlated with the reduction of vessel density. abstract_id: PUBMED:31214904 Waist-to-Hip Ratio as Supernormal Stimuli: Effect of Contrapposto Pose and Viewing Angle. In women, the waist-to-hip ratio (WHR) is an indicator of attractiveness, health, youthfulness, and reproductive potential. In the current study, we hypothesized that viewing angle and body postures influence the attractiveness of these forms based on the view dependency of WHR stimuli (vdWHR). Using eye tracking, we quantified the number of fixations and dwell time on 3D images of a female avatar in two different poses (standing and contrapposto) from eight viewing angles in 45-degree increments of rotation. A total of 68 heterosexual individuals (25 men and 43 women) participated in the study. Results showed that the contrapposto pose was perceived as more attractive than the standing pose and that lower vdWHR sides of the stimuli attracted more first fixations, total fixations, and dwell time. Overall, the results supported that WHR is view-dependent and that vdWHRs lower than optimal WHRs are supernormal stimuli that may generate peak shifts in responding. Results are discussed in terms of the attractiveness of women's movements (gaits and dance) and augmented artistic presentations. abstract_id: PUBMED:35677697 Case Report: Supernormal Vascular Aging in Leningrad Siege Survivors. Age-related changes in the vascular system play an important role in the biological age and lifespan of a person and may be affected from an early age onward. One of the indicators of changes in the vascular system is arterial wall stiffness and its main measure, i.e., carotid-femoral pulse wave velocity (cfPWV). We examined arterial wall stiffness in a sample of 305 Leningrad Siege survivors to assess how hunger and stressful conditions during fetal development and early childhood affected the state of the cardiovascular system at a later age and what factors may neutralize the negative impact sustained in early childhood. Here, we present an evaluation of two unique patients with the supernormal vascular aging (SUPERNOVA) phenotype from this cohort and describe the congruence between hereditary resistance and practiced lifestyle that yields a slower biological aging rate. abstract_id: PUBMED:32808145 Case of lens-induced uveitis associated with supernormal flicker ERG amplitudes after cataract surgery. Purpose: To report our findings in a case of lens fragment-induced uveitis associated with supernormal flicker electroretinograms (ERGs) twenty months after cataract surgery. Methods: This is an observational case report. Full-field flicker ERGs were recorded with the RETeval system. Optical coherence tomography (OCT) and slit-lamp biomicroscopy were used to assess the uveitis during the follow-up period. Results: A 70-year-old man, who had undergone cataract surgery 20 months earlier, visited our hospital with a complaint of decreased vision in his right eye. Slit-lamp biomicroscopy revealed corneal edema and a lens fragment was detected in the inferior part of the anterior chamber. OCT showed cystoid macular edema, and flicker ERGs showed a marked increase in the amplitude and a delay in the implicit time in the right eye.
These abnormalities of the flicker ERGs improved gradually after removal of the lens fragment and application of topical anti-inflammatory medications. Conclusion: Our case of lens-induced uveitis had supernormal flicker ERG amplitudes. Clinicians should be aware that eyes with uveitis can have larger-than-normal ERG amplitudes. abstract_id: PUBMED:34195585 Supernormal lung function and risk of COPD: A contemporary population-based cohort study. Background: Investigation of the natural history of chronic obstructive pulmonary disease (COPD) has led to the recognition that individuals with higher-than-normal lung function may have lower risk of developing COPD. We tested the hypothesis that individuals with supernormal lung function have lower risk of COPD. Methods: We followed 108,246 adults from the Copenhagen General Population Study recruited between 2003 and 2015 for clinical COPD outcomes until 2018. A subset of 16,892 attended another examination approximately 10 years later, allowing investigation of lung function decline and COPD development (forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) < 0.70 and FEV1 < 80% predicted with chronic respiratory symptoms). Supernormal lung function was defined as FEV1 > upper limit of normal (ULN). Findings: At baseline, 3,944 (4%) had supernormal lung function, 91,938 (85%) normal lung function, and 12,364 (11%) below normal lung function. Individuals with baseline supernormal versus normal lung function had higher FEV1 decline but did not differ in FEV1/FVC decline. At 10 years, none of those with supernormal lung function had COPD, while 3% of those with normal lung function did. Early-life risk factors associated with COPD development and smoking exposure in different stages of life were less common in individuals with supernormal lung function. Compared to individuals with normal lung function, multivariable adjusted hazard ratios in those with supernormal lung function were 0.19 (95% confidence interval: 0.08-0.46) for acute obstructive lung disease hospitalisations, 0.56 (0.45-0.69) for pneumonia hospitalisations, and 0.81 (0.72-0.91) for all-cause mortality. Interpretation: Supernormal lung function is associated with lower risk of developing COPD. Funding: Herlev and Gentofte Hospital and Lundbeck Foundation. Answer: The study in abstract PUBMED:14578433 investigated whether a criterion of normal performance on an established perimeter generates a subgroup with supernormal perimetric performance. The research analyzed the right-eye perimetric results of 100 subjects who had visual acuities of 6/12 or better, no history of eye disease, and normal slit lamp biomicroscopic and ophthalmoscopic examinations. They performed test-retest visual field examinations on a Humphrey Field Analyzer (HFA) and on a custom frequency-doubling (FD) perimeter. The results showed that test-retest correlations for mean defect (MD) and pattern standard deviation (PSD) were moderate to low. Three subjects with HFA MDs in the lower 5% had similarly low MDs on retest, while no subject was common between the test and retest for the lower 5% of HFA PSD. The correlation between the HFA and FD test results was also moderate for MD and very low for PSD. Based on these correlations and Monte Carlo simulations, the study concluded that although a criterion of a normal MD may produce a subpopulation with supernormal perimetric performance, a criterion of a normal PSD is less likely to do so.
Additionally, a criterion on one test type is less likely to create a supernormal group on a different test type. The bias introduced into perimetric probability limits by these criteria is small. In summary, the study suggests that using a criterion of normal perimetric performance on an established perimeter does not necessarily generate a supernormal population, particularly when considering PSD as a criterion. The bias introduced by such a criterion is considered to be small, indicating that the normative databases used for interpreting individual perimetric results are likely to be reliable.
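PUBMED:14578433 estimates by Monte Carlo simulation how much bias a "normal performance" inclusion criterion introduces into normative probability limits. The sketch below reproduces the idea under simplifying assumptions (bivariate-normal test-retest scores in z units, a 5% exclusion criterion); the parameter values are illustrative and are not the study's, apart from test-retest correlations in the 0.65-0.82 range quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_fifth_percentile(r: float, n_subjects: int = 100,
                          n_sims: int = 20_000) -> float:
    """Build many normative cohorts screened on a correlated first test and
    return the mean 5th percentile of the retained retest scores.
    In z units, the unbiased 5th percentile is about -1.645."""
    cutoffs = np.empty(n_sims)
    for i in range(n_sims):
        # Correlated test/retest scores for one candidate cohort.
        test = rng.standard_normal(n_subjects)
        retest = r * test + np.sqrt(1.0 - r ** 2) * rng.standard_normal(n_subjects)
        # Screening criterion: exclude the worst 5% on the first test.
        keep = test > np.quantile(test, 0.05)
        cutoffs[i] = np.quantile(retest[keep], 0.05)
    return float(cutoffs.mean())

for r in (0.0, 0.4, 0.65, 0.82):
    print(f"r = {r:.2f}: mean 5th-percentile limit = {mean_fifth_percentile(r):+.3f}")
# The stronger the test-retest correlation, the further the limit is pulled
# above -1.645, i.e., the screened "normal" cohort looks mildly supernormal.
```

Run with a near-zero cross-test correlation (like the 0.05 reported between HFA and FD PSD), the same simulation shows almost no shift, which illustrates the paper's point that a criterion on one test type barely biases limits derived for a different test type.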
Instruction: Pre-morbid and outcome correlates of first episode mania with psychosis: is a distinction between schizoaffective and bipolar I disorder valid in the early phase of psychotic disorders? Abstracts: abstract_id: PUBMED:20434220 Pre-morbid and outcome correlates of first episode mania with psychosis: is a distinction between schizoaffective and bipolar I disorder valid in the early phase of psychotic disorders? Objectives: The validity of schizoaffective disorder (SA) diagnosis has long been a matter of controversy and its delineation from bipolar I disorders (BD) has often been questioned. However, most studies have been conducted in chronic samples and have therefore been biased towards patients with poorer outcome, which may have hampered the ability to identify significant differences between the two diagnoses. Methods: 108 subjects presenting with a first DSM-III-R manic episode with psychotic features were assessed at baseline and 12 months after stabilisation on symptoms and functional characteristics, and patients with BD (n=87) were compared with those with SA bipolar subtype (SAB) (n=21). Results: SAB patients had a higher prevalence of first-degree relatives with schizophrenia and a lower premorbid functional level. They had a longer prodromal phase, a longer duration of untreated psychosis and remained symptomatic for a longer period. They also had higher levels of positive symptoms in the acute manic phase; however, with two exceptions, the types of psychotic symptoms were similar in both groups. At stabilisation and 12 months after stabilisation, SA patients had higher levels of negative symptoms, with poorer functional level at 12 months. Conclusions: These data suggest SA is a valid diagnosis in the early phase of psychotic disorders, considering that it defines a subgroup of first episode psychotic mania patients with distinct characteristics compared to BD. While a dimensional approach to diagnosis may be more adapted to this phase of illness, SA disorder offers, in the context of categorical classifications, a useful intermediate category that reflects a clinical reality. abstract_id: PUBMED:34706439 Prodromal features in first-psychotic episodes of major affective and schizoaffective disorders. Background: Study aims were to analyze psychopathological details of prodromes leading to first-lifetime psychotic episodes and apply them to improve prediction of final diagnoses. Methods: Comprehensive records of subjects with final diagnoses of bipolar I (BD-I; n = 216), schizoaffective (SzAffD; n = 71), or psychotic major-depressive (MDD; n = 42) disorders in the Harvard-McLean First-Psychotic Episode Project were analyzed to identify psychopathological details of prodromes leading to first-lifetime episodes with psychotic features; their ability to predict final diagnoses was tested with multivariable logistic regression modeling (a schematic example of such a model follows this entry). Results: While held blind to final diagnoses, we identified 84 distinct psychopathological characteristics of prodromes to first-psychotic episodes, including perceptual disturbances, affective symptoms, sleep disturbances, onset rate, and duration. The prevalence of 19 factors appeared to differ among final diagnoses, and these factors were tested with multivariable regression modeling.
Seven features were significantly and independently more associated with a final diagnosis of MDD than BD-I: suicidal ideation, somatic delusions, anorexia, lack of insomnia, older presenting age, depressive symptoms, and lack of impulsivity; nine others were more associated with later SzAffD than with MDD or BD-I: lack of insomnia, homicidal behavior, lack of excitement, visual hallucinations, command hallucinations, longer prodrome, male sex, responding to internal stimuli, and younger age at presentation. Limitations: Historical-retrospective and prospective assessments may have misidentified some prodromal features, and subjects with a final psychotic-MDD diagnosis were relatively few. Conclusions: Psychopathological features identified during prodromes leading to first-episodes with psychotic features predicted and distinguished among final diagnoses of MDD, BD-I, and SzAffD. The findings add to growing impressions that early psychopathology has value in predicting final diagnoses of major affective and schizoaffective disorders. abstract_id: PUBMED:33043430 Long-term morbidity in major affective and schizoaffective disorders following hospitalization in first psychotic episodes. Objective: To evaluate morbidity during long-term follow-up with clinical treatment of affective and schizoaffective disorder subjects followed from hospitalization for first major psychotic episodes. Methods: We followed adult subjects systematically at regular intervals from hospitalization for first-lifetime episodes of major affective and schizoaffective disorders with initial psychotic features. We compiled % of days with morbidity types from detailed records and life charts, reviewed earliest antecedent morbidities, compared both with final diagnoses and initial presenting illness types, and evaluated morbidity risk factors with regression modeling. Findings: With final diagnoses of bipolar-I (BD-I, n = 216), schizoaffective (SzAffD, 71), and major depressive (MDD, 42) disorders, 329 subjects were followed for 4.47 [CI: 4.20-4.47] years. Initial episodes were mania (41.6%), mixed states (24.3%), depression (19.5%), or apparent nonaffective psychosis (14.6%). Antecedent morbidity presented 12.7 years before first episodes (ages: SzAffD ≤ BD-I < MDD). Long-term % of days ill ranked SzAffD (83.0%), MDD (57.8%), BD-I (45.0%). Morbidity differed by diagnosis and first-episode types, and was predicted by first episodes and suggested by antecedent illnesses. Long-term wellness was greater with BD-I diagnosis, first episode not mixed or psychotic nonaffective, rapid onset, and being older at first antecedents, but not follow-up duration. Conclusions: Initially psychotic BD-I, SzAffD, or MDD subjects followed for 4.47 years from first hospitalization experienced much illness, especially depressive or dysthymic, despite ongoing clinical treatment. Antecedent symptoms arose years before index first episodes; antecedents and first episode types predicted types and amounts of long-term morbidity, which ranked: SzAffD > MDD > BD-I. abstract_id: PUBMED:3813808 Schizoaffective psychosis. II. Manic, bipolar, and depressive subtypes. Three schizoaffective subtypes (manic, bipolar, and depressive) were compared across multiple baseline (demographic, premorbid, morbid) and long-term outcome dimensions. Though the subtypes were comparable at baseline, the patients with depressive schizoaffective disorder scored consistently better at follow-up, although none of these differences was statistically significant.
Results failed to support the validity of such subtyping in schizoaffective disorder, at least in predominantly chronic populations. abstract_id: PUBMED:3101647 Schizoaffective psychosis. I. Comparative long-term outcome. Patients from the Chestnut Lodge (Rockville, Md) follow-up study with schizoaffective (SA) psychosis (n = 68) were compared with patients with DSM-III schizophrenia (S, n = 163), bipolar disorder (n = 19), and unipolar disorder (n = 44) on multiple premorbid, morbid, and outcome dimensions. This study required that patients with SA disorder satisfy DSM-III criteria for both S and affective disorder (minus mutual exclusionary criteria). The SA cohort demographic and premorbid profile paralleled that of the cohort with unipolar disorder. At follow-up, however, the profile of SA psychosis paralleled that of S, with no significant differences between these patients on virtually all outcome measures. Results suggest that among long-term inpatient samples, SA psychosis as defined herein is closer to S than to affective disorder. abstract_id: PUBMED:1897617 Outcome of schizoaffective disorder at two long-term follow-ups: comparisons with outcome of schizophrenia and affective disorders. Objective: This research assessed whether the outcome of schizoaffective disorder is more similar to that of schizophrenia or that of affective disorders. Method: The authors conducted a prospective follow-up study of 101 schizoaffective, schizophrenic, bipolar manic, and depressed patients assessed at three times: during hospitalization and 2 and 4-5 years later. The follow-up test battery involved detailed assessment of social functioning, work performance, symptoms, posthospital treatment, and rehospitalization. Results: Outcome for schizoaffective patients 4-5 years after hospitalization differed significantly from that for patients with unipolar depression. However, the differences between schizoaffective and bipolar manic patients were more equivocal. Unlike the patients with bipolar disorder, only a limited number of patients with schizoaffective disorder showed complete recovery in all areas throughout the year preceding the 2-year follow-up and the year preceding the 4- to 5-year follow-up. The differences in outcome between schizoaffective and schizophrenic patients were also mixed. These two groups showed some similarities in outcome, but there were fewer schizoaffective than schizophrenic patients with uniformly poor outcome in all areas. Conclusions: Overall, schizoaffective patients showed some similarities to both schizophrenic and bipolar manic patients. Schizoaffective patients had somewhat better overall posthospital functioning than patients with schizophrenia, somewhat poorer functioning than bipolar manic patients, and significantly poorer functioning than patients with unipolar depression. The data suggest that when mood-incongruent, schizophrenic-like psychotic symptoms are present in the acute phase, they predict considerable difficulty in outcome, even when affective syndromes are also present, as in schizoaffective disorder. It is likely that schizoaffective disorder is not just a simple variety of affective disorder. abstract_id: PUBMED:34400122 Clozapine in patients with schizoaffective disorder: A systematic review. Schizoaffective disorder is defined by the appearance of positive psychotic symptomatology together with affective features, yet it remains a controversial nosologic entity on whose definition and diagnostic criteria it has proved difficult to agree.
These conceptual differences have made it challenging to study effective therapeutic measures and have limited the availability of data in the current literature, resulting in the extrapolation of clinical guidelines and recommendations initially established for patients with schizophrenia or bipolar disorder. The current study aimed to systematically search and summarize the published evidence to date about the use of clozapine in patients with schizoaffective disorder. Seven studies were identified; they are heterogeneous in design and methodology, and include samples in which schizoaffective patients are mixed with patients with bipolar disorder or schizophrenia. The evidence was summarized in both tabular and narrative form, suggesting that clozapine may be an effective treatment for both psychotic and affective symptoms, in the acute as well as the maintenance phase. abstract_id: PUBMED:3064281 Outcome studies of schizoaffective disorders. Outcome studies of schizoaffective disorder have taught us much about the long-term consequences of the syndrome, and they have provided some indication of the potential usefulness of maintaining "schizoaffective disorder" as a diagnostic category separate from schizophrenia and major affective disorder. In a review of outcome studies that compared schizoaffective patients to schizophrenic or affective patients, we found consistent results despite wide variations in diagnostic criteria, length of follow-up, and demographic characteristics. Global measures of outcome show that schizophrenic patients are more impaired than schizoaffective patients, who in turn are more impaired than affective patients. However, studies of specific outcome domains such as symptomatology, social functioning, and occupational functioning indicate that schizoaffective disorder is heterogeneous and that subtyping by polarity (e.g., schizoaffective-manic vs. schizoaffective-depressed) accounts for some of this variance. The consistency of these findings in the face of methodological variability suggests that it would be premature to classify schizoaffective patients with schizophrenia or affective disorder, but also that strict diagnostic criteria for schizoaffective disorder are at best preliminary and need to be thoroughly validated. abstract_id: PUBMED:8878328 The Suffolk County Mental Health Project: demographic, pre-morbid and clinical correlates of 6-month outcome. The diagnostic specificity and predictive utility of the classical prognostic indicators in schizophrenia were examined in psychotic patients enrolled in the Suffolk County Mental Health Project. First-admission psychotic patients with schizophrenia (N = 96), major depression (N = 42), and bipolar disorder (N = 64) drawn from 10 facilities in Suffolk County, New York, were assessed during their initial hospitalization and at 6-month follow-up. Longitudinal consensus diagnoses were determined after the 6-month interview. The diagnostic groups shared similar background characteristics, but schizophrenics had poorer pre-morbid adjustment, longer periods of psychosis before hospitalization and more negative symptoms initially. Except for rehospitalization, schizophrenics had the worst and bipolars the best functioning at follow-up. Among the classical prognostic indicators, the best predictor of 6-month outcome for each diagnostic group was premorbid functioning. abstract_id: PUBMED:23528796 Differences between first episode schizophrenia and schizoaffective disorder.
Background: The diagnostic and clinical overlap between schizophrenia and schizoaffective disorder is an important nosological issue in psychiatry that is yet to be resolved. The aim of this study was to compare the clinical and functional characteristics of an epidemiological treated cohort of first episode patients with an 18-month discharge diagnosis of schizophrenia (FES) or schizoaffective disorder (FESA). Methods: This study was part of the larger First Episode Psychosis Outcome Study (FEPOS), which involved a medical file audit of all 786 patients treated at the Early Psychosis Prevention and Intervention Centre between 1998 and 2000. Of this cohort, 283 patients had an 18-month discharge diagnosis of FES and 64 had a diagnosis of FESA. DSM-IV diagnoses and clinical and functional ratings were derived and validated by two consultant psychiatrists. Results: Compared to FES patients, those with FESA were significantly more likely to have a later age of onset (p=.004), longer prodrome (p=.020), and a longer duration of untreated psychosis (p<.001). At service entry, FESA patients presented with a higher illness severity (p=.020), largely due to the presence of more severe manic symptoms (p<.001). FESA patients also had a greater number of subsequent inpatient admissions (p=.017), more severe depressive symptoms (p=.011), and higher levels of functioning at discharge. Discussion: The findings support the notion that these might be considered two discernible disorders; however, further research is required to ascertain the ways and extent to which these disorders are discriminable at presentation and over time. Answer: The distinction between schizoaffective (SA) and bipolar I disorder (BD) in the early phase of psychotic disorders has been a subject of debate. However, studies suggest that there are indeed differences between the two diagnoses, particularly in the context of first episode mania with psychosis. Patients with schizoaffective bipolar subtype (SAB) were found to have distinct characteristics compared to those with BD. SAB patients had a higher prevalence of first-degree relatives with schizophrenia, a lower premorbid functional level, a longer prodromal phase, a longer duration of untreated psychosis, and remained symptomatic for a longer period. They also had higher levels of positive symptoms in the acute manic phase and, at stabilization and 12 months after stabilization, had higher levels of negative symptoms with poorer functional levels at 12 months (PUBMED:20434220). Furthermore, prodromal features leading to first-psychotic episodes were found to predict and distinguish among final diagnoses of major depressive disorder (MDD), BD-I, and schizoaffective disorder (SzAffD). Specific psychopathological features during prodromes were associated with later SzAffD compared to MDD or BD-I, including visual hallucinations, command hallucinations, longer prodrome, and responding to internal stimuli (PUBMED:34706439). Long-term morbidity following hospitalization for first psychotic episodes also differed by diagnosis. Schizoaffective disorder (SzAffD) patients experienced a higher percentage of days ill compared to BD-I and MDD patients, with SzAffD ranking highest in long-term morbidity. Antecedent symptoms and first episode types predicted types and amounts of long-term morbidity (PUBMED:33043430).
Overall, these findings suggest that schizoaffective disorder can be considered a valid diagnosis in the early phase of psychotic disorders, as it defines a subgroup of first episode psychotic mania patients with distinct characteristics compared to BD. While a dimensional approach to diagnosis may be more adapted to this phase of illness, the categorical classification of SA disorder offers a useful intermediate category that reflects clinical reality (PUBMED:20434220).
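Several abstracts in this entry (e.g., PUBMED:34706439) test whether prodromal features independently predict the final diagnosis using multivariable logistic regression. The following sketch shows the general shape of such a model on synthetic data; the feature names echo predictors mentioned above, but the data, coefficients, and resulting odds ratios are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
features = ["visual_hallucinations", "command_hallucinations", "long_prodrome"]

# Synthetic binary indicators (1 = feature present during the prodrome).
X = rng.integers(0, 2, size=(n, len(features)))

# Synthetic outcome: 1 = schizoaffective, 0 = bipolar-I, with each feature
# shifting the log-odds upward by an assumed amount.
logits = -1.0 + X @ np.array([0.9, 1.1, 0.7])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    # exp(coefficient) is the adjusted odds ratio for that feature.
    print(f"{name}: adjusted OR ~ {np.exp(coef):.2f}")
```

"Independently associated" in the abstracts corresponds to each coefficient being estimated while the other features in the model are held fixed, which is why single-feature prevalences and multivariable results can disagree.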
Instruction: Do farming exposures cause or prevent asthma? Abstracts: abstract_id: PUBMED:33189120 Effect of Farming on Asthma. In this review, we discuss an immunobiology model of farm exposure and its protective effect against asthma. Unraveling the protective effect of farming exposure could help develop novel strategies to prevent asthma. Asthma is a chronic airway inflammation that causes coughing, wheezing, chest tightness or shortness of breath. The reasons for the increase in the prevalence of asthma worldwide are still unclear, but the increase has been hypothesized to be attributable to the westernization/urbanization of rural regions, resulting in the loss of the rural farming environment. In this review, we discuss the effect of environmental factors, specifically farming, on the risk of asthma in children. Here, we will summarize the main findings of 27 studies related to 11 different cohorts. Several studies have shown a preventive effect of traditional farming on the prevalence and incidence of asthma in childhood. Furthermore, consumption of unpasteurized cow's milk and exposure to farm animals, as well as fodder, have been shown to have a protective effect against asthma. The precise mechanism of the protective effect is still unclear. It is assumed that maternal/childhood exposures to farm animals result in higher microbial exposures, through which the protective effect might be mediated. Also, consumption of unpasteurized milk (by the mother during pregnancy or by children in early childhood) can modulate cytokine production patterns, which could be responsible for the observed protective effect. Conclusion: This review provides evidence of the protective effect of the farming environment, i.e., exposure to farm animals and their fodder as well as consumption of unpasteurized cow's milk, suggesting that novel strategies could be developed to prevent asthma. abstract_id: PUBMED:29863827 Exposures Related to House Dust Microbiota in a U.S. Farming Population. Background: Environmental factors can influence the house dust microbiota, which may impact health outcomes. Little is known about how farming exposures impact the indoor microbiota. Objective: We aimed to identify exposures related to bacterial communities in house dust in a U.S. farming population. Methods: We used 16S rRNA amplicon sequencing to characterize bacterial communities in vacuumed dust samples from the bedrooms of a subset of 879 households of farmers and farmers' spouses enrolled in the Agricultural Lung Health Study (ALHS), a case-control study of asthma nested within the Agricultural Health Study (AHS) in North Carolina and Iowa. Information on current farming (past 12 mo), including both crop and animal farming, and other potential microbial sources was obtained via questionnaires. We used linear regression to evaluate associations between exposures and bacterial diversity within each sample (a toy example of one such within-sample metric follows the last abstract in this entry); analysis of similarity (ANOSIM) and permutational multivariate analysis of variance (PERMANOVA) to identify exposures related to diversity between samples; and analysis of composition of microbiome to examine whether exposures related to diversity were also related to differential abundance of specific operational taxonomic units (OTUs). Results: Current farming was positively associated with bacterial diversity in house dust, with or without adjustment for nonfarm exposures related to diversity, including presence of indoor pets, home condition, and season of dust collection.
Many taxa exhibited differential abundance related to farming. Some taxa in the phyla Chloroflexi and Verrucomicrobia were associated [false discovery rate (FDR)<0.05] with farming but not with other nonfarm factors. Many taxa correlated with the concentration of endotoxin in house dust, commonly studied as a general marker of exposure to the farming environment. Conclusions: In this farming population, house dust microbiota differed by current farming status. Understanding the determinants of the indoor microbiota is the first step toward understanding potential relationships with health outcomes. https://doi.org/10.1289/EHP3145. abstract_id: PUBMED:25086344 Influence of farming exposure on the development of asthma and asthma-like symptoms. Based upon age and type of farming exposures, a wide range of studies demonstrate either protective or deleterious effects of the farming environment on asthma. In this review, we highlight key studies supporting the concept that farming exposure protects children from asthma and atopy, based on studies performed largely in European pediatric cohorts. Various types of farming in certain regions appear to have a greater effect on asthma protection, as does the consumption of unpasteurized milk. In the United States, where concentrated animal feeding operations (CAFOs) are more common, asthma is increased especially in children exposed to swine CAFOs, whereas rates of atopy and allergy are lower in these children. We also review studies evaluating the role of farming exposures both as a child and/or as an adult on asthma seen in adults. The importance of microbes in farming environments and the contribution of various components of the innate immune system, including toll-like receptors, to the underlying mechanisms of asthma related to farming exposures are also reviewed. abstract_id: PUBMED:34626494 Farming lifestyle and human milk: Modulation of the infant microbiome and protection against allergy. There has been an increased prevalence of several allergic manifestations such as food allergy, atopic eczema, allergic rhinitis and asthma. Several explanations have been proposed for why this has occurred, but one of the main contributing factors may be the gradual loss of microbial exposures over time in regions where allergy is prevalent. Such exposures occur in individuals who practise a traditional farming lifestyle and are protected against allergy. Infant consumption of human milk, more commonly practised in these farming communities, may provide an alternative in combatting allergy, as it is known to be beneficial to infant health. In this review, we cover human milk and its role in shaping the gut microbiome by promoting the growth of beneficial bacteria like Bifidobacterium, as well as the downstream impact that the farming lifestyle, human milk and Bifidobacterium have on developing infant immunity. abstract_id: PUBMED:34922896 Adverse early-life environmental exposures and their repercussions on adult respiratory health. Objective: To review the literature on environmental problems in early life that impact the respiratory health of adults. Sources: Non-systematic review including articles in English. Search filters were not used in relation to the publication date, but the authors selected mainly publications from the last five years.
Summary Of The Findings: In this review, the authors present the exposure pathways and how damage occurs depending on the child's stage of development; the authors describe the main environmental pollutants - tobacco smoke, particulate matter, air pollution associated with traffic, adverse childhood experiences and socioeconomic status; and the authors present studies that evaluated the repercussions on the respiratory system of adults resulting from exposure to adverse environmental factors in childhood, such as an increased incidence of chronic obstructive pulmonary disease (COPD), asthma and allergies, and a decline in lung function. The authors emphasize that evidence demonstrates that adult respiratory diseases almost always have their origins in early life. Finally, the authors emphasize that health professionals must recognize, diagnose, monitor, and prevent toxic exposure among children and women. Conclusion: The authors conclude that it is necessary to recognize risk factors and intervene in the period of greatest vulnerability to the occurrence of harmful effects of environmental exposures, in order to prevent, delay the onset of, or modify the progression of lung disease throughout life and into adulthood. abstract_id: PUBMED:29133137 A systematic review of the public health risks of bioaerosols from intensive farming. Background: Population growth, increasing food demands, and economic efficiency have been major driving forces behind farming intensification over recent decades. However, biological emissions (bioaerosols) from intensified livestock farming may have the potential to impact human health. Bioaerosols from intensive livestock farming have been reported to cause symptoms and/or illnesses in occupational settings, and there is concern about the potential health effects on people who live near the intensive farms. As well as adverse health effects, some potential beneficial effects have been attributed to farm exposures in early life. The aim of the study was to undertake a systematic review to evaluate the potential for adverse health outcomes in populations living near intensive livestock farms. Material And Methods: Two electronic databases (PubMed and Scopus) and bibliographies were searched for studies reporting associations between health outcomes and bioaerosol emissions related to intensive farming published between January 1960 and April 2017, including both occupational and community studies. Two authors independently assessed studies for inclusion and extracted data. Risk of bias was assessed using a customized score. Results: 38 health studies met the inclusion criteria (21 occupational studies and 1 community study measured bioaerosol concentrations; 16 community studies used a proxy measure for exposure). The majority of occupational studies found a negative impact on respiratory health outcomes and increases in inflammatory biomarkers among farm workers exposed to bioaerosols. Studies investigating the health of communities living near intensive farms had mixed findings. All four studies of asthma in children found increased reported asthma prevalence among children living or attending schools near an intensive farm. Papers principally investigated respiratory and immune system outcomes. Conclusions: The review indicated a potential impact of intensive farming on childhood respiratory health, based on a small number of studies using self-reported outcomes, but supported by findings from occupational studies.
Further research is needed to measure and monitor exposure in community settings and relate this to objectively measured health outcomes. abstract_id: PUBMED:35363149 Development and Application of an Open Tool for Sharing and Analyzing Integrated Clinical and Environmental Exposures Data: Asthma Use Case. Background: The Integrated Clinical and Environmental Exposures Service (ICEES) serves as an open-source, disease-agnostic, regulatory-compliant framework and approach for openly exposing and exploring clinical data that have been integrated at the patient level with a variety of environmental exposures data. ICEES is equipped with tools to support basic statistical exploration of the integrated data in a completely open manner. Objective: This study aims to further develop and apply ICEES as a novel tool for openly exposing and exploring integrated clinical and environmental data. We focus on an asthma use case. Methods: We queried the ICEES open application programming interface (OpenAPI) using a functionality that supports chi-square tests between feature variables and a primary outcome measure, with a Bonferroni correction for multiple comparisons (α=.001). We focused on 2 primary outcomes that are indicative of asthma exacerbations: annual emergency department (ED) or inpatient visits for respiratory issues; and annual prescriptions for prednisone. Results: Of the 157,410 patients within the asthma cohort, 26,332 (16.73%) had 1 or more annual ED or inpatient visits for respiratory issues, and 17,056 (10.84%) had 1 or more annual prescriptions for prednisone. We found that close proximity to a major roadway or highway, exposure to high levels of particulate matter ≤2.5 μm (PM2.5) or ozone, female sex, Caucasian race, low residential density, lack of health insurance, and low household income were significantly associated with asthma exacerbations (P<.001). Asthma exacerbations did not vary by rural versus urban residence. Moreover, the results were largely consistent across outcome measures. Conclusions: Our results demonstrate that the open-source ICEES can be used to replicate and extend published findings on factors that influence asthma exacerbations. As a disease-agnostic, open-source approach for integrating, exposing, and exploring patient-level clinical and environmental exposures data, we believe that ICEES will have broad adoption by other institutions and application in environmental health and other biomedical fields. abstract_id: PUBMED:34912233 Mmp12 Is Upregulated by in utero Second-Hand Smoke Exposures and Is a Key Factor Contributing to Aggravated Lung Responses in Adult Emphysema, Asthma, and Lung Cancer Mouse Models. Matrix metalloproteinase-12 (Mmp12) is upregulated by cigarette smoke (CS) and plays a critical role in extracellular matrix remodeling, a key mechanism involved in physiological repair processes, and in the pathogenesis of emphysema, asthma, and lung cancer. While cigarette smoking is associated with the development of chronic obstructive pulmonary diseases (COPD) and lung cancer, in utero exposures to CS and second-hand smoke (SHS) are associated with asthma development in the offspring. SHS is an indoor air pollutant that causes known adverse health effects; however, the mechanisms by which in utero SHS exposures predispose to adult lung diseases, including COPD, asthma, and lung cancer, are poorly understood. In this study, we tested the hypothesis that in utero SHS exposure aggravates adult-induced emphysema, asthma, and lung cancer. 
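A note on the statistics in the ICEES abstract above (PUBMED:35363149): the core operation is a chi-square test of each feature variable against a binary outcome, with a Bonferroni-adjusted significance threshold. A minimal sketch on hypothetical data (this does not call the actual ICEES OpenAPI) could look like this:

    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(1)
    n = 5000
    outcome = rng.integers(0, 2, n)  # 1 = annual ED/inpatient respiratory visit (toy)
    features = {                     # hypothetical binary exposure features
        "near_major_roadway": rng.integers(0, 2, n),
        "high_pm25": rng.integers(0, 2, n),
        "no_health_insurance": rng.integers(0, 2, n),
    }
    alpha = 0.05 / len(features)     # Bonferroni correction over the tests run here
    for name, x in features.items():
        table = pd.crosstab(x, outcome)            # 2x2 contingency table
        chi2, p, dof, _ = chi2_contingency(table)
        verdict = "significant" if p < alpha else "not significant"
        print(f"{name}: chi2={chi2:.2f}, p={p:.3g} -> {verdict} at alpha={alpha:.4f}")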
Methods: Pregnant BALB/c mice were exposed from gestational days 6-19 to either 3 or 10 mg/m³ of SHS or filtered air. At 10, 11, 16, or 17 weeks of age, female offspring were treated with either saline for controls, elastase to induce emphysema, house-dust mite (HDM) to initiate asthma, or urethane to promote lung cancer. At sacrifice, specific disease-related lung responses including lung function, inflammation, and gene and protein expression were assessed. Results: In the elastase-induced emphysema model, in utero SHS-exposed mice had significantly enlarged airspaces and up-regulated expression of Mmp12 (10.3-fold compared to air-elastase controls). In the HDM-induced asthma model, in utero exposures to SHS produced eosinophilic lung inflammation and potentiated Mmp12 gene expression (5.7-fold compared to air-HDM controls). In the lung cancer model, in utero exposures to SHS significantly increased the number of intrapulmonary metastases at 58 weeks of age and up-regulated Mmp12 (9.3-fold compared to air-urethane controls). In all lung disease models, Mmp12 upregulation was supported at the protein level. Conclusion: Our findings revealed that in utero SHS exposures exacerbate lung responses to adult-induced emphysema, asthma, and lung cancer. Our data show that MMP12 is up-regulated at the gene and protein levels in three distinct adult lung disease models following in utero SHS exposures, suggesting that MMP12 is central to in utero SHS-aggravated lung responses. abstract_id: PUBMED:15115863 Do farming exposures cause or prevent asthma? Results from a study of adult Norwegian farmers. Background: A protective effect of endotoxin exposure on atopy and asthma in farmers' children has been postulated. Studies of adult farmers have shown conflicting results but often lack exposure data. The prevalence of asthma in farmers with different exposure levels to microbial agents and irritant gases was compared. Methods: Atopy was defined as a positive response to multiple radioallergosorbent tests (RAST) with a panel of 10 common respiratory allergens, and asthma was ascertained by a questionnaire using a stratified sample (n = 2169) of a farming population from south-eastern Norway. Exposure of farmers to total dust, fungal spores, bacteria, endotoxins, and ammonia was assessed by exposure measurements. Results: The prevalence of asthma was 3.7% for physician-diagnosed asthma and 2.7% for current asthma. The prevalence of atopy was 14%, but most asthmatic subjects were non-atopic (80%). Compared with farmers without livestock, (1) asthma was significantly higher in cattle farmers (OR(adj) 1.8, 95% CI 1.1 to 2.8) and pig farmers (OR(adj) 1.6, 95% CI 1.0 to 2.5), (2) non-atopic asthma was significantly higher in pig farmers (OR(adj) 2.0, 95% CI 1.2 to 3.3) and in farmers with two or more types of livestock (OR(adj) 1.9, 95% CI 1.1 to 3.3), and (3) atopic asthma was less common in farmers with two or more types of livestock (OR(adj) 0.32, 95% CI 0.11 to 0.97). Exposure to endotoxins, fungal spores, and ammonia was positively associated with non-atopic asthma and negatively associated with atopic asthma. No associations were found with atopy. Conclusions: Exposure to endotoxins and fungal spores appears to have a protective effect on atopic asthma but may induce non-atopic asthma in farmers. abstract_id: PUBMED:34707611 Traditional Farming Lifestyle in Old Order Mennonites Modulates Human Milk Composition.
Background: In addition to farming exposures in childhood, maternal farming exposures provide strong protection against allergic disease in their children; however, the effect of farming lifestyle on human milk (HM) composition is unknown. Objective: This study aims to characterize the maternal immune effects of the Old Order Mennonite (OOM) traditional farming lifestyle when compared with Rochester (ROC) families at higher risk for asthma and allergic diseases, using HM as a proxy. Methods: HM samples collected at median 2 months of lactation from 52 OOM and 29 ROC mothers were assayed for IgA1 and IgA2 antibodies, cytokines, endotoxin, HM oligosaccharides (HMOs), and targeted fatty acid (FA) metabolites. Development of early childhood atopic diseases in children by 3 years of age was assessed. In addition to group comparisons, systems-level network analysis was performed to identify communities of multiple HM factors in the ROC and OOM lifestyles. Results: HM contains IgA1 and IgA2 antibodies broadly recognizing food, inhalant, and bacterial antigens. OOM HM has significantly higher levels of IgA to peanut, ovalbumin, dust mites, and Streptococcus equii, as well as TGF-β2 and IFN-λ3. A strong correlation occurred between maternal antibiotic use and levels of several HMOs. Path-based analysis of HMOs shows lower activity in the path involving lactoneohexaose (LNH) in the OOM, as well as higher levels of lacto-N-neotetraose (LNnT) and two long-chain FAs, C-18OH (stearic acid) and C-23OH (tricosanoic acid), compared with Rochester HM. OOM and Rochester milk formed five different clusters; e.g., butyrate production was associated with a Prevotellaceae, Veillonellaceae, and Micrococcaceae cluster. Development of atopic disease in early childhood was more common in Rochester and associated with lower levels of total IgA, IgA2 to dust mite, as well as of TSLP. Conclusion: A traditional agrarian lifestyle and antibiotic use are strong regulators of maternally derived immune and metabolic factors, which may have downstream implications for postnatal developmental programming of the infant's gut microbiome and immune system. Answer: Farming exposures have been associated with both protective and deleterious effects on asthma, depending on various factors such as the type of farming, age of exposure, and specific environmental factors involved. Several studies have shown a preventive effect of traditional farming on the prevalence and incidence of asthma in childhood. Exposure to farm animals, their fodder, and the consumption of unpasteurized cow's milk have been identified as protective factors against asthma (PUBMED:33189120). This protective effect is thought to be mediated by higher microbial exposures from farm animals and the modulation of cytokine production patterns by unpasteurized milk consumed during pregnancy or early childhood (PUBMED:33189120). In contrast, certain types of farming, particularly in the United States where concentrated animal feeding operations (CAFOs) are more common, have been associated with increased asthma in children, especially those exposed to swine CAFOs. However, rates of atopy and allergy are lower in these children (PUBMED:25086344). The house dust microbiota in a U.S. farming population has been found to differ by current farming status, with current farming positively associated with bacterial diversity in house dust. This suggests that farming exposures impact the indoor microbiota, which may have implications for health outcomes (PUBMED:29863827).
In adult Norwegian farmers, exposure to endotoxins and fungal spores appears to have a protective effect on atopic asthma but may induce non-atopic asthma. The prevalence of asthma was higher in cattle and pig farmers compared to farmers without livestock (PUBMED:15115863). Overall, the relationship between farming exposures and asthma is complex and influenced by a variety of factors. While certain aspects of farming life may offer protection against asthma, other exposures may increase the risk, particularly for non-atopic asthma. Understanding these relationships is crucial for developing strategies to prevent asthma and improve respiratory health (PUBMED:33189120, PUBMED:25086344, PUBMED:29863827, PUBMED:15115863).
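For readers who want to see the arithmetic behind estimates like those quoted from the Norwegian study (PUBMED:15115863), here is a minimal sketch of an odds ratio with a Wald 95% CI from a 2x2 table. The counts are invented, and the published values were adjusted ORs from a regression model, so this shows only the unadjusted calculation:

    import math

    # Hypothetical 2x2 table: rows = livestock-exposed / unexposed farmers,
    # columns = asthma / no asthma (counts are illustrative only)
    a, b = 40, 960    # exposed: asthma / no asthma
    c, d = 25, 1144   # unexposed: asthma / no asthma

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f} (95% CI {low:.2f} to {high:.2f})")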
Instruction: Does longstanding nicotine exposure impair bone healing and osseointegration? Abstracts: abstract_id: PUBMED:24353962 Nicotine Increases Osteoblast Activity of Induced Bone Marrow Stromal Cells in a Dose-Dependent Manner: An in vitro Cell Culture Experiment. Previous studies by our group showed that nicotine delivered via a transdermal nicotine patch significantly enhanced posterior spinal fusion rates in rabbits. Nicotine transdermal patches provide a steady serum level; there may be a dose-dependent effect of nicotine on posterior spinal fusion. In an in vitro cell culture model of rabbit bone marrow-derived osteoblast-like cells, cells were exposed to different concentrations of nicotine (0, 20, 40, 80 ng/mL and 10, 100, 250 μg/mL). Wells were stained with an alkaline phosphatase (ALP) staining kit to determine ALP enzyme activity. Cells were stained with Von Kossa for mineralization. A two-way analysis of variance (ANOVA) using dose and time as variables showed significant differences among groups; post hoc analysis showed that the 100-μg/mL dose of nicotine significantly enhanced ALP activity over controls. A one-way ANOVA using dose as the variable showed that the 100- and 250-μg/mL doses had significantly greater mineralization than controls. Dose-response analysis revealed a statistically significant effect of nicotine dose on ALP activity and Von Kossa activity. The effects of nicotine on spinal fusion may be dose-dependent and due to stimulation of osteoblastic activity. Nicotine may not be responsible for the inhibited bone healing observed in smokers. abstract_id: PUBMED:23802681 Nicotine affects cutaneous wound healing in stressed mice. Stress is an important condition of modern life. Nicotine addiction can modulate the physiological response to stress. Cutaneous healing is a complex process resulting in scar formation, which can be delayed by stress. Therefore, the aim of this study was to investigate the effects of nicotine administration on cutaneous wound healing in chronically stressed mice. Male mice were submitted to rotational stress, whereas control animals were not subjected to stress. These stressed and control animals were treated with a transdermal nicotine patch that was changed every day. A full-thickness excisional lesion was also generated, and 14 days later, lesions had recovered. However, the Stress + Nicotine group presented a delay in wound contraction. These wounds showed a decrease in inflammatory cell infiltration and lower expression of transforming growth factor-β (TGF-β), whereas there was an increase in angiogenesis and tumor necrosis factor-α (TNF-α) expression. In vitro fibroblast migration was also impaired by the nicotine treatment of stressed-stimulated cells. In conclusion, nicotine administration potentiates the delay in wound closure observed in mice submitted to stress. abstract_id: PUBMED:19660042 Effect of smoking, abstention, and nicotine patch on epidermal healing and collagenase in skin transudate. Delayed wound healing may explain postoperative tissue and wound dehiscence in smokers, but the effects of smoking and smoking cessation on the cellular mechanisms remain unclear. Suction blisters were raised in 48 smokers and 30 never smokers. The fluid was retrieved and the epidermal roof was excised. Transepidermal water loss (TEWL) was measured after 2, 4, and 7 days. Then, the smokers were randomized to continuous smoking or abstinence with a transdermal nicotine patch or a placebo by concealed allocation. 
The sequence was repeated after 4, 8, and 12 weeks in all smokers and abstainers and in 6 never smokers. Matrix metalloproteinase (MMP)-8 and MMP-1 levels in suction blister fluid were assessed by an enzyme-linked immunosorbent assay. Random-effects models for repeated measurements were applied, and p ≤ 0.05 was considered significant. One week after wounding, the TEWL was 17.20 (14.47-19.92) g/cm²/hour (mean, 95% CI) in smokers and 13.89 (9.46-18.33) in never smokers (p<0.01). In abstinent smokers TEWL was 18.95 (15.20-22.70) (p<0.01, when compared with smokers). In smokers, MMP-8 was 36.4 (24.3-48.5) ng/mL (mean, 95% CI) and 15.2 (1.4-30.2) ng/mL in never smokers (p<0.01). Abstinent smokers' MMP-8 level was 21.2 ng/mL (6.6-43.0) (p=0.02, when compared with smokers). MMP-1 was unaffected by smoking and abstention. The transdermal nicotine patch did not affect any parameter. We conclude that smoking attenuates epidermal healing and may enhance extracellular matrix degradation. Three months of abstinence from smoking does not restore epidermal healing, whereas 4 weeks of abstinence normalizes suction blister MMP-8 levels. These findings suggest sustained impaired wound healing in smokers and potential reversibility of extracellular matrix degradation. abstract_id: PUBMED:18647709 Effects of fetal and neonatal exposure to nicotine on blood pressure and perivascular adipose tissue function in adult life. In Wistar rats, maternal exposure to nicotine was shown to impair the inhibitory function of perivascular adipose tissue on vascular contractility in the aorta of the offspring. It is not known whether an impairment of perivascular adipose tissue function occurs in smaller arteries, and whether the control of blood pressure is affected. Here we studied the blood pressure effects and the alteration of perivascular adipose tissue function in mesenteric arteries of the offspring born to Wistar-Kyoto rat (WKY) dams exposed to nicotine. Nulliparous female WKY rats were given either nicotine bitartrate (1 mg/kg/day) or saline (vehicle) by subcutaneous injection 2 weeks prior to mating, during pregnancy and until weaning. Blood pressure of the offspring and functional studies with mesenteric arteries were conducted. Tissue samples (thoracic aorta, mesenteric arteries, and kidneys) were collected for morphological and immunohistochemical examinations. Blood pressure increased from 14 weeks of age onwards in the offspring born to nicotine-exposed dams. Nicotine-exposed offspring showed a significant increase in the number of brown adipocytes in aortic perivascular adipose tissue relative to control offspring. In mesenteric arteries from control offspring, contractile responses induced by phenylephrine, serotonin, and 9,11-dideoxy-11α,9α-epoxymethanoprostaglandin F2α (U46619) were significantly attenuated in the presence of perivascular adipose tissue, an effect not observed in the nicotine-exposed tissues. Endothelium-dependent relaxation responses to carbachol, kidney weight, the total number of nephrons, and glomerular size were comparable in nicotine and saline groups. We conclude that fetal and neonatal exposure to nicotine caused blood pressure elevation. Alterations in perivascular adipose tissue composition and modulatory function are some of the mechanisms associated with this blood pressure increase. abstract_id: PUBMED:10070098 Nicotine-modified postinfarction left ventricular remodeling.
Cigarette smoking has been noted to impair wound healing in tissues such as skin, bone, and gut. This study was designed to examine whether nicotine adversely affects postinfarction cardiac wound healing and remodeling in an experimental model of myocardial infarction. For this purpose, two groups of rats were studied. The control group received a simple bandage, and the nicotine group had a section (1.75 mg/day) of a nicotine patch attached to their backs. After a 7-day treatment period, an anterior wall infarction was induced. A bandage-free 7-day healing period followed, after which hearts were isolated for mechanical tests. Nicotine-treated rats developed significantly enlarged left ventricles with thin, infarcted walls and a rightward shift in the passive pressure-volume relationship. Pressure-strain analysis also indicated possible changes in the material properties of the wound for nicotine-treated rats. In conclusion, nicotine has significant adverse effects on postinfarction healing and left ventricular remodeling. These observations have important clinical implications because of the enhanced risk for development of heart failure. abstract_id: PUBMED:21182933 Effect of in utero and lactational nicotine exposure on the male reproductive tract in peripubertal and adult rats. The objective of this study was to determine the effect of in utero and lactational exposure to nicotine on the male reproductive tract. Dams were randomly assigned to receive saline or nicotine bitartrate (1 mg/kg-d s.c.) daily for two weeks prior to mating until weaning (postnatal day 21). Male offspring were sacrificed at 7 (peri-pubertal) and 26 (adult) weeks of age. Nicotine exposure resulted in retention of spermatids after stage VIII, tubular vacuolation, and degeneration of pachytene and round spermatids at stage VII in the testes, and lymphocyte infiltration, germ cell exfoliation, and hypospermia in epididymides, at 7 weeks of age. Nicotine exposure had no effect on testis or epididymal morphology, daily sperm production, epididymal sperm reserve, sperm viability at 26 weeks of age, or circulating testosterone levels at either age examined. We conclude that maternal nicotine exposure during pregnancy and lactation can induce transient structural changes in the testis and epididymis of male offspring. abstract_id: PUBMED:17592031 Fetal and neonatal nicotine exposure and postnatal glucose homeostasis: identifying critical windows of exposure. Fetal and lactational exposure to nicotine at concentrations comparable with those in women who smoke causes impaired glucose tolerance in male offspring in postnatal life. It remains unknown whether there are critical windows of susceptibility to nicotine exposure. Female nulliparous Wistar rats were given saline vehicle or nicotine bitartrate (1 mg/kg per day) prior to pregnancy, which was then: A) discontinued during pregnancy and lactation; B) continued until parturition; C) continued until weaning; and D) discontinued during pregnancy and restarted from lactation until weaning. At 26 weeks of age, offspring in each group were challenged with an oral glucose load. Beta-cell mass, apoptosis, and proliferation were measured at birth, and at 4 and 26 weeks of age. The animals in group C (exposed to nicotine throughout pregnancy and lactation) had reduced beta-cell mass from birth through 26 weeks of age and impaired glucose homeostasis at 26 weeks of age.
Beta-cell mass was also reduced at birth and at 4 weeks of age in animals exposed to nicotine during pregnancy alone (group B). However, enhanced proliferation following weaning led to recovery of this defect to 98% of control levels by week 26. The response to the glucose load in groups A, B, and D did not differ from controls. Continued exposure to nicotine from conception through lactation results in permanent beta-cell loss and subsequent impaired glucose tolerance. This model of type 2 diabetes requires that nicotine exposure occurs both in utero and during lactation. abstract_id: PUBMED:17322582 Fetal and neonatal exposure to nicotine disrupts ovarian function and fertility in adult female rats. Women born to mothers who smoked during pregnancy have been shown to have impaired fertility, although the mechanisms underlying this association are unknown. Nicotine administration in adult animals has adverse effects on the ovary and uterus; however, the effects of fetal exposure to nicotine on postnatal ovarian function have not been determined. The goal of this study was to assess the effect of fetal and neonatal exposure to nicotine on ovarian function and fertility of the offspring. Nulliparous female Wistar rats were given 1 mg·kg⁻¹·d⁻¹ nicotine bitartrate subcutaneously for 14 d prior to mating, during pregnancy and throughout lactation until weaning. Measures of fertility, breeding success, and serum levels of ovarian steroid hormones in offspring were assessed at 4 and 6 mo of age. Fetal and neonatal exposure to nicotine significantly increased the time to pregnancy as the animals aged. Similarly, evidence of altered ovarian steroidogenesis, including increased serum progesterone concentrations and a decreased estrogen:progesterone ratio, was observed in 6-mo-old animals. We conclude that fetal and neonatal exposure to nicotine results in delayed ovarian dysfunction in adult female offspring. abstract_id: PUBMED:28637659 Enduring effects of perinatal nicotine exposure on murine sleep in adulthood. The long-term consequences of early-life nicotine exposure are poorly defined. Approximately 8-10% of women report smoking during pregnancy, and this may promote aberrant development in the offspring. To this end, we investigated potential enduring effects of perinatal nicotine exposure on murine sleep and affective behaviors in adulthood (~13-15 wk of age) in C57Bl6j mice. Mothers received a water bottle containing 200 µg/ml nicotine bitartrate dihydrate in 2% wt/vol saccharin or pH-matched 2% saccharin with 0.2% (vol/vol) tartaric acid throughout pregnancy and before weaning. Upon reaching adulthood, offspring were tested in the open field and elevated plus maze, as well as the forced swim and sucrose anhedonia tests. Nicotine-exposed male (but not female) mice had reduced mobility in the open field, but no differences were observed in anxiety-like or depressive-like responses. Upon observing this male-specific phenotype, we further assessed sleep-wake states via wireless EEG/EMG telemetry. Following baseline recording, we assessed whether mice exposed to nicotine altered their homeostatic response to 5 h of total sleep deprivation and whether nicotine influenced responses to a powerful somnogen [i.e., lipopolysaccharide (LPS)]. Males exposed to perinatal nicotine decreased the percent time spent awake and increased time in non-rapid eye movement (NREM) sleep, without changes to REM sleep.
Nicotine-exposed males also displayed exaggerated responses (increased time asleep and NREM spectral power) to sleep deprivation. Nicotine-exposed animals additionally had blunted EEG slow-wave responses to LPS administration. Together, our data suggest that perinatal nicotine exposure has long-lasting effects on normal sleep and homeostatic sleep processes into adulthood. abstract_id: PUBMED:18852877 Fetal and neonatal nicotine exposure in Wistar rats causes progressive pancreatic mitochondrial damage and beta cell dysfunction. Nicotine replacement therapy (NRT) is currently recommended as a safe smoking cessation aid for pregnant women. However, fetal and neonatal nicotine exposure in rats causes mitochondrial-mediated beta cell apoptosis at weaning and adult-onset dysglycemia, which we hypothesize is related to progressive mitochondrial dysfunction in the pancreas. Therefore, in this study we examined the effect of fetal and neonatal exposure to nicotine on pancreatic mitochondrial structure and function during postnatal development. Female Wistar rats were given saline (vehicle control) or nicotine bitartrate (1 mg/kg/d) via subcutaneous injection for 2 weeks prior to mating until weaning. At 3-4, 15 and 26 weeks of age, oral glucose tolerance tests were performed, and pancreas tissue was collected for electron microscopy, enzyme activity assays and islet isolation. Following nicotine exposure, mitochondrial structural abnormalities were observed beginning at 3 weeks and worsened with advancing age. Importantly, the appearance of these structural defects in nicotine-exposed animals preceded the onset of glucose intolerance. Nicotine exposure also resulted in significantly reduced pancreatic respiratory chain enzyme activity, degranulation of beta cells, elevated islet oxidative stress and impaired glucose-stimulated insulin secretion compared to saline controls at 26 weeks of age. Taken together, these data suggest that maternal nicotine use during pregnancy results in postnatal mitochondrial dysfunction that may explain, in part, the dysglycemia observed in the offspring from this animal model. These results clearly indicate that further investigation into the safety of NRT use during pregnancy is warranted. Answer: The effects of nicotine on bone healing and osseointegration appear to be complex and dose-dependent. According to the provided abstracts, nicotine has been shown to have both positive and negative effects on bone and tissue healing processes. On one hand, an in vitro cell culture experiment demonstrated that nicotine could enhance osteoblast activity in a dose-dependent manner. Specifically, higher doses of nicotine (100 μg/mL and 250 μg/mL) significantly increased alkaline phosphatase (ALP) activity and mineralization in rabbit bone marrow-derived osteoblast-like cells, suggesting that nicotine may stimulate osteoblastic activity and potentially improve spinal fusion outcomes (PUBMED:24353962). On the other hand, other studies have reported negative effects of nicotine on healing. Nicotine exposure was found to delay cutaneous wound healing in stressed mice, with a decrease in inflammatory cell infiltration and transforming growth factor-β (TGF-β) expression, and an increase in angiogenesis and tumor necrosis factor-α (TNF-α) expression (PUBMED:23802681).
Additionally, smoking, which delivers nicotine among other compounds, has been associated with attenuated epidermal healing and may enhance extracellular matrix degradation, with some of these effects persisting even after smoking cessation (PUBMED:19660042). In the context of bone healing specifically, the abstracts do not provide direct evidence of nicotine impairing osseointegration. However, given the known adverse effects of smoking on wound healing and the complex influence of nicotine on cellular processes, it is plausible that chronic nicotine exposure could have detrimental effects on bone healing and osseointegration, particularly at certain doses or in the presence of other stressors. More research would be needed to clarify the direct impact of nicotine on bone healing and osseointegration in vivo.
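The dose-response conclusion in PUBMED:24353962 rests on a two-way ANOVA with dose and time as factors, followed by post hoc comparisons. Below is a minimal sketch of that analysis on simulated ALP readings; the dose levels come from the abstract, while the time points, replicate counts, and effect sizes are assumed:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(2)
    doses_ug_ml = [0, 0.02, 0.04, 0.08, 10, 100, 250]  # 20-80 ng/mL written in ug/mL
    days = [7, 14, 21]                                 # culture time points (assumed)
    rows = []
    for dose in doses_ug_ml:
        for day in days:
            for _ in range(4):                         # 4 replicate wells (assumed)
                alp = 1.0 + 0.05 * np.log1p(dose) + 0.02 * day + rng.normal(0, 0.15)
                rows.append({"dose": dose, "day": day, "alp": alp})
    df = pd.DataFrame(rows)

    fit = smf.ols("alp ~ C(dose) * C(day)", data=df).fit()
    print(anova_lm(fit, typ=2))  # main effects of dose and day, plus their interaction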
Instruction: Redundant publications in scientific ophthalmologic journals: the tip of the iceberg? Abstracts: abstract_id: PUBMED:15121360 Redundant publications in scientific ophthalmologic journals: the tip of the iceberg? Objective: The number of scientific publications is often used to measure scientific achievement. This practice can motivate unethical conduct, such as redundant or duplicate publications, defined as publication of the same scientific contents in more than 1 journal. The aim of this study was to estimate the amount of redundant publications in ophthalmologic journals. Design: Retrospective analysis of published literature. Methods: We developed an electronic search engine for redundancies to estimate the amount of duplicate publications in scientific journals. When redundancies reached a given degree (matching score), the articles were screened manually based on authors, titles, and abstracts. We applied this method to the 22,433 articles that were published between 1997 and 2000 in 70 ophthalmologic journals indexed by MEDLINE. Main Outcome Measures: The number of duplicate publications with a matching score of 0.6 or more, the number of involved journals, and the number of authors. Results: Redundancies reached a matching score of 0.6 or more in 13,967 pairs of articles. Of these, a sample of 2,210 was reviewed manually. We found 60 redundant articles and estimated that 1.39% of the publications were redundant. Thirty-two journals and an estimated 1,092 authors were involved. In 5% of cases, the scientific conclusions were modified. Conclusions: Because of the restrictive selection process and the impracticability of detecting all redundant publications, and because the estimated amount of duplicates increases with lower matching scores, we regard our estimate to be the tip of the iceberg. Duplicate publications have several negative impacts, but neither peer reviewers nor editors can protect their journal from them completely. Several deterrents for duplicate publications are possible, but as long as publications remain the central requirement for academic advancement, a solution seems unlikely. Nevertheless, it is the responsibility of all those who care about objective research and evidence-based medicine to address this problem, not only in ophthalmology. abstract_id: PUBMED:11391360 Redundant surgical publications: tip of the iceberg? Background: A redundant publication is one which duplicates previous, simultaneous, or future publications by the same author or group or, alternatively, could have been combined with the latter into one paper. As there is no information about the extent of this problem in the surgical literature, we set out to assess the incidence, spectrum, and salient characteristics of redundant publications in 3 leading surgical journals. Methods: Original articles (excluding reviews, editorials, abstracts, and letters) published during 1998 in the journals Surgery, The British Journal of Surgery, and Archives of Surgery were searched by using the online search engine PubMed. Each original article was scrutinized to identify redundancy by combining the names of the first, second, and last authors with a few key words from the title. Papers were defined as "suspected" redundant publications if they were found to address the same topic as the "index" article and shared some or most of the elements of methodology, results, or conclusions. The full versions of all suspected papers were retrieved and compared with the index articles.
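The matching-score engine of PUBMED:15121360 is not specified in detail in the abstract, but the general pattern can be sketched as follows: score title/abstract pairs by textual similarity and send pairs above a threshold (0.6 in the paper) to manual screening. TF-IDF cosine similarity below is a stand-in for the unspecified scoring function, and the example records are invented:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    records = [  # invented title+abstract snippets
        "Outcomes of procedure X in 120 patients: a prospective multicenter study",
        "A prospective multicenter evaluation of procedure X outcomes in 120 patients",
        "Genetic determinants of condition Y in a murine knockout model",
    ]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(records)
    scores = cosine_similarity(tfidf)

    THRESHOLD = 0.6  # pairs at or above this score go to manual screening
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if scores[i, j] >= THRESHOLD:
                print(f"flag pair ({i}, {j}) for manual review, score={scores[i, j]:.2f}")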
A grading system was developed to define several types of redundant publications: A. "dual"; B. "potentially dual"; C. "salami-slicing." Results: A total of 660 articles were screened. There were 92 index articles (14%) leading to 147 suspected papers found in other journals, representing some potential form of a redundant publication. The vast majority of suspected papers were published within approximately a year of the index paper and were not cited by the latter. Most (69%) of the suspected papers were also published in surgical journals. Only 12 (8.1%) appeared in, or originated from, a "local-foreign" journal. Twenty (13.6%) of the suspected papers met the criteria for dual publications, 50 (34%) for potentially dual publications, and 77 (52.4%) were considered products of salami-slicing. Conclusions: Almost 1 in every 6 original articles published in leading surgical journals represents some form of redundancy. Current online search technology provides an effective tool for identifying and tracing such publications, but it is not used routinely as part of the peer review process. Redundancies occur in several well-defined patterns; the phenomenon is widespread, and it cuts across the entire spectrum of surgeons in the United States and abroad. Redundant publications must be recognized not as a mere nuisance but as a real threat to the quality and intellectual impact of surgical publishing. abstract_id: PUBMED:16949937 Repetitive, duplicate, and redundant publications: a review for authors and readers. Repetitive, duplicate, and redundant publications are an important concern in the scientific literature. Their occurrence affects science and carries with it sanctions of consequence. This editorial provides a brief review of the definitions, classifications, impact, sanctions, and prevention strategies regarding repetitive, duplicate, and redundant publications. abstract_id: PUBMED:28038824 Ethics of reviewing scientific publications. Introduction: The approval or rejection of scientific publications can have important consequences for scientific knowledge, so considerable responsibility lies with those who have to assess or review them. Today it seems that the peer review process, far from being considered an outdated system to be abandoned, is experiencing a new upturn. Aim And Methods: This article proposes criteria for the conduct of reviewers and of those who select them. While commenting on new emerging models, it provides practical recommendations for improving the peer-review system, such as strengthening the role of guidelines and training and supporting reviewers. Conclusions: The process of peer review is changing; it is becoming more open and collaborative, but those same ethical principles which guided it from its very origin should remain untouched and be firmly consolidated. The paper highlights how the ethics of reviewing scientific publications is needed now more than ever, in particular with regard to competence, conflict of interest, willingness to discuss decisions, complete transparency and integrity. abstract_id: PUBMED:17260663 Redundant publications in the orthopedic literature. Rates of redundant publications in the general surgery literature are approximately 14%. This article identifies the rate of redundant publications in the orthopedic literature. All original articles published during the year 2000 in The Journal of Bone and Joint Surgery (American Volume), Journal of Orthopaedic Trauma, Journal of Spinal Disorders, and Spine were searched using PubMed.
The redundancy rate was 4 (3.15%) of 127 for The Journal of Bone and Joint Surgery (American Volume), zero (0%) of 70 for Journal of Orthopaedic Trauma, 2 (2.90%) of 69 for Journal of Spinal Disorders, and 11 (3.12%) of 353 for Spine. abstract_id: PUBMED:20388454 Fraud and misconduct in scientific publications. Introduction: Editors of scientific publications have traditionally been unaware of fraud and misconduct, being more concerned with matters associated with impact or with editorial review. In the last few years, however, they have been finding and reporting that misconduct does occur in the scientific field and, furthermore, that it is not uncommon. Method: The most common forms of author misconduct are reviewed. These are seen as infringements of the conditions that a scientific work must meet, and include fraud such as plagiarism and repeated or redundant publications. Their frequency and the perspectives from a publishing point of view are discussed. Conclusions: Many editors are demanding clear regulations to prevent misconduct. Editorial review and the provision of evaluation tools for reviewers are preventive, but not infallible, measures. Perhaps most important is that editorial teams be aware that such misconduct exists. abstract_id: PUBMED:36060719 Trust but verify: An analysis of redundant publications from two major psychiatry journals in India. Background: No analysis of redundant or duplicate publications, deemed unethical and unscientific, has been undertaken in psychiatric literature. Aim: To analyze the proportion and patterns of redundant publications associated with index articles published in two major Indian psychiatry journals. Methods: Index articles were original papers published in the Indian Journal of Psychiatry and the Indian Journal of Psychological Medicine between 2015 and 2017. Using a systematic search strategy that combined author names and article keywords, we combed the literature to identify and characterize redundant publications related to these index articles. Redundant publications were classified into one of the following categories using a priori definitions: dual, suspected dual, salami slicing, meat extender, and extended sample publication. Results: From 324 index articles screened, a total of 27 articles (8.4%) were identified to have 32 associated redundant publications of the following types: dual (n = 3), suspected dual (n = 2), salami slicing (n = 22), meat extender (n = 3), and extended sample publication (n = 2). A majority of the redundant articles (n = 23, 71.9%) failed to clearly cross-reference the prior publication(s). We also identified nine non-redundant but related publications, five of which lacked proper cross-referencing. Conclusion: Redundant publications are a common practice in the psychiatry journals screened. Salami slicing is the most common form of redundancy, with no proper cross-referencing in most cases. Concerted efforts are needed to detect and deal with this concerning practice that undermines both science and ethics. abstract_id: PUBMED:33906140 Text mining in scientific publications with Argentine authors. In the present work we use text mining as a processing tool for a large scientific database, with the aim of obtaining new information about all the publications signed by Argentine authors and indexed until 2019 in the area of life sciences.
More than 75,000 articles were analysed, published in around 5,000 journals and other outlets and signed by about 186,000 authors with a workplace in Argentina or in collaborations with Argentine laboratories. Using automated tools that were developed ad hoc, the text of around 70,800 abstracts was analysed, seeking, through unsupervised detection, the main topics addressed by the authors and their relationship with health problems in Argentina and their treatment. Results are also presented regarding the number of publications per year, the journals that have published them, and their authors and collaborations. These results, together with the predictions that were obtained, could become a useful tool to optimize the management of resources dedicated to basic and clinical research. abstract_id: PUBMED:29519161 Repeating probability of authors with retracted scientific publications. Both the scientific community and the general public have expressed concern over scientific misconduct. The number of retracted articles has increased dramatically over the past 20 years and now comprises about 0.02% of the 2 million articles published each year. Retraction of publications available in large public databases can be analyzed as an objective measure for scientific misconduct and errors. In this project, we analyzed retractions of scientific publications using the Web of Science (WoS) and PubMed databases. We found that a power law is applicable to the distributions of retracting authors and retracted publications, with exponents of about -0.6 and -3.0, respectively. Application of a power-law model for retracted publications implies that retraction is not a random event. Analysis of the retraction distributions suggests that a small fraction (1-2%) of retracting authors with ≥5 retractions are responsible for around 10% of retractions. The probabilities of repeat retraction are calculated using a statistical model: 3-5% likelihood of repeat retraction for authors with a single retraction at five years after the latest retraction, and 26-37% for authors with five retractions at five years after the latest retraction. By focusing on those with repeated retractions, this analysis could contribute to the identification of measures to reduce such repetition of retractions. abstract_id: PUBMED:28463926 Retracted Publications in Orthopaedics: Prevalence, Characteristics, and Trends. Background: Retracted publications are a crucial, yet overlooked, issue in the scientific community. The purpose of our study was to assess the prevalence, characteristics, and trends of retracted publications in the field of orthopaedics. Methods: Five databases were utilized to identify retracted publications in orthopaedics. The cited articles were assessed for various characteristics, including reason for retraction, based on the Committee on Publication Ethics (COPE) guidelines, and trends over time. Results: From 1984 to June 4, 2016, 59 of 229,502 orthopaedic publications were retracted (3 per 10,000 articles). There was a spike in the prevalence (22 of 59) of retracted articles in 2015. When compared with the total number of retracted publications identified through PubMed, the field of orthopaedics represented 1.4% of all retracted publications. The original version of 47 of these 59 retracted publications was still available on the respective journal's web site; 14 (30%) of these were not noted as having been retracted. The mean time from electronic publication to retraction was 19.4 ± 23.3 months.
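The power-law claim in PUBMED:29519161 (an exponent of about -0.6 for the distribution of retracting authors) can be probed with a simple log-log fit. The sketch below uses synthetic Zipf-distributed counts, since the underlying WoS/PubMed data are not reproduced here, and an ordinary least-squares line rather than the paper's exact estimator:

    import numpy as np

    rng = np.random.default_rng(3)
    retractions_per_author = rng.zipf(2.0, size=20000)  # synthetic stand-in data
    values, freq = np.unique(retractions_per_author, return_counts=True)

    mask = values <= 50                  # trim the sparse extreme tail before fitting
    log_x = np.log(values[mask])
    log_y = np.log(freq[mask])
    slope, intercept = np.polyfit(log_x, log_y, 1)
    # for the Zipf parameter 2.0 used above, the slope should come out near -2
    print(f"estimated power-law exponent (log-log slope): {slope:.2f}")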
The mean number of citations of a retracted publication after the date of retraction was 9.3 ± 19.3. Reasons for retraction included plagiarism (32%), misconduct (27%), redundant publication (22%), miscalculation or experimental error (8%), and unethical research (0%); the reason for retraction was not stated for 10% of the publications. There was no correlation between a journal's impact factor and the mean number of months to retraction (p = 0.564). Conclusions: While uncommon, the retraction of publications within the field of orthopaedics may be increasing. The most often cited reasons for retraction were plagiarism, misconduct, and redundant publication. Retracted articles continue to be cited in the literature after retraction. Clinical Relevance: Greater awareness of the COPE guidelines within the orthopaedic community and more efficient means to prevent the citation of retracted articles are needed. Answer: The study conducted on redundant publications in scientific ophthalmologic journals suggests that the issue may indeed be just the tip of the iceberg. The retrospective analysis, which utilized an electronic search engine to estimate the amount of duplicate publications, applied this method to 22,433 articles published between 1997 and 2000 in 70 ophthalmologic journals indexed by MEDLINE. The results showed that redundancies with a matching score of 0.6 or more were found in 13,967 pairs of articles. Upon manual review of a sample of 2,210 articles, 60 redundant articles were identified, leading to an estimate that 1.39% of the publications were redundant. This involved 32 journals and an estimated 1,092 authors. In 5% of the cases, the scientific conclusions were modified (PUBMED:15121360). The study acknowledges that due to the restrictive selection process and the impracticability of detecting all redundant publications, the actual number of duplicates could be higher, especially as the estimated amount of duplicates increases with lower matching scores. The authors of the study conclude that duplicate publications have several negative impacts and that neither peer reviewers nor editors can completely protect their journals from them. They suggest that several deterrents for duplicate publications are possible, but as long as publications remain a central requirement for academic advancement, a complete solution seems unlikely. The responsibility to address this problem lies with everyone who cares about objective research and evidence-based medicine, not only in ophthalmology (PUBMED:15121360).
Instruction: Is HbA(1c) affected by glycemic instability? Abstracts: abstract_id: PUBMED:14514571 Is HbA(1c) affected by glycemic instability? Objective: HbA(1c) is a standard clinical assessment of glycemia and the basis of most data relating glycemic control to complications. It remains unclear, however, whether HbA(1c) is affected by glycemic variation and mean glycemia. Research Design And Methods: To test this question, we analyzed the statistical relationship between HbA(1c) levels and glycemic variability as measured by self-monitoring of blood glucose (SMBG). The records of 256 subjects were studied. SMBG data for the preceding 3 months were downloaded, and HbA(1c) was measured by ion-exchange high-performance liquid chromatography. Simple- and random-effects linear regression models were used to assess the independent contributions of mean blood glucose (BG) and SD of BG to HbA(1c), after adjusting for the mean BG. Results: Mean ± SD for HbA(1c) was 7.66 ± 1.11% and for BG was 8.5 ± 1.9 mmol/l (153.3 ± 34.9 mg/dl); SD of BG for individual subjects was 3.5 mmol/l (63.3 mg/dl), varying from 0.4 mmol/l (8.1 mg/dl; very stable glycemia) to 8.4 mmol/l (152.5 mg/dl; very unstable glycemia). A close correlation between mean BG and HbA(1c) was demonstrated (r = 0.62). Also, within-subject SD of BG correlated with HbA(1c) (r = 0.375), indicating that people with poorer glycemic control had higher BG variance. After adjusting for mean BG in a linear regression model, however, the effect of the within-subject SD of BG on the HbA(1c) was insignificant. Several further analyses confirmed the strength of the observation. Conclusions: HbA(1c) reflects mean glycemia and is not meaningfully affected by glycemic instability after adjusting for mean BG. abstract_id: PUBMED:22705451 HbA(1c)--an analyte of increasing importance. Since the incorporation in 1976 of HbA(1c) into a monitoring program of individuals with diabetes, this test has become the gold standard for assessment of glycemic control. Analytical methods have steadily improved in the past two decades, largely through the efforts of the National Glycohemoglobin Standardization Program (NGSP). The new definition of HbA(1c) and the introduction of an analytically pure calibrator have increased the possibility for greater improvements in analytical performance. Controversies exist in the reporting of HbA(1c). The use of HbA(1c) has expanded beyond its use solely as a measure of glycemic control into a test for screening and diagnosing diabetes. With improvements in analytical performance, the effects of demographic factors such as age and ethnicity and clinical factors such as iron deficiency have been observed. In this review, the history, formation, analytical methods and parameters that affect HbA(1c) analysis are discussed. abstract_id: PUBMED:24985893 Comparison of IFCC-calibrated HbA(1c) from laboratory and point of care testing systems. Objective: WHO, IDF and ADA recommend HbA(1c) ≥6.5% (48 mmol/mol) for diagnosis of diabetes, with pre-diabetes 6.0% (42 mmol/mol) [WHO] or 5.7% (39 mmol/mol) [ADA] to 6.4% (47 mmol/mol). We have compared HbA(1c) from several methods for research relating glycaemic markers.
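The core analysis of PUBMED:14514571 regresses HbA(1c) on mean blood glucose and on the within-subject SD of glucose. A minimal sketch on simulated data is shown below; the data are generated so that only mean BG truly drives HbA(1c), mirroring the paper's conclusion, and the coefficients and noise levels are illustrative rather than the study's:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 256                                      # subject count from the abstract
    mean_bg = rng.normal(8.5, 1.9, n)            # mmol/L, as reported
    # SD is made to correlate with mean BG, as observed in the study (r = 0.375 there)
    sd_bg = np.clip(0.4 * mean_bg + rng.normal(0, 1.0, n), 0.4, 8.4)
    hba1c = 1.6 + 0.71 * mean_bg + rng.normal(0, 0.5, n)  # no true SD effect built in

    df = pd.DataFrame({"hba1c": hba1c, "mean_bg": mean_bg, "sd_bg": sd_bg})
    unadjusted = smf.ols("hba1c ~ sd_bg", data=df).fit()
    adjusted = smf.ols("hba1c ~ mean_bg + sd_bg", data=df).fit()
    print(unadjusted.params["sd_bg"])    # an apparent SD effect...
    print(adjusted.summary().tables[1])  # ...that vanishes after adjusting for mean BG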
Research Design And Methods: HbA1c was measured in EDTA blood from 128 patients with diabetes on IE HPLC analysers (Bio-Rad Variant II NU, Menarini HA8160 and Tosoh G8), point-of-care testing (POCT) systems (A1cNow+ disposable cartridges and the DCA 2000®+ analyser), affinity chromatography (Primus Ultra2) and the IFCC secondary reference method (Menarini HA8160 calibrated using the IFCC SRM protocol). Results: Median (IQ range) on IFCC SRM was 7.5% (6.8-8.4) (58 (51-68) mmol/mol) HbA(1c), with minimum 5.3% (34 mmol/mol) and maximum 11.9% (107 mmol/mol). There were positive offsets between IFCC SRM and Bio-Rad Variant II NU, mean difference (1 SD) +0.33% (0.17) (+3.6 (1.9) mmol/mol), r²=0.984, p<0.001, and Tosoh G8, +0.22% (0.20) (+2.4 (2.2) mmol/mol), r²=0.976, p<0.001, with a very small negative difference, -0.04% (0.11) (-0.4 (1.2) mmol/mol), r²=0.992, p<0.001, for Menarini HA8160. POCT methods were less precise, with negative offsets for the DCA 2000®+ analyser, -0.13% (0.28) (-1.4 (3.1) mmol/mol), r²=0.955, p<0.001, and A1cNow+ cartridges, -0.70% (0.67) (-7.7 (7.3) mmol/mol), r²=0.699, p<0.001 (n=113). Positive biases for Tosoh and Bio-Rad (compared with IFCC SRM) have been eliminated by subsequent revision of calibration. Conclusions: Small differences observed between IFCC-calibrated and NGSP-certified methods across a wide HbA(1c) range were confirmed by quality control and external quality assurance. As these offsets affect estimates of diabetes prevalence, the analyser (and calibrator) employed should be considered when evaluating diagnostic data. abstract_id: PUBMED:20424541 Difficulties in interpreting HbA(1c) results. Glycated hemoglobin (HbA(1c)) is a parameter broadly employed in the assessment of glycemic control in diabetes. The 2010 "Standards of medical care in diabetes", published by the American Diabetes Association (ADA), recommended performing the HbA(1c) test at least every 6 months in patients in whom disease is clinically stable, while subjects after modifications of therapy or in whom glycemic goals have not been met should be tested every 3 months. Moreover, the ADA suggested the HbA(1c) assay be implemented in the diagnosis of diabetes and in the detection of an increased risk of developing this disease. Among the various approaches employed to measure the concentration of HbA(1c), high-pressure liquid chromatography is considered to be a reference method. HbA(1c) tests might not be clinically reliable in some circumstances. In cases when HbA(1c) levels do not correlate with glycemia and clinical symptoms, the results should be interpreted with caution, several conditions known to influence the measurement should be taken into account, and use of another diagnostic method, or even testing another marker of glycemic control, e.g., fructosamine or 1,5-anhydroglucitol, should be considered. abstract_id: PUBMED:23428267 Dapsone-induced decrease in the HbA(1c) level may cause artefacts in diabetic control. Glycated haemoglobin (HbA(1c)) level is used to monitor glycaemic control in diabetic patients, and has recently been included as a diagnostic factor for type 2 diabetes mellitus (DM). A 50-year-old male with well-controlled type 1 DM, Addison's disease and coeliac disease was diagnosed with dermatitis herpetiformis and treated with dapsone. Subsequently, the patient presented with a pronounced drop in HbA(1c) levels without changes in blood glucose levels. S-fructosamine values were in agreement with the previous stable HbA(1c) values.
Blood analyses showed inappropriately low HbA(1c) values due to mild haemolysis and shortening of erythrocyte lifetime without anaemia. abstract_id: PUBMED:19939770 Propositions for the standardized expression of HbA(1c) results The 2007 international consensus about the standardization of HbA(1c) determination and expression of results is progressively being implemented in most countries. In France, a common working group of the Société française de biologie clinique (SFBC) and the Société francophone de diabétologie (SFD) has expressed the following recommendations. HbA(1c) results are expressed in percentage of total hemoglobin and in mmol HbA(1c)/mol Hb, but are not converted into estimated average glucose. A table indicating the correspondence between HbA(1c) and estimated average glucose may be given with the results, subject to precautions of interpretation at the individual level. abstract_id: PUBMED:23318566 Measurement of HbA(1c) in patients with chronic renal failure. Background: Carbamylated hemoglobin (carbHb) is reported to interfere with measurement and interpretation of HbA(1c) in diabetic patients with chronic renal failure (CRF). There is also concern that HbA1c may give low results in these patients due to shortened erythrocyte survival. Methods: We evaluated the effect of carbHb on HbA(1c) measurements and compared HbA(1c) with glycated albumin (GA) in patients with and without renal disease to test if CRF causes clinically significant bias in HbA(1c) results by using 11 assay methods. Subjects included those with and without renal failure and diabetes. Each subject's estimated glomerular filtration rate (eGFR) was used to determine the presence and degree of the renal disease. A multiple regression model was used to determine if the relationship between HbA(1c) results obtained from each test method and the comparative method was significantly (p<0.05) affected by eGFR. These methods were further evaluated for clinical significance by using the difference between the eGFR quartiles of >7% at 6 or 9% HbA(1c). The relationship between HbA(1c) and glycated albumin (GA) in patients with and without renal failure was also compared. Results: Some methods showed small but statistically significant effects of eGFR; none of these differences were clinically significant. If GA is assumed to better reflect glycemic control, then HbA(1c) was approximately 1.5% HbA(1c) lower in patients with renal failure. Conclusions: Although most methods can measure HbA(1c) accurately in patients with renal failure, healthcare providers must interpret these test results cautiously in these patients due to the propensity for shortened erythrocyte survival in renal failure. abstract_id: PUBMED:23105618 Lack of precision in HbA(1c) values on Variant II in cases of hemoglobin Q India. This study was carried out to check the precision of HbA(1c) values on Bio-Rad Variant II in cases of a rare hemoglobin variant Q India. The study was carried out over a three month period on samples collected for HbA(1c) estimation. Seven out of eleven patients showed variable results of HbA(1c) with a very high and unacceptable intraday mean coefficient of variation (CV) of 9.93%. We conclude that the results of HbA(1c) on Variant II cannot be reported without adversely affecting HbA(1c) as a marker of long-term glycemic control in patients who have hemoglobin Q India.
The HbA(1c) value of these patients needs to be assessed by a different instrument/method, or the glycemic control should be monitored by an alternate test such as serum fructosamine. abstract_id: PUBMED:21427500 Impact of glycemic control on the clinical outcome in diabetic patients with percutaneous coronary intervention--from the FU-registry. Background: It is not yet clear whether glycemic control affects the clinical outcome of percutaneous coronary intervention (PCI) in diabetic patients. Methods And Results: This study compared the effects of glycemic control on the clinical outcome in 2 groups of patients with diabetes mellitus (DM) who underwent PCI: a poor-glycemic-control group, who showed greater than 6.9% HbA(1c) at the time of PCI (Pre-HbA(1c)) ('≥6.9 group', n=334 patients) and a good-glycemic-control group, who showed less than 6.9% at Pre-HbA(1c) ('<6.9 group', n=212 patients). The patients in the ≥6.9 group were further divided into 2 groups for comparison: a 'DM control group' and a 'Poor control group'. At follow-up (300 days), the incidence of major adverse cardiac event (MACE) was significantly (P<0.05) lower in the <6.9 group (18.4% vs. 26.2%). However, there was no difference in MACE between the DM control group and the Poor control group. In a multivariate analysis, there was no relationship between the incidence of MACE and Pre-HbA(1c), Pre-HbA(1c)≥6.9% or the HbA(1c) difference (Pre-HbA(1c)-HbA(1c) at follow-up). Conclusions: Clinical outcomes in the <6.9 group were superior to those in the ≥6.9 group as pre-PCI glycemic control affected the baseline characteristics. The results suggested that glycemic control started at PCI was not associated with an improvement in the clinical outcome at follow-up.
According to a study analyzing the statistical relationship between HbA1c levels and glycemic variability measured by self-monitoring of blood glucose (SMBG) in 256 subjects, a close correlation between mean blood glucose (BG) and HbA1c was demonstrated (r = 0.62). Within-subject standard deviation (SD) of BG also correlated with HbA1c (r = 0.375), indicating that people with poorer glycemic control had higher BG variance. However, after adjusting for mean BG in a linear regression model, the effect of the within-subject SD of BG on HbA1c was found to be insignificant. This suggests that HbA1c reflects mean glycemia and is not meaningfully affected by glycemic instability after adjusting for mean BG (PUBMED:14514571). In summary, while individuals with higher glycemic variability may have higher HbA1c levels, the HbA1c measurement itself is primarily a reflection of mean glycemia over a period of time and is not significantly affected by glycemic instability when mean blood glucose is taken into account.
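The adjustment step described in PUBMED:14514571 is easy to reproduce in outline. The sketch below, on synthetic data (all numbers are assumptions loosely matched to the reported cohort, not the study's records), regresses HbA1c on mean blood glucose and within-subject SD of blood glucose with statsmodels; under a model in which HbA1c is driven by mean BG alone, the SD coefficient comes out statistically indistinguishable from zero, which is the pattern the paper reports.

```python
# Illustrative sketch (not the study's code): does within-subject SD of blood
# glucose explain HbA1c beyond mean glucose? Modeled after PUBMED:14514571.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 256                                    # subjects, as in the paper
mean_bg = rng.normal(8.5, 1.9, n)          # mmol/l, matches reported cohort mean
sd_bg = rng.uniform(0.4, 8.4, n)           # per-subject variability, reported range
# Assumption: HbA1c is driven by mean BG only (the paper's conclusion);
# 2.6 + 0.6 * 8.5 ~ 7.7, close to the reported mean HbA1c of 7.66%.
hba1c = 2.6 + 0.6 * mean_bg + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([mean_bg, sd_bg]))
fit = sm.OLS(hba1c, X).fit()
print(fit.summary(xname=["const", "mean_BG", "SD_BG"]))
# With this data-generating model, the SD_BG coefficient is statistically
# indistinguishable from zero once mean_BG is in the model.
```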
Instruction: Sonographic evaluation of hydronephrosis in the pediatric population: is well-tempered sonography necessary? Abstracts: abstract_id: PUBMED:25792581 Sonographic evaluation of hydronephrosis in the pediatric population: is well-tempered sonography necessary? Objectives: Standardized protocols exist for diuretic renography. There are no specific guidelines regarding hydration before renal sonography. This study assessed the importance of the hydration status by sonographic measurements of the anteroposterior diameter and its effect on Society for Fetal Urology (SFU) hydronephrosis grading. Methods: Children aged 6 weeks to 16 years (mean age, 22 months) with unilateral SFU grade 3 or 4 hydronephrosis requiring diuretic renal scintigraphy were recruited to undergo prehydration and posthydration renal sonography. Hydrated diuretic renal scintigraphy, or "well-tempered" renography, was then performed. Renal sonograms were reviewed by a blinded pediatric radiologist and pediatric urologist. Two-sided statistical tests assessed whether SFU grades and the anteroposterior diameter changed significantly after hydration. Results: Among 67 kidneys, the pediatric urologist (L.P.M.) and pediatric radiologist (V.J.R.) reported no SFU grade change in 45 (67%) and 52 (78%) kidneys after hydration. In kidneys that changed, the posthydration grade was more likely to be higher. This difference was statistically significant (14 of 22 and 13 of 15 differences were higher grades after hydration for L.P.M. and V.J.R., respectively; P= .06; P= .007). Most kidneys that changed with hydration differed by only 1 SFU grade. Differences greater than 1 grade were seen in 5 control kidneys, which increased from SFU grade 0 to 2. The mean anteroposterior diameter increased significantly between prehydration and posthydration sonography for both hydronephrotic kidneys (1.46 versus 1.72 cm; P< .001) and control kidneys (0.22 versus 0.39 cm; P= .019), but did not correlate with increased SFU grades. Conclusions: Hydration does have a substantial effect on the anteroposterior diameter, but it does not correlate with a substantial effect on the SFU grade; therefore, well-tempered sonography seems unnecessary. abstract_id: PUBMED:26543712 Correlation of the clinical parameters with sonographic findings of hemorrhagic cystitis in pediatric hematooncology patients. To find a relationship between clinical and sonographic appearance of hemorrhagic cystitis (HC) in pediatric hematooncology patients. Clinical and sonographic findings of 31 children (M:F = 18:13; mean age, 12.7 years) with HC in pediatric hematooncology patients were reviewed. For each patient, the onset of HC after transplantation, use of bladder-toxic agent, presence of BK viruria, and duration of disease were reviewed. Sonographic findings including bladder wall thickness (BWT), the type of bladder wall thickening (nodular vs. diffuse), occurrence of hydronephrosis or pyelonephritis were reviewed. We analyzed sonographic appearance and clinical manifestations of HC. HC occurred within 4 months after HSCT/BMT. 27 patients (87.0 %) were positive for BK viruria and 24 patients (77.4 %) took bladder-toxic agents. On sonography, nodular type bladder wall thickening was more frequent (54.8 %), and BWT was thicker in this group (p = 0.003). There was a positive correlation between the BWT on initial sonography and duration of cystitis (r (2) = 0.340). 
Hydronephrosis developed in 25.8% of patients with HC, and as HC persisted longer, hydronephrosis occurred more frequently (p = 0.004). In patients with HC after HSCT/BMT, the BWT on initial sonography correlates well with the duration of cystitis. A longer duration of HC increases the risk of hydronephrosis. abstract_id: PUBMED:27091913 Value of Multimode Sonography for Assessment of Pelvic Lipomatosis Compared With Computed Tomography. Objectives: To assess imaging features of pelvic lipomatosis with a multimode sonographic technique, including 2-dimensional (2D) grayscale sonography, 3-dimensional (3D) sonography, and transrectal sonography, and compare its diagnostic features with those of computed tomography (CT). Methods: In this study, 7 patients with different clinical manifestations, in whom pelvic lipomatosis was incidentally discovered by 2D sonography, were followed by 3D and transrectal sonography before CT was performed for comparison. The urinary tract morphologic characteristics, bladder shape, and amount and distribution of perivesical or perirectal fatty tissue in all 7 patients were evaluated by both imaging modalities. Ten healthy participants were recruited and imaged as a control group. Results: The following sonographic features were shown in all 7 patients: (1) bilateral hydroureters and hydronephrosis on 2D sonography; (2) bladder shifting anteriorly and superiorly on 2D sonography; (3) nonvisualization of the prostate through the bladder window on transabdominal scanning and nonvisualization of the bladder when the prostate was scanned by transrectal sonography; (4) bladder elongation in the craniocaudal dimension, which appeared as a gourd or pear shape on 3D sonography; and (5) excessive fat accumulation between the prostate and rectum as well as in perivesical or perirectal regions on 2D and transrectal sonography. Computed tomography in these patients confirmed the sonographic findings, with evidence of hydronephrosis, bladder location shift and elongation, and excessive fatty tissue with extrinsic compression. Conclusions: This preliminary study has shown that the application of a multimode sonographic technique (ie, 2D, 3D, and transrectal sonography) can provide unique evidence and imaging features of pelvic lipomatosis, which are comparable with CT for making a suggestive diagnosis of pelvic lipomatosis. Thus, multimode sonography may be the modality of choice for assessment of patients with a suspicion of lipomatosis.
The quality of the images obtained laparoscopically is superior to that of percutaneous or transabdominal images, because artifacts are reduced and the contact surface of the transducer is placed directly over the study area. Laparoscopic sonography proved highly effective for studying renal and ureteral disorders prior to minimally invasive surgery. abstract_id: PUBMED:20876900 Acute abdomen in pregnancy: role of sonography. Objective: The purpose of this presentation is to review the role of sonography in evaluation of acute abdomen during pregnancy. Methods: Illustrative cases were collected from gravid patients who presented with signs and symptoms suspicious for acute abdomen and subsequently underwent sonography. Results: This presentation shows sonographic findings of various maternal complications that can present with acute abdominal pain in pregnant patients. Conclusions: Sonography remains the first line of imaging in pregnant patients presenting with acute abdomen. Patient triage or additional imaging may be obtained on the basis of the sonographic findings. abstract_id: PUBMED:3550130 Lupus nephritis. New sonographic findings. The value of sonography in the evaluation of patients with lupus nephritis has been limited to determination of location and size of the kidneys and the exclusion of hydronephrosis; quantitative estimation of the severity of renal compromise by sonographic appearance is less accurate than laboratory tests. Similarly, the lack of specificity of the sonographic changes in most nephritides makes sonography a poor predictor of the underlying etiology. We report a previously undescribed sonographic pattern first recognized in two cases of proven lupus nephritis. We successfully used this pattern to predict the presence of "silent" lupus nephritis in a previously undiagnosed case of systemic lupus erythematosus. abstract_id: PUBMED:12008814 Renal sonography in the intensive care unit: when is it necessary? Objective: To evaluate the efficacy of renal sonography performed in intensive care units on patients with the diagnosis of acute or acute-on-chronic renal failure. Methods: We reviewed all renal sonograms performed in our institution during 1 year on critically ill patients for evaluation of renal failure. Renal failure was defined as a serum creatinine level greater than 1.5 mg/dL or an increase of greater than 20% from the baseline creatinine level. Exclusion criteria included patient age younger than 18 years and signs or symptoms of obstructive uropathy. Using the electronic medical record, we recorded patient age, sex, blood urea nitrogen level, serum creatinine level, blood urea nitrogen-creatinine ratio, and clinical indication for intensive care unit admission. Sonographic reports were reviewed for the presence or absence of hydronephrosis. The total cost of these examinations was estimated with the use of Medicare reimbursement rates for 2000. Results: One hundred five renal sonographic examinations were performed on 104 patients meeting all inclusion criteria. Only 1 study had positive results for hydronephrosis, which was graded as mild. Incidental findings not immediately affecting patient care and including ascites and simple renal cysts were identified in 91 patients. The estimated total cost of the examinations was $13,350.75. Conclusions: In critically ill patients with acute renal failure and no physical findings suggesting obstructive uropathy, renal sonography to evaluate for hydronephrosis is probably not indicated.
This holds true regardless of patient age, sex, medical or surgical disposition, and blood urea nitrogen-creatinine ratio. abstract_id: PUBMED:24154905 Focused renal sonography performed and interpreted by internal medicine residents. Objectives: Intensivist-performed focused sonography, including renal sonography, is becoming accepted practice. Whether internal medicine residents can be trained to accurately rule out renal obstruction and identify sonographic findings of chronic kidney disease is unknown. The purpose of this study was to test the ability of residents to evaluate for this specific constellation of findings. Methods: Internal medicine residents were trained in a 5-hour module on focused renal sonography evaluating renal length, echogenicity, hydronephrosis, and cysts on a convenience sample of medical ward, intermediate care, and medical intensive care unit patients. All patients underwent comprehensive sonography within 24 hours. The primary outcome was represented by the Fleiss κ statistic, which indicated the degree of interobserver agreement between residents and radiologists. Sensitivity, specificity, and positive and negative predictive values were calculated using the comprehensive radiologist-read examination as the reference. Results: Seventeen internal medicine residents imaged 125 kidneys on 66 patients. The average number of studies performed was 7.3 (SD, 6.6). Residents demonstrated excellent agreement with radiologists for hydronephrosis (κ = 0.73; P < .001; SE, 0.15; sensitivity, 94%; specificity, 93%), moderate agreement for echogenic kidneys (κ = 0.43; P < .001; SE, 0.13; sensitivity, 40%; specificity, 98%), and substantial agreement for renal cysts (κ = 0.61; P < .001; SE, 0.12; sensitivity, 60%; specificity, 96%). Residents showed sensitivity of 100% and specificity of 88% for identification of atrophic kidneys, defined as length less than 8 cm. Conclusions: After a 5-hour training course, medical residents accurately identified hydronephrosis and key sonographic findings of chronic kidney disease in a cohort of medical patients. Screening for hydronephrosis and renal atrophy can be performed by medical residents after adequate training. abstract_id: PUBMED:6950453 Abdominal manifestations of pediatric leukemias: sonographic assessment. The abdominal sonograms of 64 leukemic children were reviewed and correlated with clinical, hematological, and, when available, autopsy findings. Sonographic abnormalities including enlargement of the liver, spleen, pancreas, and kidneys, with or without alteration of parenchymal echo architecture, hydronephrosis, lymphadenopathy, and ascites were encountered in 70% of the patients. Ultrasound proved useful in detecting occult visceral leukemia and relapse, monitoring tumor response to chemotherapy, and assessing the complications of chemotherapy. Ultrasound is recommended for the routine evaluation and follow-up of the pediatric leukemic patient. abstract_id: PUBMED:653904 Renal sonography--errors, pitfalls, and their sources (author's transl) Retrospective evaluation of sonographic diagnoses showed 14 false interpretations, proven by operation or renal angiogram. A classification of errors and pitfalls in renal sonography is attempted and the place of this method in the diagnostic plan in kidney diseases is discussed. Answer: The study titled "Sonographic evaluation of hydronephrosis in the pediatric population: is well-tempered sonography necessary?" 
aimed to assess the importance of hydration status on sonographic measurements of the anteroposterior diameter and its effect on Society for Fetal Urology (SFU) hydronephrosis grading. The study included children aged 6 weeks to 16 years with unilateral SFU grade 3 or 4 hydronephrosis who underwent prehydration and posthydration renal sonography, followed by hydrated diuretic renal scintigraphy, or "well-tempered" renography. The renal sonograms were reviewed by a blinded pediatric radiologist and pediatric urologist. The results showed that after hydration, there was no SFU grade change in 67% and 78% of kidneys according to the pediatric urologist and pediatric radiologist, respectively. In kidneys that did change grade, the posthydration grade was more likely to be higher, but most changes were only by 1 SFU grade. The mean anteroposterior diameter increased significantly between prehydration and posthydration sonography for both hydronephrotic and control kidneys. However, this increase did not correlate with increased SFU grades. The study concluded that while hydration does have a substantial effect on the anteroposterior diameter, it does not correlate with a substantial effect on the SFU grade, suggesting that well-tempered sonography seems unnecessary (PUBMED:25792581).
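The agreement analysis in PUBMED:24154905 used the Fleiss κ statistic; for a single resident-radiologist pair, Cohen's κ is the usual two-rater analogue. A minimal sketch with hypothetical labels (not the study's data):

```python
# Hedged sketch: two-rater agreement on hydronephrosis (1 = present, 0 = absent),
# analogous to the kappa analysis in PUBMED:24154905. Labels are hypothetical.
from sklearn.metrics import cohen_kappa_score

resident =    [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
radiologist = [1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0]
kappa = cohen_kappa_score(resident, radiologist)
print(f"kappa = {kappa:.2f}")  # ~0.82 here; values above 0.6 are usually read as substantial
```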
Instruction: Can pain and function be distinguished in the Oxford Knee Score in a meaningful way? Abstracts: abstract_id: PUBMED:23526094 Can pain and function be distinguished in the Oxford Knee Score in a meaningful way? An exploratory and confirmatory factor analysis. Purpose: The purpose of this paper was to examine if pain and functioning can be distinguished in the Oxford Knee Score (OKS) in a meaningful way. This was done by (1) conducting exploratory factor analysis to explore the factorial structure of the OKS and (2) conducting confirmatory factor analysis to examine whether a two-factor solution is superior to a one-factor solution. Methods: Secondary data analysis of four independent datasets containing OKS scores on 161,973 patients was performed. Four independent datasets contained data on: (1) 156,788 patients from the NHS HES/PROMS dataset, (2) 2,405 consecutive patients from the South West London Elective Operating Centre, (3) 2,353 patients enrolled in the Knee Arthroplasty Trial and (4) 427 consecutive patients listed for knee replacement surgery at the Nuffield Orthopaedic Centre in Oxford. Results: Factor extraction methods suggested that, depending on the method employed, both one- and two-factor solutions are meaningful. Overall, and in each dataset, some cross-loading occurred and item loadings were consistent across two factors. On confirmatory factor analysis, both one- and two-factor models had acceptable fit indices. This allowed the creation of the 'OKS pain component' and the 'OKS functional component' subscales. Conclusions: Factor analysis confirmed the original conceptual basis of the OKS but offered an option to perform additional analyses using pain and functional subscales. Further research should focus on providing further evidence on construct validity and responsiveness of the newly derived subscales. abstract_id: PUBMED:35313190 The potential of dividing the oxford knee score into subscales for predicting clinically meaningful improvements in pain and function of patients undergoing total knee arthroplasty. Background: Subdividing the Oxford Knee Score (OKS) into a pain component scale (OKS-PCS) and a function component scale (OKS-FCS) for predicting clinically meaningful improvements may provide a basis for identifying patients in need of enhanced support from health care professionals to manage pain and functional challenges following total knee arthroplasty. Aim: To assess the potential of dividing the OKS into subscales for predicting clinically meaningful improvements in pre- and postoperative pain and function by comparing two different versions of pain and function subscales derived from the OKS. Methods: This retrospective observational cohort study included 201 patients undergoing total knee arthroplasty. Multiple logistic regression analysis was applied for binary classification of whether patients achieved clinically meaningful improvements in pain and function. Results: The best overall version for predicting clinically meaningful improvements had an area under the receiver operating characteristic curve of 0.79 for both pain and function, whereas Nagelkerke's R2 was 0.322 and 0.334, respectively. Conclusion: The findings indicate that it is reasonable to subdivide the OKS into subscales for predicting clinically meaningful improvements in pain and function. However, more studies are needed to compare various types of classification algorithms in larger patient populations.
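The classification analysis in PUBMED:35313190 (multiple logistic regression scored by area under the ROC curve) can be sketched as follows. Everything here is synthetic, and the predictor names are assumptions rather than the study's actual variable set:

```python
# Minimal sketch of the analysis type in PUBMED:35313190: logistic regression
# classifying whether a patient reaches a clinically meaningful improvement,
# scored by ROC AUC. Data are synthetic; predictor names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 201                              # cohort size reported in the abstract
pre_pain = rng.normal(12, 4, n)      # hypothetical preoperative OKS-PCS
pre_func = rng.normal(14, 5, n)      # hypothetical preoperative OKS-FCS
age = rng.normal(68, 9, n)
X = np.column_stack([pre_pain, pre_func, age])
# Assumption: worse (lower) preoperative scores leave more room for improvement.
logit = -0.15 * pre_pain - 0.10 * pre_func + 0.02 * (age - 68) + 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```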
abstract_id: PUBMED:28506300 Cross-cultural adaptation and psychometric analysis of the Arabic version of the oxford knee score in adult male with knee osteoarthritis. Background: A variety of self-assessment questionnaires are used for the evaluation of pain, functional disability, and health related quality of life in individuals with knee osteoarthritis (OA). The present study intended to adapt and translate the Oxford knee score into Arabic and investigate its psychometric properties in adult males with knee OA. Methods: Ninety-seven adult males (mean age 57.55 ± 11.49 years) with knee OA participated. Patients were requested to complete the adapted Arabic version of the Oxford knee score (OKS-Ar), reduced "Western Ontario and McMaster Universities Index (WOMAC)", and the Visual analogue scale (VAS). Patients were requested to complete a 2nd form of OKS-Ar at least 1 week apart to assess the reproducibility of the score. The OKS was adapted and translated into Arabic by two independent Arabic native speakers (one rehabilitation professional having experience of knee OA patients and another one a trained translator) according to the international guidelines. Results: All the participants completed the 2nd form of OKS-Ar (Response rate 100%). Reliability and internal consistency were high, with an ICC of 0.97 and a Cronbach's alpha coefficient of 0.987, respectively. A significant relationship between the OKS-Ar and the WOMAC and VAS scores confirmed the construct validity (p < 0.001). The standard error of measurement (SEM) and the minimum detectable change (MDC) were 2.2 and 6.2, respectively. Conclusions: The adapted Arabic version of the OKS demonstrated acceptable psychometric properties, including reliability, internal consistency, and validity. The present study indicates that the OKS-Ar is a suitable questionnaire to measure pain and physical function in Arabic-speaking adult male patients with knee OA. abstract_id: PUBMED:25820654 High correlation of the Oxford Knee Score with postoperative pain, but not with performance-based functioning. Purpose: The Oxford Knee Score (OKS) is a widely known patient-related outcome measure (PROM) to determine pain and knee functioning before and after total knee arthroplasty (TKA). Self-reported function is mainly influenced by change in pain; therefore, it was hypothesized that the OKS correlates more with pain than with performance-based functioning. Methods: In a prospective cohort of 88 patients, who had a cementless mobile-bearing TKA, included in a randomized clinical trial, the correlation between the overall OKS, and its subscales for pain (PCS) and function (FCS), with performance-based functioning using the DynaPort® Knee Score (DKS), visual analogue scale score for pain (VAS) and the Knee Society Score (KSS) was evaluated. All scores were measured preoperatively, 6 months and 1 year after surgery. Overall change in outcome over time was analysed until 5 years after surgery. Results: All scores improved over time. The DKS was influenced by sex, preoperative BMI and age. The internal consistency of the OKS PCS increased over time, whereas the OKS FCS remained the same. The mean postoperative OKS FCS showed moderate correlation with the DKS (r = 0.65, p < 0.001), and the mean postoperative OKS and OKS PCS showed high correlation with the VAS (r = -0.79 and r = -0.82, respectively, p < 0.001).
The mean postoperative KSS showed high correlations with the OKS (r = 0.80, p < 0.001), the OKS PCS (r = 0.72, p < 0.001) and OKS FCS (r = 0.74, p < 0.001). Conclusion: The postoperative OKS and the OKS PCS showed high correlation with pain, but only the postoperative OKS FCS was well correlated with performance-based functioning. This suggests that the OKS is more related to pain and tells us less about postoperative functioning. This is important when the OKS is used as a PROM to evaluate the quality of orthopaedic care of patients with TKA. Level Of Evidence: III. abstract_id: PUBMED:36609249 Knee function, quality of life, pain, and living conditions after distal femoral resection knee arthroplasty for non-tumor indications. Background: Distal femoral resection knee arthroplasty is a limb salvage procedure. The impact of distal femoral resection arthroplasty on patient function and health status is unknown. The aim of this study was to report knee function, quality of life, knee pain, and living conditions after distal femoral resection knee arthroplasty for non-tumor indications. Methods: Of 52 patients (52 knees) undergoing distal femoral resection knee arthroplasty in a single institution between 2012 and 2021, 22 were excluded as 3 patients had ≤90 days follow-up, 6 had died, and 13 declined or were unable to participate for unrelated reasons. Thus, 30 patients were included and interviewed by telephone in March 2021 (mean follow-up 3.5 years after surgery). Patients completed the Oxford Knee Score (0-48, 48 best), EQ-5D-5L, and the Copenhagen Knee ROM, and information on pain and living conditions was obtained. Results: The mean age was 67.9 years (SD 13.6), and 21 (70%) were female. Mean total Oxford Knee Score was 29.9 (SD 10.5), mean Copenhagen Knee ROM flexion was 116° (SD 21.6), and mean extension was -2° (SD 10.1). Mobility aids were used by 18 (60%) patients, i.e. a cane (30%), walker (26.7%) or wheelchair (3.3%). Mean EQ-5D index score was 0.70 (SD 0.22) and mean EQ-5D VAS score was 55.4 (SD 23.9). Nine (30%) patients used paracetamol or NSAID and 2 (6.7%) used opioids for knee pain. Mean VAS knee pain score was 1.30 (SD 2.2) at rest and 2.8 (SD 3.1) when walking. Most (90%) patients lived in their own home, with only 3 patients in nursing homes. Two-thirds (66.7%) required no home care, 5 (16.6%) received home care 1-2 times over 2 weeks, and 5 (16.6%) every day. Conclusion: Distal femoral resection knee arthroplasty appears to be a viable treatment option for non-tumor indications. Acceptable patient outcomes were achieved in terms of functional status and quality of life, especially considering treatment alternatives such as femoral amputation. abstract_id: PUBMED:28029533 Unexplained Pain Post Total Knee Arthroplasty With an Oxford Knee Score ≥20 at 6 Months Predicts Good 2-Year Outcome. Background: Total knee arthroplasty (TKA) is an effective procedure for end-stage osteoarthritis of the knee. Some patients experience persistent unexplained pain post-TKA despite normal investigations. The purpose of this study is to identify which of these patients are likely to improve without any surgical intervention. We hypothesize that patients with unexplained persistent pain and a poor 6-month Oxford knee score (OKS) post-TKA can improve at 2 years. Methods: Prospectively collected data for all primary unilateral TKA performed from June 2004 to January 2012 were analyzed to identify which patients with unexplained pain at 6 months will improve at 2 years.
Patients were included if they had persistent pain and an OKS <27 at 6 months; normal radiological and clinical investigations; no infection identified; surgery performed for primary osteoarthritis. Two hundred sixty patients with OKS <27 at 6 months were analyzed. These patients were subdivided into 2 groups (group 1: 6-month OKS 20-26, group 2: 6-month OKS less than 20). Results: One hundred ninety-one out of 208 (92%) patients in group 1 experienced improvement in pain and outcome at 2 years. Most of the group 1 patients attained a minimal clinically important difference in OKS of at least 5 (P < .001) at 2 years. Group 1 patients also reported better Knee Society Functional Score and Short Form Survey 36 mean scores at 2 years. Conclusion: In patients with unexplained pain, an OKS of at least 20 at 6 months predicts good functional outcome at 2 years. abstract_id: PUBMED:25368370 Can pain and function be distinguished in the Oxford Hip Score in a meaningful way? An exploratory and confirmatory factor analysis. Objective: The objective of this study was to explore dimensionality of the Oxford Hip Score (OHS) and examine whether self-reported pain and functioning can be distinguished in the form of subscales. Methods: This was a secondary data analysis of the UK NHS hospital episode statistics/patient-reported outcome measures dataset containing pre-operative OHS scores on 97,487 patients who were undergoing hip replacement surgery. Results: The proposed number of factors to extract depended on the method of extraction employed. Velicer's Minimum Average Partial test and the Parallel Analysis suggested one factor, whereas Cattell's scree test and the Kaiser-over-1 rule suggested two factors. Exploratory factor analysis demonstrated that the two-factor OHS had most of the items saliently loading either of the two factors. These factors were named 'Pain' and 'Function' and their respective subscales were created. There was some cross-loading of items: 8 (pain on standing up from a chair) and 11 (pain during work). These items were assigned to the 'Pain' subscale. The final 'Pain' subscale consisted of items 1, 8, 9, 10, 11 and 12. The 'Function' subscale consisted of items 2, 3, 4, 5, 6 and 7, with the recommended scoring of the subscales being from 0 (worst) to 100 (best). Cronbach's alpha was 0.855 for the 'Pain' subscale and 0.861 for the 'Function' subscale. A confirmatory factor analysis demonstrated that the two-factor model of the OHS had a better fit. However, none of the one-factor or two-factor models was rejected. Conclusion: Factor analyses demonstrated that, in addition to current usage as a single summary scale, separate information on pain and self-reported function can be extracted from the OHS in a meaningful way in the form of subscales. Cite this article: Bone Joint Res 2014;3:305-9. abstract_id: PUBMED:36732785 Mapping of disease-specific Oxford Knee Score onto EQ-5D-5L utility index in knee osteoarthritis. Background: EQ5D is a generic measure of health. It provides a single index value for health status that can be used in the clinical and economic evaluation of healthcare. Oxford Knee Score (OKS) is a joint-specific outcome measure tool designed to assess symptoms and function in osteoarthritis patients after joint replacement surgery. Though widely used, it has the disadvantage of lacking a health index value.
To fill the gap between functional and generic questionnaires with economic value, we linked generic EQ-5D-5L to the specific OKS to give a single index value for health status in KOA patients. Questions/purposes: Developing and evaluating an algorithm to estimate EuroQoL generic health utility scores (EQ-5D-5L) from the disease-specific OKS using data from patients with knee osteoarthritis (KO). Patients And Methods: This is a cross-sectional study of 571 patients with KO. We used four distinct mapping algorithms: Cumulative Probability for Ordinal Data, Penalized Ordinal Regression, CART (Classification and Regression Trees), and Ordinal random forest. We compared the resultant models' degrees of accuracy. Results: Mobility was best predicted by penalized regression with pre-processed predictors, usual activities by random forest, pain/discomfort by cumulative probability with pre-processed predictors, self-care by random forest with RFE (recursive feature elimination) predictors, and anxiety/depression by CART with RFE predictors. Model accuracy was lowest with anxiety/depression and highest with mobility and usual activities. Using available country value sets, the average MAE was 0.098 ± 0.022, ranging from 0.063 to 0.142; and the average MSE was 0.020 ± 0.008 ranging from 0.008 to 0.042. Conclusions: The current study derived accurate mapping techniques from OKS to the domains of EQ-5D-5L, allowing for the computation of QALYs in economic evaluations. A machine learning-based strategy offers a viable mapping alternative that merits further exploration. abstract_id: PUBMED:37592170 Not all questions are created equal: the weight of the Oxford Knee Scores questions in a multicentric validation study. Background: The Oxford Knee Score (OKS) has been designed for patients with knee osteoarthritis and is in widespread use. It has 12 questions, with each question having the same weight for the overall score. Some authors have observed a significant ceiling effect, especially when distinguishing slight postoperative differences. We hypothesized that each question's weight will depend significantly on the patient's sociodemographic data and lifestyle. Methods: In this international multicentric prospective study, we included patients attending a specialist outpatient knee clinic. Each patient filled out 3 questionnaires: (a) demographic data and data pertaining to the OKS, (b) the standard OKS, and (c) the patient rated the importance of each question on a 5-point Likert scale (G OKS). Linear regression models were used for the analysis. Results: In total 203 patients (106 female and 97 male) with a mean age of 64.5 (±12.7) years and a mean body mass index (BMI) of 29.34 (±5.45) kg/m2 were included. The most important questions for the patients were the questions for pain, washing, night pain, stability, and walking stairs with a median of 5. In the regression models, age, gender, and driving ability were the most important factors for the weight of each question. Conclusion: The questions in the OKS differ significantly in weight for each patient, based on sociodemographic data, such as age, self-use of a car, and employment. With these differences, the Oxford Knee Score might be limited as an outcome measure. Adjustment of the OKS that incorporates the demographic differences into the final score might be useful if the ceiling effect is to be mitigated. Level Of Evidence: Level II prospective prognostic study.
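As a rough illustration of the response-mapping idea in PUBMED:36732785, the sketch below fits one random-forest classifier per EQ-5D-5L dimension from the 12 OKS item responses and converts the predicted levels to a utility with a flat, made-up decrement. The real study compared four algorithms and used published country value sets; the data, the decrement, and all names here are illustrative assumptions:

```python
# Simplified sketch of response mapping from OKS items to EQ-5D-5L dimensions,
# in the spirit of PUBMED:36732785 (one classifier per dimension). Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n, n_items = 571, 12
oks_items = rng.integers(0, 5, size=(n, n_items))        # 12 OKS items, each 0-4
dims = ["mobility", "self_care", "usual_act", "pain", "anxiety"]
# Fake EQ-5D-5L levels (1-5), loosely driven by the OKS total, for demonstration.
levels = {d: np.clip(5 - oks_items.sum(1) // 12 + rng.integers(-1, 2, n), 1, 5)
          for d in dims}

models = {d: RandomForestClassifier(random_state=0).fit(oks_items, levels[d])
          for d in dims}

def predicted_utility(items, decrement=0.07):
    """Toy utility: 1 minus a flat decrement per level above 1 (not a real value set)."""
    pred = np.column_stack([models[d].predict(items) for d in dims])
    return 1.0 - decrement * (pred - 1).sum(axis=1)

print(predicted_utility(oks_items[:5]))
```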
abstract_id: PUBMED:36474993 Oxford knee score 1 year after TKR for osteoarthritis with reference to a normative population: What can patients expect? Objectives: Total knee replacement (TKR) procedure is commonly carried out in patients with advanced osteoarthritis to reduce pain and increase mobility, with on average 84% of patients rating the outcome satisfactory, but some (44% by some estimates) continue to experience debilitating pain. The study aimed to investigate factors affecting pain and function outcomes (using Oxford Knee Score, OKS) one year after TKR, with normative comparison to a reference population. Design: We recruited TKR patients from one hospital (Nottinghamshire, UK); collected pre- and post-operative OKS; graded radiographs for severity of osteoarthritis (K-L grade) in a sub-group. We also collected OKS by postal survey from the local area, calculated age and sex specific normative scores and z-scores of post-operative OKS (Z-OKS). The associations of K-L grade, pre-operative OKS, age, and sex with change in OKS and Z-OKS were analysed. Results: There were 536 TKR cases, 91 in the radiographic sub-group and 360 people in the reference cohort. Post-operative Z-OKS was associated with K-L grade (β = 0.368; p < 0.001). Change in OKS was associated with K-L grade (β = 0.247; p = 0.003); pre-operative OKS (β = -0.449; p < 0.001); age (β = 0.276; p = 0.001); and female sex was protective (β = -0.213; p = 0.008). On average TKR patients returned to 74% of their normative age and sex adjusted OKS, with younger women achieving the worst outcomes. More severe radiographic osteoarthritis predicted greater improvement and better post-operative outcome when compared to the normative population. Conclusion: This study identified factors and provided normative OKS data intended to guide clinicians in counselling patients regarding likely surgical outcomes. This could help manage patients' expectations, aid decision making and increase post-surgery satisfaction rate. Answer: Yes, pain and function can be distinguished in the Oxford Knee Score (OKS) in a meaningful way. This was confirmed through exploratory and confirmatory factor analysis, which suggested that both one- and two-factor solutions are meaningful, with some cross-loading occurring and item loadings being consistent across two factors. The creation of the 'OKS pain component' and the 'OKS functional component' subscales was possible, and both one- and two-factor models had acceptable fit indices (PUBMED:23526094). Further research is encouraged to provide more evidence on the construct validity and responsiveness of these newly derived subscales.
However, it is also noted that while the OKS and its pain subscale (OKS-PCS) show high correlation with pain, the function subscale (OKS-FCS) is better correlated with performance-based functioning, suggesting that the OKS is more related to pain and may provide less information on postoperative functioning (PUBMED:25820654). In conclusion, the evidence supports that pain and function can indeed be distinguished in the OKS in a meaningful way, allowing for more nuanced assessments of patient outcomes in knee-related conditions and interventions.
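The subscale construction reported for the Oxford Hip Score in PUBMED:25368370 ('Pain': items 1, 8, 9, 10, 11 and 12; 'Function': items 2-7; each subscale rescored from 0, worst, to 100, best) translates directly into code. A minimal sketch, assuming each item is scored 0 (worst) to 4 (best) as in the Oxford instruments:

```python
# Sketch of the Pain/Function subscales described for the Oxford Hip Score in
# PUBMED:25368370. Item scores 0-4 per question are an assumption of convention.
PAIN_ITEMS = [1, 8, 9, 10, 11, 12]
FUNCTION_ITEMS = [2, 3, 4, 5, 6, 7]

def subscale(responses, items):
    """responses: dict mapping item number -> score 0 (worst) to 4 (best)."""
    raw = sum(responses[i] for i in items)
    return 100.0 * raw / (4 * len(items))   # linear rescale to 0-100

patient = {i: 3 for i in range(1, 13)}      # hypothetical: "3" on every item
print(subscale(patient, PAIN_ITEMS), subscale(patient, FUNCTION_ITEMS))  # 75.0 75.0
```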
Instruction: Can ultrasound predict histological findings in regenerated cartilage? Abstracts: abstract_id: PUBMED:14585924 Can ultrasound predict histological findings in regenerated cartilage? Objective: To evaluate regenerated articular cartilage quantitatively by introducing an ultrasonic probe into the knee joint under arthroscopy and analysing the A-mode echogram by means of wavelet transformation. Methods: Three experimental rabbit models (spontaneous repair model, large cartilage defect model, treatment model) were examined using our ultrasonic evaluation system and a histological grading scale. From the resulting wavelet map, the percentage of maximum magnitude was selected as the quantitative index of the ultrasonic evaluation system. Results: The percentage maximum magnitude in the spontaneous repair model was 61.1%, that in the large defect model was 29.8% and that in the treatment model was 36.3%. There was modest correlation between the percentage maximum magnitude and the histological grading scale (r = -0.594). Conclusion: Our findings indicate that ultrasound analysis can predict the microstructure of regenerated cartilage. abstract_id: PUBMED:29881210 How precisely does ultrasonographic evaluation reflect the histological status of the articular cartilage of the knee joint? The thickness and the grade of the articular cartilages of the knee of 34 patients who underwent total knee arthroplasty were evaluated by ultrasound (US) and by histology. The US grade correlated with the histological grade and the thickness of the articular cartilage measured by US. The thickness measured by US was significantly correlated with that measured by histology for the medial condyle. The US thickness was significantly less than the histological thickness for thicker articular cartilages. US grading and the thickness of the articular cartilages evaluated by US are sufficiently reliable to indicate their histological status. abstract_id: PUBMED:17255687 A new method of costal cartilage harvest for total auricular reconstruction: part II. Evaluation and analysis of the regenerated costal cartilage. Background: The authors established the hypothesis that chest wall deformity can be absolutely avoided by leaving the perichondrium completely intact at the donor site for cartilage regeneration. As described in Part I, on the new method of costal cartilage harvest, the authors were able to confirm regeneration of cartilage visually and with palpation at 6 months postoperatively. Methods: The regenerated cartilage of four patients at 6, 8, 10, and 12 months, respectively, after the first-stage operation was harvested for morphological and histological evaluation and analyses during the second-stage operation. Results: Morphological findings revealed the regenerated cartilage to be cylindrical in shape, and histological findings confirmed transformation of regenerated cartilage to normal costal (hyaline) cartilage with respect to time. Conclusions: The authors' findings revealed that not only is postoperative chest wall deformity avoided with the new method of costal cartilage harvest but the major scientific merit lies in the fact that the regenerated cartilage can be used as demonstrated during the second-stage operation. This gives hope to bilateral microtia patients requiring secondary auricular reconstruction, which was thought to be impossible because of the lack of costal cartilage for fabrication of a three-dimensional costal cartilage framework.
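The %MM index used in PUBMED:14585924 (and again in PUBMED:15199220 below) is a ratio of wavelet-transform magnitudes. The abstracts do not specify the wavelet or scales, so the sketch below is only a schematic reconstruction on synthetic A-mode echoes, using PyWavelets' continuous wavelet transform with an assumed Morlet wavelet:

```python
# Schematic sketch of a %MM-style index: wavelet-transform an A-mode echo and
# divide its maximum magnitude by that of an intact-cartilage reference echo.
# Signals, wavelet, and scales are assumptions, not the papers' protocol.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
pulse = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 60 * t)
intact_echo = pulse                                   # strong surface reflection
repair_echo = 0.6 * pulse + 0.02 * np.random.default_rng(3).normal(size=t.size)

def max_magnitude(signal, scales=np.arange(1, 64), wavelet="morl"):
    coeffs, _ = pywt.cwt(signal, scales, wavelet)     # continuous wavelet transform
    return np.abs(coeffs).max()

pct_mm = 100.0 * max_magnitude(repair_echo) / max_magnitude(intact_echo)
print(f"%MM = {pct_mm:.1f}")  # ~60%; PUBMED:15199220 reads >60% as hyaline-like repair
```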
abstract_id: PUBMED:37655826 Decreased Elastic Modulus of Knee Articular Cartilage Based on New Macroscopic Methods Accurately Represents Early Histological Findings of Degeneration. Objective: Ex vivo nanoindentation measurement has reported that elastic modulus decreases as cartilage degenerates, but no method has been established to macroscopically evaluate mechanical properties in vivo. The objective of this study was to evaluate the elastic modulus of knee joint cartilage based on macroscopic methods and to compare it with gross and histological findings of degeneration. Design: Osteochondral sections were taken from 50 knees with osteoarthritis (average age, 75 years) undergoing total knee arthroplasty. The elastic modulus of the cartilage was measured with a specialized elasticity tester. Gross findings were recorded as International Cartilage Repair Society (ICRS) grade. Histological findings were graded as Mankin score and microscopic cartilage thickness measurement. Results: In ICRS grades 0 to 2 knees with normal to moderate cartilage abnormalities, the elastic modulus of cartilage decreased significantly as cartilage degeneration progressed. The elastic modulus of cartilage was 12.2 ± 3.8 N/mm for ICRS grade 0, 6.3 ± 2.6 N/mm for ICRS grade 1, and 3.8 ± 2.4 N/mm for ICRS grade 2. Similarly, elastic modulus was correlated with Mankin score (r = -0.51, P < 0.001). Multiple regression analyses showed that increased Mankin score is the most relevant factor associated with decreased elastic modulus of the cartilage (t-value, -4.53; P < 0.001), followed by increased histological thickness of the cartilage (t-value, -3.15; P = 0.002). Conclusions: Mechanical properties of damaged knee cartilage assessed with new macroscopic methods are strongly correlated with histological findings. The method has potential to become a nondestructive diagnostic modality for early cartilage damage in the clinical setting. abstract_id: PUBMED:26392027 The effect of cartilage degeneration on ultrasound speed in human articular cartilage. Objectives: We investigated the effect of cartilage degeneration on ultrasound speed in human articular cartilage in vitro. Methods: Ultrasound speed was calculated by the time-of-flight method for 22 femoral condyle osteochondral blocks obtained from osteoarthritis patients. In parallel, histological evaluation of specimens was performed using the modified Mankin and OARSI scores. Results: The mean ultrasound speed was 1757 ± 109 m/s. Ultrasound speed showed significant negative correlation with OARSI score, and a decreasing tendency with high Mankin scores. Good correlation was found between the optically measured and the calculated cartilage thickness. Conclusion: Our results show that articular cartilage degeneration has relatively little influence on ultrasound speed. In addition, morphological evaluation of articular cartilage using a preset value of ultrasound speed seems to offer relatively accurate results. abstract_id: PUBMED:1096286 Quantitative histological study of the cartilage of the human arthrosic femoral head. In this work, samples of unsplit cartilage taken from 15 arthrosic femoral heads and 26 normal femoral heads from the same topographical areas, and of identical age, were studied by quantitative histological methods.
This study was concerned with the thickness of the calcified and non-calcified cartilage, the cellular density, the average number of nuclei per lacuna, and the average surface area of the lacunae in the different layers of the cartilage. Comparison between the two groups of samples showed that the arthrosic cartilage differed from the normal cartilage by a diminution in the size of the superficial lacunae, a reduction in the number of nuclei per lacuna in the middle layer, a diminution in the cellularity of the deep layer and an augmentation in the layer of calcified cartilage. These modifications suggest that arthrosis of the hip is secondary to diffuse cellular alterations combining diminished metabolic activity and necrosis of the chondrocytes. abstract_id: PUBMED:12203033 Ultrasound detection of trypsin-treated articular cartilage: its association with cartilaginous proteoglycans assessed by histological and biochemical methods. We studied the correlation between histological imaging quantification and the biochemical assessment of proteoglycan (PG) content in articular cartilage in vitro, which served as a basis for the validation of ultrasound detection as a noninvasive tool in the assessment of PG changes in full-thickness articular cartilage. Articular cartilage of 14 intact fresh bovine femoral condyles was used for trypsin digestion. Full-thickness articular cartilage cylinders, 3 mm in diameter, were harvested at time intervals of 0.5, 1, 2, and 3 h after trypsin digestion. Each cartilage cylinder was then cut into two equal parts for either histomorphometric quantification of the PG area fraction stained with Safranine O or conventional biochemical assessment of uronic acid content. In addition, five fresh mature bovine patellae were used for the validation of an ultrasound compression system developed for testing the potential layered biomechanical properties of articular cartilage, including the equilibrium compressive modulus, i.e., the slope of the linear regression of the equilibrium stress-strain curve. Results showed that PG content in the articular cartilage was significantly decreased with increasing time of trypsin digestion, both histologically and biochemically, with a significant correlation of r = 0.502 (P < 0.001). Ultrasound measurements demonstrated differences in the equilibrium compressive moduli of the digested zone, the undigested zone, and the entire articular cartilage layer, as well as a characteristically large ultrasound reflection signal detected at the interface of the trypsin digestion front of articular cartilage. The results of this study suggested that the histomorphometric quantification of PG content could be used to reflect not only PG quantity but also its spatial distribution; also, the ultrasound compression system might have potential for the non-invasive detection of pathological changes in articular cartilage. abstract_id: PUBMED:36355296 Histological Preparation and Evaluation of Cartilage Specimens. In this chapter, an introduction is given to histological techniques for research related to hyaline cartilage and subchondral bone. Emphasis is placed on the importance of investigating cartilage and bone as a unit, which includes the transition zone of the calcified cartilage and tidemark.
Reasons for the appropriate selection of histological methods are presented, such as when to use (decalcified) specimens for routine paraffin embedding including immunohistology, cryosections of cartilage alone, or non-decalcified specimens for embedding in polymethylmethacrylate with or without additional biomaterials. Appropriate staining methods are also outlined. In addition to detailed laboratory protocols for the different embedding and staining methods, including open discussion of difficulties related to the various techniques, practical instructions for state-of-the-art evaluation methods and their strengths and weaknesses are given. Sample figures for scoring methods are included. abstract_id: PUBMED:15199220 Which cartilage is regenerated, hyaline cartilage or fibrocartilage? Non-invasive ultrasonic evaluation of tissue-engineered cartilage. Objective: To investigate ultrasonic evaluation methods for detecting whether the repair tissue is hyaline cartilage or fibrocartilage in new cartilage regeneration therapy. Methods: We examined four experimental rabbit models: a spontaneous repair model (group S), a large cartilage defect model (group L), a periosteal graft model (group P) and a tissue-engineered cartilage regeneration model (group T). From the resulting ultrasonic evaluation, we used %MM (the maximum magnitude of the measurement area divided by that of the intact cartilage) as a quantitative index of cartilage regeneration. The results of the ultrasonic evaluation were compared with the histological findings and histological score. Results: The %MM values were 61.1 +/- 16.5% in group S, 29.8 +/- 15.1% in group L, 36.3 +/- 18.3% in group P and 76.5 +/- 18.7% in group T. The results showed a strong similarity to the histological scoring. Conclusion: The ultrasonic examination showed that all the hyaline-like cartilage in groups S and T had a high %MM (more than 60%). Therefore, we could define the borderline between the two types of regenerated cartilage by the %MM. abstract_id: PUBMED:35224298 Dominant role of in situ native cartilage niche for determining the cartilage type regenerated by BMSCs. Tissue-engineered cartilage regeneration by bone marrow stromal cells (BMSCs) is considered an ideal method. However, how to regulate BMSCs to regenerate specific types of cartilage remains unclear, which significantly limits its clinical translation and leads to suboptimal clinical effects. Herein, we systematically explored the role of native ear and articular cartilage niches on the differentiation fate of BMSCs and the type of regenerated cartilage. First, we prepared two types of acellular cartilage sheets (ACSs) and two types of chondrocytes. Then green fluorescent protein-labeled BMSCs were seeded on two types of ACSs with or without corresponding types of chondrocytes using a sandwich model, and the constructs were directed or cross-implanted into native cartilage niches. After one year of in vivo culture, cell tracking and histological results showed that the native cartilage niches were capable of regulating BMSC regeneration into specific types of cartilage that were consistent with the cartilage types of the implanted sites. Furthermore, even when the type of niche formed by ACSs or the biomimetic cartilage niche constructed by specific types of ACSs and specific types of chondrocytes did not match the native cartilage niche, the native cartilage niche continued to determine the type of cartilage regenerated by implanted BMSCs and chondrocytes.
All our results provide sufficient evidence for the regeneration of specific types of cartilage using cells with chondrogenic potential, such as mesenchymal stem cells and chondrocytes. Answer: Yes, ultrasound can predict histological findings in regenerated cartilage. Studies have shown that ultrasonic evaluation systems can quantitatively assess regenerated articular cartilage and that the results correlate with histological grading scales. For instance, an ultrasonic evaluation system introduced into the knee joint under arthroscopy and analyzed by wavelet transformation showed a modest correlation between the percentage maximum magnitude from the wavelet map and the histological grading scale (PUBMED:14585924). Additionally, ultrasonographic evaluation of the thickness and grade of articular cartilage of the knee correlated with histological grade and thickness, indicating that ultrasound grading and thickness measurements are reliable indicators of histological status (PUBMED:29881210). Furthermore, non-invasive ultrasonic evaluation methods have been investigated to determine whether the repair tissue is hyaline cartilage or fibrocartilage in cartilage regeneration therapy. The results showed a strong similarity to the histological scoring, suggesting that ultrasound can distinguish between the two types of regenerated cartilage (PUBMED:15199220). Moreover, ultrasound speed in human articular cartilage has been found to have a significant negative correlation with the OARSI score and a decreasing tendency with high Mankin scores, which are used for histological evaluation of cartilage degeneration (PUBMED:26392027). In summary, ultrasound has been demonstrated to be a useful tool for predicting the microstructure and histological status of regenerated cartilage, providing a non-invasive means to assess cartilage quality and potentially guide clinical decision-making (PUBMED:14585924, PUBMED:29881210, PUBMED:15199220, PUBMED:26392027). A minimal computational sketch of the %MM index used in PUBMED:15199220 is given below.
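The following is a minimal sketch, not code from any of the cited studies, of how the %MM index could be computed from A-mode echo magnitudes and thresholded at the roughly 60% borderline that PUBMED:15199220 reported for hyaline-like repair tissue. The echo traces and function names are hypothetical.

```python
import numpy as np

def percent_mm(repair_echo: np.ndarray, intact_echo: np.ndarray) -> float:
    """%MM: maximum echo magnitude over the measurement area divided by
    that of the intact cartilage, expressed as a percentage."""
    return 100.0 * np.max(np.abs(repair_echo)) / np.max(np.abs(intact_echo))

def classify_repair(pmm: float, borderline: float = 60.0) -> str:
    # PUBMED:15199220 found hyaline-like repair tissue above ~60 %MM.
    return "hyaline-like cartilage" if pmm > borderline else "fibrocartilage-like tissue"

# Illustrative, made-up echo traces (not study data).
rng = np.random.default_rng(0)
intact = rng.normal(0.0, 1.0, 2048)
repair = rng.normal(0.0, 0.7, 2048)

pmm = percent_mm(repair, intact)
print(f"%MM = {pmm:.1f}% -> {classify_repair(pmm)}")
```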
Instruction: Are we able to correctly identify prostate cancer patients who could be adequately treated by focal therapy? Abstracts: abstract_id: PUBMED:21458310 Are we able to correctly identify prostate cancer patients who could be adequately treated by focal therapy? Introduction And Objective: Because of improvements in the detection of early stage prostate cancer over the last decade, focal therapy for localized prostate cancer (PC) has been proposed for patients with low-risk disease. Such treatment would allow the control of cancer, thereby diminishing side effects, such as urinary incontinence and sexual dysfunction, which have an enormous impact on quality of life. The critical issue is whether it is possible to preoperatively predict clinically significant unifocal or unilateral prostate cancer with sufficient accuracy. Our aim is to determine whether there is any preoperative feature that can help select the ideal patient for focal therapy. Material And Methods: A total of 599 patients who underwent transrectal ultrasound (TRUS)-guided prostate biopsy followed by radical prostatectomy to treat PC were examined in our laboratory between 2001 and 2009. We established very restricted criteria to select patients with very-low-risk disease for whom focal therapy would be suitable (only 1 biopsy core positive, tumor no larger than 80% of a single core, no perineural invasion, PSA serum level < 10 ng/ml, Gleason score < 7 and clinical stage T1c, T2a-b). We defined 2 groups of patients who would be either adequately treated or not treated by focal therapy. The primary endpoint was the evaluation of preoperative features in order to identify which parameters should be considered when choosing good candidates for focal therapy. Results: Fifty-six out of 599 patients met our criteria. The mean age was 59 years, and the mean number of biopsy cores was 14.4. Forty-seven (83.9%) were staged T1c, and 9 (16.1%) were staged T2a-b. Forty-four (78.6%) patients could be considered to have been adequately treated by focal therapy, and 12 (21.4%) could not. There was no statistically significant difference between the 2 groups considering age, clinical stage, PSA levels, Gleason score, and tumor volume in the biopsy. All 12 patients who could be considered inadequately treated had a bilateral, significant secondary tumor, 58.3% had Gleason ≥ 7, and 25% were staged pT3. Conclusion: Although focal therapy might be a good option for patients with localized prostate cancer, we are so far unable to select which of them would benefit from it based on preoperative data, even using very restricted criteria, and a considerable proportion of men would still be left undertreated. abstract_id: PUBMED:36359396 Risk Factors for Severe Erectile Dysfunction after Focal Therapy with High-Intensity Focused Ultrasound for Prostate Cancer. The present study aimed to analyze predisposing clinical factors for severe erectile dysfunction (ED) in patients treated with focal therapy using high-intensity focused ultrasound (HIFU) for localized prostate cancer (PC). Patients without severe ED (International Index of Erectile Function-5 [IIEF-5] score ≥ 8) before focal HIFU therapy were included. A total of 92 of the 240 patients met the inclusion criteria. The rate of severe ED (IIEF-5 ≤ 7) was 36% 12 months after treatment.
Multivariable logistic regression analysis showed that a lower pre-procedural IIEF-5 score (odds ratio [OR] 0.812, p = 0.005), a lower pre-procedural score on the sexual domain of the Expanded Prostate Cancer Index Composite (OR 0.960, p = 0.038), and treatment of the edge of the peripheral zone (PZ) in proximity to the neurovascular bundle (NVB) [treated vs. untreated, OR 8.048, p = 0.028] were significant risk factors for severe ED at 12 months after treatment. In conclusion, lower pre-procedural erectile function and treatment of the region in proximity to the NVB were significant risk factors for severe ED after focal therapy. abstract_id: PUBMED:27809594 Outcomes and toxicities in patients treated with definitive focal therapy for primary prostate cancer: systematic review. Aim: This systematic review summarizes the clinical data on focal therapy (FT) when used alone as definitive therapy for primary prostate cancer (PCa). Methods: The protocol is detailed in the online PROSPERO database, registration No. CRD42014014765. Articles evaluating any form of FT alone as a definitive treatment for PCa in adult male patients were included. Results: Of 10,419 identified articles, 10,401 were excluded, leaving 18 for analysis. In total, 2288 patients were treated using seven modalities. The outcomes of FT in PCa seem to be similar to those observed with whole gland therapy and with fewer side effects. Conclusion: Further research, including prospective randomized trials, is warranted to elucidate the potential advantages of focal radiation techniques for treating PCa. Prospero Registration Number: CRD42014014765. abstract_id: PUBMED:25247079 Radical prostatectomy specimens - a voice against focal therapy. Introduction: The main treatment methods of prostate carcinoma are surgery and radiation therapy, both having serious side effects. Because of these side effects, the idea of organ-preserving therapy emerged. The rationale for performing focal therapy is to preserve the prostate gland, along with potency and continence, while offering good cancer control with appropriate treatment. The idea of gland-sparing therapy is quite controversial. Presently, the EAU Guidelines consider focal therapy experimental in the treatment of prostate carcinoma. Material And Methods: The aim of the study was to assess how many patients could be qualified for focal therapy, according to post-prostatectomy pathological findings. A total of 720 patients suspected of prostate cancer were biopsied. In 324 patients, prostate carcinoma was revealed, of which 81 were subjected to radical prostatectomy. Pre- and post-operative pathological results were analyzed with regard to possible qualification for focal treatment. Results: According to the clinical evaluation of all the patients referred for treatment, 25% could be assumed to have unifocal disease and could be qualified for focal treatment. Post-operative evaluation revealed pT2b cancer in 5%, pT2c disease in 65%, and pT3a-pT4a disease in 20% of these patients. Cancer was unilateral (pT2a-b) in only 15% of cases, and was suitable for focal treatment (small disease not extending to the whole lobe, i.e., pT2a disease) in only 10%. Conclusions: It seems that with the use of current methods, proper T-staging of the disease and the amount of neoplastic tissue inside the gland cannot be determined with great certainty. In our opinion, focal therapy should not be used in patients with ≤pT2b and high-risk disease. For them, radical treatment (surgery or radiation therapy) should be recommended.
For the rest of the patients, with low-risk disease, keeping in mind the large scale of possible overtreatment, active surveillance is a valid treatment option. Focal therapy can be an interesting therapeutic proposition for a small group of patients with pT2a cancer, but it is not possible to select them with great certainty using current imaging methods. abstract_id: PUBMED:33499743 Significant Impact of the Anterior Transition Zone Portion Treatment on Urinary Function After Focal Therapy with High-Intensity Focused Ultrasound for Prostate Cancer. Background: There is a lack of evaluation of the effect of the treated area on urinary function after focal therapy. The objective of the study was to evaluate the effects on urinary function of focal therapy to the anterior portion of the transition zone (TZ) with transrectal high-intensity focused ultrasound (HIFU) for localized prostate cancer (PCa). Methods: From 2016 to 2018, patients who were diagnosed with localized PCa and treated with focal HIFU therapy were included prospectively. Urinary function and complications were evaluated separately for the treated regions of the anterior TZ (TZ group) and other portions (other group) for 12 months. Before and after the treatment, the International Prostate Symptom Score (IPSS), IPSS Quality Of Life (QOL), Overactive Bladder Symptom Score (OABSS), and uroflowmetry were evaluated to assess urinary function. Results: Ninety patients were included in the study. There were no significant differences in the patients' characteristics between the two groups. At 1 month after the treatment, IPSS (p = 0.011), IPSS QOL (p = 0.002), OABSS (p = 0.002), maximum flow rates (p = 0.011), and residual urine volume (p = 0.011) in the TZ group had deteriorated significantly compared with the other group. Multivariate logistic regression analysis revealed that anterior TZ treatment (odds ratio, 3.386; p = 0.029) was an independent risk factor for deterioration of the maximum flow rate by ≥32% of its preoperative value. Concerning complications, the rates of Grade 2 urinary retention and Grade 3 urethral stricture were 15.4% and 11.5% in the TZ group and 0% and 0% in the other group, respectively. Conclusions: There was a greater risk of urinary dysfunction with treatment of the anterior TZ portion than of the other portions at 1 month after focal therapy with HIFU. abstract_id: PUBMED:29948045 Surveillance after prostate focal therapy. Introduction: Long-term outcomes from large cohorts are not yet available upon which to base recommended follow-up protocols after prostate focal therapy. This is an updated summary of a 2015 SIU-ICUD review of the best available current evidence and expert consensus on guidelines for surveillance after prostate focal therapy. Methods: We performed a systematic search of the PubMed, Cochrane and Embase databases to identify studies in which primary prostate focal therapy was performed to treat prostate cancer. Results: Multiparametric magnetic resonance imaging (mpMRI) should be performed at 3-6 months, 12-24 months and at 5 years after focal therapy. Targeted biopsy of the treated zone should be performed at 3-6 months, with fusion biopsy of any suspicious lesion seen on mpMRI. Additionally, a systematic biopsy should be performed at 12-24 months and again at 5 years. In histological diagnosis, characteristic changes of each treatment modality should be noted, and in indeterminate situations various immunohistochemical molecular markers can be helpful.
Small volume 3 + 3 (Prognostic grade group [PGG] 1) or very small volume (< 0.2 cc or < 7 mm diameter) 3 + 4 (PGG 2) are acceptable in the treated zone at longitudinal follow-up. Significant volumes of 3 + 4 (PGG 2) or more within the treated zone should be treated. Any clinically significant cancer subsequently arising within the non-treated zone should be treated and handled in the same way as any de novo prostate cancer. Patients should be counseled regarding whole-gland and focal approaches to treating these new foci where appropriate. One or two well-delineated foci of significant cancer can be ablated to keep the patient in the 'active surveillance pool'. More extensive disease should be treated with traditional whole-gland techniques. Conclusion: Focal therapy remains a nascent field largely comprising single-center cohorts with little long-term data. Our current post-focal therapy surveillance consensus recommendations represent the synthesis of the best available evidence as well as expert opinion. Further work is necessary to define the most oncologically safe and cost-effective way of following patients after focal therapy. abstract_id: PUBMED:30855018 Future of focal therapy for the treatment of prostate cancer - European Section of Urotechnology (ESUT) position. Introduction: Focal therapy (FT) is a treatment option for prostate cancer (PCa) which offers the possibility of an effective therapy in selected patients who have localized disease, with a significant reduction in treatment-related morbidity. Based on the current status of FT, our objective was to determine the most appropriate strategy to improve patient management. Materials And Methods: A literature review was performed through the PubMed database and focused on the following topics: localised prostate cancer, MRI, prostate biopsies, ablative therapy and focal therapy. Results: Indications for FT were mainly patients with a localised PCa and a single lesion at Gleason score 7 (3+4) (Grade group 2) of favourable size. Precise identification of the tumour, currently based on multiparametric MRI data and targeted biopsy, was the cornerstone of FT success. New imaging modalities such as PET/MRI and multiparametric ultrasound have proven to be effective in detecting and targeting the tumour. Several energy sources were reported to achieve effective tissue ablation. Non-thermal options should be investigated to further limit the risk of side effects while maintaining the same cancer control. Conclusion: Focal therapy is a new option in the armamentarium for PCa. Technological improvements and the development of novel energy sources should make it possible to treat lesions with even greater precision, while limiting the risk of side effects. In the future, we should probably be able to effectively expand the indications of this technique to include more aggressive tumours. abstract_id: PUBMED:24944577 Focal therapy in prostate cancer: the current situation. Prostate cancer is one of the most significant pathologies in the field of urology. The adoption of screening strategies and improvements in biopsies have resulted in an increase in early-stage tumour detection. Radical global therapies provide very good oncological results in localised prostate cancer. However, excess treatment in low- and, in some cases, intermediate-risk groups affects the quality of life of these patients.
In the case of localised prostate cancer, focal therapies offer a minimally invasive option with good results relative to established treatments. Although this is currently not a standard treatment, it represents the therapeutic approach with the greatest potential. This literature review has the following objectives: to define selection criteria for patients who are candidates for focal therapy, to assess the current situation and results of the different therapeutic options, and to define procedures in cases of recurrence and for follow-up. We concluded that focal therapy is a viable therapeutic alternative for localised prostate cancer, specifically cryosurgery and high-intensity targeted ultrasound, which have acceptable oncologic results and lower comorbidity compared with global treatments. Studies with a high level of scientific evidence are still needed to validate these results. Acquisition Of Evidence: A search was carried out in the Medline (PubMed), EMBASE, Web of Science and Cochrane databases for all papers published before 31 July 2013. We included clinical studies and literature reviews that evaluated primary focal therapy for prostate cancer confirmed by biopsy and excluded focal rescue therapy studies. The keywords used were focal therapy and prostate cancer. Initially, we found 42 articles; 15 studies were excluded because they did not meet the minimum criteria for inclusion. A total of 1350 cases were treated across 27 studies. abstract_id: PUBMED:31579041 HIFU focal therapy for prostate cancer using intraoperatory contrast enhanced ultrasound. Objective: High-intensity focused ultrasound (HIFU) focal therapy appears to offer encouraging oncologic outcomes with preservation of urinary and erectile function. The treated area can be checked using contrast-enhanced ultrasound with sulfur hexafluoride (Sonovue®) at the end of the procedure. We report oncological and functional outcomes of HIFU focal therapy (FT) for prostate cancer (PCa) management using Sonovue. Methods: A total of 274 HIFU procedures were found in our registry in the period between June 2014 and July 2018. Prospective data from 59 consecutive patients after focal high-intensity focused ultrasound (HIFU) using Sonovue were collected. FT failure was defined as a positive biopsy with Gleason score (GS) ≥ 7 in- or out-of-field, local or systemic salvage treatment, PCa metastasis, or PCa-specific death. Results: A total of 59 patients who underwent HIFU, with a median follow-up of 18 months, were included in the analysis. Median age was 66.7 yr (IQR 59.1-74.3). Median preoperative prostate-specific antigen (PSA) was 7.6 ng/ml (IQR 5-10.2), and preoperative biopsy GS 6, 7(3+4), and 7(4+3) were found in 26 (44%), 30 (50.8%), and 3 (5%) patients, respectively. Failure was found in 16 (27.1%) patients. Failure-free survival (FFS) at 2 and 4 years was 83% and 74%, respectively. No PCa-specific death was registered in the period of study. Median nadir PSA after FT was 2.67 ng/ml. Sexual potency was achieved in 75% of previously potent patients and urinary continence in 93.4% of patients at 3 months. Fourteen patients (23%) presented with complications: four (6.7%) with grade 1 complications and 10 (16.9%) with grade 2 complications. Six patients (10.1%) presented with acute urinary retention. Conclusions: Our study shows that the use of Sonovue after HIFU FT was safe.
A significant proportion of patients experience failure after HIFU FT, but with good functional outcomes and no severe complications. abstract_id: PUBMED:34322516 Focal Therapy for Prostate Cancer: Complications and Their Treatment. Focal therapy is a modern alternative to selectively treat a specific part of the prostate harboring clinically significant disease while preserving the rest of the gland. The aim of this therapeutic approach is to retain the oncological benefit of active treatment and to minimize the side-effects of common radical treatments. The oncological effectiveness of focal therapy is yet to be proven in long-term robust trials. In contrast, the toxicity profile is well-established in randomized controlled trials and multiple robust prospective cohort studies. This narrative review summarizes the relevant evidence on complications and their management after focal therapy. When compared to whole-gland treatments, focal therapy provides a substantial benefit in terms of reduced adverse events and preservation of genito-urinary function. The most common complications occur in the peri-operative period. Urinary tract infection and acute urinary retention can occur in up to 17% of patients, while dysuria and haematuria are more common. Urinary incontinence following focal therapy is very rare (0-5%), and the vast majority of patients recover within a few weeks. Erectile dysfunction can occur after focal therapy in 0-46% of patients: the baseline function and the ablation template are the most important factors predicting post-operative erectile dysfunction. Focal therapy in the salvage setting after external beam radiotherapy has a significantly higher rate of complications. Up to one man in 10 will present with a severe complication. Answer: Based on the provided abstracts, it appears that while focal therapy for prostate cancer has been proposed as a treatment option, especially for patients with low-risk disease, there are challenges in correctly identifying patients who could be adequately treated by this approach. One study found that even with very restricted criteria for selecting patients with very-low-risk disease, a considerable proportion of men would still be left undertreated (PUBMED:21458310). This indicates that preoperative data alone, even when using strict selection criteria, may not be sufficient to accurately predict which patients would benefit from focal therapy. Another study highlighted that focal therapy with high-intensity focused ultrasound (HIFU) can lead to severe erectile dysfunction in certain patients, with risk factors including lower pre-procedural erectile function and treatment of regions in proximity to the neurovascular bundle (PUBMED:36359396). This suggests that patient selection for focal therapy must consider not only cancer control but also potential side effects and the impact on quality of life. A systematic review summarized that the outcomes of focal therapy in prostate cancer seem to be similar to those observed with whole gland therapy and with fewer side effects, but further research, including prospective randomized trials, is warranted (PUBMED:27809594). This indicates that while focal therapy shows promise, more evidence is needed to establish its effectiveness and safety.
Another study indicated that proper T-staging of the disease and the amount of neoplastic tissue inside the gland cannot be determined with great certainty using current methods, suggesting that focal therapy should not be used in patients with ≤pT2b and high-risk disease (PUBMED:25247079). This further supports the notion that accurately identifying suitable candidates for focal therapy remains a challenge. In conclusion, while focal therapy for prostate cancer is a promising treatment option that may offer fewer side effects, the current evidence suggests that we are not yet able to correctly identify all patients who could be adequately treated by focal therapy based on preoperative data alone. More research and improved methods for patient selection are needed to ensure that focal therapy is applied to those who will truly benefit from it. As an illustration, the restrictive preoperative screening criteria from PUBMED:21458310 are sketched in code below.
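The following is a minimal sketch, with hypothetical field and function names, of the very restricted screening rule described in PUBMED:21458310. It is an illustration of the published criteria, not the authors' code; and, as the study itself showed, passing this preoperative screen still left roughly one in five patients inadequately treated.

```python
from dataclasses import dataclass

@dataclass
class BiopsyProfile:
    # Hypothetical preoperative record; field names are illustrative only.
    positive_cores: int
    max_core_involvement: float   # fraction of the single positive core
    perineural_invasion: bool
    psa_ng_ml: float
    gleason_score: int
    clinical_stage: str           # e.g., "T1c", "T2a", "T2b"

def eligible_for_focal_therapy(p: BiopsyProfile) -> bool:
    """Very restricted criteria from PUBMED:21458310: one positive core,
    tumor in no more than 80% of that core, no perineural invasion,
    PSA < 10 ng/ml, Gleason < 7, and clinical stage T1c or T2a-b."""
    return (
        p.positive_cores == 1
        and p.max_core_involvement <= 0.80
        and not p.perineural_invasion
        and p.psa_ng_ml < 10.0
        and p.gleason_score < 7
        and p.clinical_stage in {"T1c", "T2a", "T2b"}
    )

candidate = BiopsyProfile(1, 0.40, False, 6.2, 6, "T1c")
print(eligible_for_focal_therapy(candidate))  # True, yet ~21% of such patients
                                              # still had significant bilateral disease
```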
Instruction: Does Introducing Imprecision around Probabilities for Benefit and Harm Influence the Way People Value Treatments? Abstracts: abstract_id: PUBMED:26304064 Does Introducing Imprecision around Probabilities for Benefit and Harm Influence the Way People Value Treatments? Background: Imprecision in estimates of benefits and harms around treatment choices is rarely described to patients. Variation in sampling error between treatment alternatives (e.g., treatments have similar average risks, but one treatment has a larger confidence interval) can result in patients failing to choose the option that is best for them. The aim of this study is to use a discrete choice experiment to describe how 2 methods for conveying imprecision in risk influence people's treatment decisions. Methods: We randomized a representative sample of the Canadian general population to 1 of 3 surveys that sought choices between hypothetical treatments for rheumatoid arthritis based on different levels of 7 attributes: route and frequency of administration, chance of benefit, serious and minor side effects and life expectancy, and imprecision in benefit and side-effect estimates. The surveys differed in the way imprecision was described: 1) no imprecision, 2) quantitative description based on a range with a visual graphic, and 3) qualitative description simply describing the confidence in the evidence. Results: The analyzed data were from 2663 respondents. Results suggested that more people understood imprecision when it was described qualitatively (88%) versus quantitatively (68%). Respondents who appeared to understand imprecision descriptions placed high value on increased precision regarding the actual benefits and harms of treatment, equivalent to the value placed on the information about the probability of serious side effects. Both qualitative and quantitative methods led to small but significant increases in decision uncertainty for choosing any treatment. Limitations included some issues in defining understanding of imprecision and the use of an internet survey of panel members. Conclusions: These findings provide insight into how conveying imprecision information influences patient treatment choices. abstract_id: PUBMED:35599144 Treatments for Metastatic Hormone-sensitive Prostate Cancer: Systematic Review, Network Meta-analysis, and Benefit-harm assessment. Context: Multiple treatments for metastatic, hormone-sensitive prostate cancer (mHSPC) are available, but their effects on health-related quality of life (HRQoL) and benefit-harm balance remain unclear. Objective: To assess clinical effectiveness regarding survival and HRQoL, safety, and benefit-harm balance of mHSPC treatments. Evidence Acquisition: We searched MEDLINE, EMBASE, CENTRAL, and ClinicalTrials.gov until March 1, 2022. Randomized controlled trials (RCTs) comparing docetaxel, abiraterone, enzalutamide, apalutamide, darolutamide, and radiotherapy combined with androgen deprivation therapy (ADT) mutually or with ADT alone were eligible. Three reviewers independently performed screening, data extraction, and risk of bias assessment in duplicate. Evidence Synthesis: Across ten RCTs, we found relevant survival benefits for ADT + docetaxel (high certainty according to the Grading of Recommendations, Assessment, Development and Evaluation [GRADE]), ADT + abiraterone (moderate certainty), ADT + enzalutamide (low certainty), ADT + apalutamide (high certainty), and ADT + docetaxel + darolutamide (high certainty) compared with ADT alone. 
ADT + radiotherapy appeared effective only in low-volume de novo mHSPC. We found a short-term HRQoL decrease lasting 3-6 mo for ADT + docetaxel (moderate certainty) and a potential HRQoL benefit for ADT + abiraterone up to 24 mo of follow-up (moderate certainty) compared with ADT alone. There was no difference in HRQoL for ADT + enzalutamide, ADT + apalutamide, or ADT + radiotherapy over ADT alone (low-high certainty). Grade 3-5 adverse effect rates were increased with all systemic combination treatments. A benefit-harm assessment showed high probabilities (>60%) for a net clinical benefit with ADT + abiraterone, ADT + enzalutamide, and ADT + apalutamide, while ADT + docetaxel and ADT + docetaxel + darolutamide appeared unlikely (<40%) to be beneficial. Conclusions: Despite substantial survival benefits, no systemic combination treatment showed a clear HRQoL improvement compared with ADT alone. We found evidence for a short-term HRQoL decline with ADT + docetaxel and a higher net clinical benefit with ADT + abiraterone, ADT + apalutamide and ADT + enzalutamide. While individualized decision-making remains important and economic factors need to be considered, the evidence may support a general preference for the combination of ADT with androgen receptor axis-targeted therapies over docetaxel-containing strategies. Patient Summary: We assessed different combination treatments for metastatic hormone-sensitive prostate cancer. While survival was better with all systemic combination treatments, there was no clear improvement in health-related quality of life compared with androgen deprivation therapy alone. Novel hormonal combination treatments had a more favorable benefit-harm balance than combination treatments that include chemotherapy. abstract_id: PUBMED:28892015 The Road to Hell Is Paved with Good Intentions: Why Harm-Benefit Analysis and Its Emphasis on Practical Benefit Jeopardizes the Credibility of Research. It is our concern that European Union Directive 2010/63/EU, with its current project evaluation of animal research in the form of a harm-benefit analysis (HBA), may lead to an erosion of the credibility of research. The HBA assesses whether the harm inflicted on animals is outweighed by potential prospective benefits. Recent literature on prospective benefit analysis prioritizes "societal benefits" that have a foreseeable, positive impact on humans, animals, or the environment over benefit in the form of knowledge. In this study, we will argue that whether practical benefits are realized is (a) impossible to predict and (b) exceeds the scope and responsibility of researchers. Furthermore, we believe that the emphasis on practical benefits has the drawback of driving researchers into speculation on the societal benefit of their research and, therefore, into promising too much, thereby leading to a loss of trust and credibility. Thus, the concepts of benefit and benefit assessment in the HBA require a re-evaluation in a spirit that embraces the value of knowledge in our society. The generation of scientific knowledge has been utilised to great benefit for humans, animals, and the environment. The HBA, as it currently stands, tends to turn this idea upside down and implies that research is of value only if the resulting findings bring about immediate societal benefit. abstract_id: PUBMED:32389227 Informing Patient-Centered Care Through Stakeholder Engagement and Highly Stratified Quantitative Benefit-Harm Assessments.
Objectives: In a previous project aimed at informing patient-centered care for people with multiple chronic conditions, we performed highly stratified quantitative benefit-harm assessments for 2 top priority questions. In the current work, our goal was to describe the process and approaches we developed and to qualitatively glean important elements from them that address patient-centered care. Methods: We engaged patients, caregivers, clinicians, and guideline developers as stakeholder representatives throughout the process of the quantitative benefit-harm assessment and investigated whether the benefit-harm balance differed based on patient preferences and characteristics (stratification). We refined strategies to select the most applicable, valid, and precise evidence. Results: Two processes were important when assessing the balance of benefits and harms of interventions: (1) engaging stakeholders and (2) stratification by patient preferences and characteristics. Engaging patients and caregivers through focus groups, preference surveys, and as co-investigators provided value in prioritizing research questions, identifying relevant clinical outcomes, and clarifying the relative importance of these outcomes. Our strategies to select evidence for stratified benefit-harm assessments considered consistency across outcomes and subgroups. By quantitatively estimating the range in the benefit-harm balance resulting from true variation in preferences, we clarified whether the benefit-harm balance is preference sensitive. Conclusions: Our approaches for engaging patients and caregivers at all phases of the stratified quantitative benefit-harm assessments were feasible and revealed how sensitive the benefit-harm balance is to patient characteristics and individual preferences. Accordingly, this sensitivity can suggest to guideline developers when to tailor recommendations for specific patient subgroups or when to explicitly leave decision making to individual patients and their providers. abstract_id: PUBMED:31598209 Net benefit of statins for primary prevention of cardiovascular disease in people 75 years or older: a benefit-harm balance modeling study. Background: We determined the risk thresholds above which statin use would be more likely to provide a net benefit for people over the age of 75 years without a history of cardiovascular disease (CVD). Methods: An exponential model was used to estimate the differences in expected benefits and harms in people treated with statins over a 10-year horizon versus not treated. The analysis was repeated 100,000 times to account for statistical uncertainty and produce a distribution of the benefit-harm balance index, from which we determined the 10-year CVD risk threshold where benefits outweighed the harms. We considered treatment estimates from trials and observational studies, baseline risks, patient preferences, competing risks of non-CVD death, and statistical uncertainty. Results: Based on average preferences, statins were more likely to provide a net benefit at a 10-year CVD risk of 24% and 25% for men aged 75-79 years and 80-84 years, respectively, and 21% for women in both age groups. However, these thresholds varied significantly depending on differences in individual patient preferences for the statin-related outcomes, with interquartile ranges of 21-33% and 23-36% for men aged 75-79 years and 80-84 years, respectively, as well as 20-32% and 21-32% for women aged 75-79 years and 80-84 years, respectively.
Conclusions: Statins would more likely provide a net benefit for primary prevention in older people, assuming average preferences, if their CVD risk is well above 20%. However, the thresholds could be much higher or lower depending on the preferences of individual patients, which suggests that more emphasis should be placed on individualized decision-making, instead of recommending statins for everyone based on a single or a small number of thresholds. abstract_id: PUBMED:29482333 Understanding experiences of the self-harm of others: A qualitative exploration of the views of young people with complex mental health needs. As adolescent self-harm is a growing public health concern, more research is needed to identify potential risk factors. Studies have highlighted that exposure to the self-harm of others may be a risk factor associated with engagement in self-harm. However, research investigating young people's experiences of the self-harm of others has been limited. This qualitative study aimed to explore young people's experiences of the self-harm of others and interviewed a total of eight young people (five females and three males; aged between 13 and 18 years) resident at one of two adolescent mental health inpatient units in the North of England. The interviews were analysed using Interpretative Phenomenological Analysis and four themes were identified: 'Pre-admission exposure to self-harm', 'Exposure on the inside: An unpleasant environment', 'Helper vs helped' and 'Separation from the attention seekers: competing for authenticity'. Prevention efforts to reduce the social transmission and stigma surrounding self-harm among young people are discussed. abstract_id: PUBMED:25891234 Peering into the literature: A systematic review of the roles of people who inject drugs in harm reduction initiatives. Background: People who inject drugs have been central to the development of harm reduction initiatives. Referred to as peer workers, peer helpers, or natural helpers, people with lived experience of drug use leverage their personal knowledge and skills to deliver harm reduction services. Addressing a gap in the literature, this systematic review focuses on the roles of people who inject drugs in harm reduction initiatives, how programs are organized, and obstacles and facilitators to engaging people with lived experience in harm reduction programs, in order to inform practice and future research. Methods: This systematic review included searches for both peer-reviewed and gray literature. All titles and abstracts were screened by two reviewers. A structured data extraction tool was developed and utilized to systematically code information concerning peer roles and participation, program characteristics, obstacles, and facilitators. Results: On the basis of specific inclusion criteria, 164 documents were selected, comprising 127 peer-reviewed and 37 gray literature references. Data extraction identified key harm reduction program characteristics and forms of participation, including 36 peer roles grouped into five categories, as well as obstacles and facilitators at systemic, organizational, and individual levels. Conclusions: Research on harm reduction programs that involve people with lived experience can help us better understand these approaches and demonstrate their value. Current evidence provides good descriptive content, but the field lacks agreed-upon approaches to documenting the ways peer workers contribute to harm reduction initiatives.
Implications and ten strategies to better support peer involvement in harm reduction programs are identified. abstract_id: PUBMED:32785593 Harm-Benefit Analyses Can Be Harmful. Harm-benefit analyses (HBAs) are becoming de rigueur with some governmental regulatory agencies and popular with local institutional animal care and use committees (or their equivalents), the latter due, in part, to the adoption of HBAs as an international accreditation standard. Such analyses are employed as an attempt to balance potential or actual pain or distress imposed on laboratory animals against scientists' justifications for those impositions. The outcomes of those analyses are then supposed to be included in an official assessment of whether a given animal protocol should be approved as proposed. While commendable in theory as a means to avoid or minimize animal suffering, HBAs come with a flawed premise. Establishing an accurate prediction of benefit, especially for so-called "basic" research (vs "applied" research, such as in vivo testing for product development or batch release), is often impossible given the uncertain nature of experimental outcomes and the eventual value of those results. That impossibility, in turn, risks disapproving a legitimate research proposal that might have yielded important new knowledge if it had been allowed to proceed. Separately, the anticipated harm to which the animal would be subjected should similarly be scrutinized with an aim to refine that harm regardless of purported benefits if the protocol is approved. The intentions of this essay are to reflect on the potential harm and benefit of the HBA itself, highlight how HBAs may be helpful in advancing refinements, and propose alternative approaches to both parts of the equation in the assessment process. abstract_id: PUBMED:27496928 Smallest worthwhile effect of exercise programs to prevent falls among older people: estimates from benefit-harm trade-off and discrete choice methods. Background: The smallest worthwhile effect (SWE) of an intervention is the smallest treatment effect that justifies the costs, risks and inconveniences associated with that health intervention. Objective: To estimate the SWE of exercise programs designed to prevent falls among older people and to compare estimates derived by two methodological approaches. Study Design And Setting: Discrete choice experiment (n = 220) and benefit-harm trade-off (subsample n = 66) methods were used. Participants: Community-dwelling older people who reported a past fall or a mobility limitation answered online or face-to-face questionnaires. Results: A substantial proportion of participants (82% in the discrete choice experiment and 50% in the benefit-harm trade-off study) did not consider that participation in the proposed exercise programs would be worthwhile, even if it reduced their risk of falling to 0%. Among the remaining participants, the average SWE of participation in an exercise program was an absolute reduction in the risk of falling of 35% (standard deviation [SD] = 13) in the discrete choice experiment and 16% (SD = 11) in the benefit-harm trade-off study. Conclusions: Many participants did not consider the hypothetical fall-risk reduction of the proposed exercise programs to be worth the associated costs and inconveniences. Greater community awareness of the fall prevention effects of exercise for older people is required.
abstract_id: PUBMED:31471435 Balance of benefits and harms of different blood pressure targets in people with multiple chronic conditions: a quantitative benefit-harm assessment. Objective: Recent studies suggest that a systolic blood pressure (SBP) target of 120 mm Hg is appropriate for people with hypertension, but this is debated particularly in people with multiple chronic conditions (MCC). We aimed to quantitatively determine whether the benefits of a lower SBP target justify the increased risks of harm in people with MCC, considering patient-valued outcomes and their relative importance. Design: Highly stratified quantitative benefit-harm assessment based on various input data identified as the most valid and applicable from a systematic review of evidence, and based on weights from a patient preference survey. Setting: Outpatient care. Participants: Hypertensive patients, grouped by age, gender, prior history of stroke, chronic heart failure, chronic kidney disease and type 2 diabetes mellitus. Interventions: SBP target of 120 versus 140 mm Hg for patients without a history of stroke. Primary And Secondary Outcome Measures: Probability that the benefits of an SBP target of 120 mm Hg outweigh the harms compared with 140 mm Hg over 5 years (primary), with thresholds >0.6 (120 mm Hg better), <0.4 (140 mm Hg better) and 0.4 to 0.6 (unclear), and number of prevented clinical events (secondary), calculated with the Gail/National Cancer Institute approach. Results: Considering individual patient preferences had a substantial impact on the benefit-harm balance. With average preferences, 120 mm Hg was the better target compared with 140 mm Hg for many subgroups of patients without prior stroke, especially in patients over 75. For women below 65 with chronic kidney disease and without diabetes and prior stroke, 140 mm Hg was better. The analyses did not include mild adverse effects, and apply only to patients who tolerate antihypertensive treatment. Conclusions: For most patients, a lower SBP target was beneficial, but this also depended on individual preferences, implying individual decision-making is important. Our modelling allows for individualised treatment targets based on patient preferences, age, gender and co-morbidities. Answer: Yes, introducing imprecision around probabilities for benefit and harm does influence the way people value treatments. A study found that when imprecision was described qualitatively, more people (88%) understood it compared to when it was described quantitatively (68%). Respondents who appeared to understand imprecision descriptions placed high value on increased precision regarding the actual benefits and harms of treatment, equivalent to the value placed on the information about the probability of serious side effects. Both qualitative and quantitative methods led to small but significant increases in decision uncertainty for choosing any treatment (PUBMED:26304064). A schematic Monte Carlo sketch of the style of benefit-harm assessment used in the abstracts above follows.
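The sketch below illustrates, in broad strokes, the kind of probabilistic benefit-harm modeling described in PUBMED:31598209 (100,000 Monte Carlo replications of a benefit-harm balance index) and the threshold classification used in PUBMED:31471435. The outcome set, effect sizes, baseline risks, and preference weights are invented for illustration; they are not the studies' actual inputs, and the model structure is a simplification of the published methods.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo replications, as in PUBMED:31598209

# Hypothetical inputs per outcome: mean and SE of log relative risk,
# baseline risk over the time horizon, and a preference weight
# expressing importance relative to the benefit outcome.
outcomes = {
    #           log(RR) mean, log(RR) SE, baseline risk, weight
    "benefit": (-0.25, 0.08, 0.20, 1.00),  # e.g., prevented CVD events
    "harm_a":  ( 0.30, 0.12, 0.05, 0.60),  # e.g., a serious adverse effect
    "harm_b":  ( 0.15, 0.10, 0.08, 0.30),  # e.g., a milder adverse effect
}

net = np.zeros(N)  # preference-weighted net benefit per replication
for mu, se, p0, w in outcomes.values():
    rr = np.exp(rng.normal(mu, se, N))  # sampled relative risk
    delta = p0 * (1.0 - rr)             # absolute risk change (+ = fewer events)
    net += w * delta

p_net_benefit = float(np.mean(net > 0.0))
# Threshold classification as in PUBMED:31471435.
if p_net_benefit > 0.6:
    verdict = "intervention better"
elif p_net_benefit < 0.4:
    verdict = "comparator better"
else:
    verdict = "unclear"
print(f"P(benefits outweigh harms) = {p_net_benefit:.2f} -> {verdict}")
```

Varying the preference weights across simulated individuals is what makes the balance "preference sensitive" in the sense discussed in PUBMED:32389227.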
Instruction: Can a double-row anchorage technique improve tendon healing in arthroscopic rotator cuff repair? Abstracts: abstract_id: PUBMED:17452513 Can a double-row anchorage technique improve tendon healing in arthroscopic rotator cuff repair?: A prospective, nonrandomized, comparative study of double-row and single-row anchorage techniques with computed tomographic arthrography tendon healing assessment. Background: Increasing the rate of watertight tendon healing has been suggested as an important criterion for optimizing clinical results in arthroscopic rotator cuff repair. Hypothesis: A double-row anchorage technique for rotator cuff repair will produce better clinical results and a better rate of tendon healing than a single-row technique. Study Design: Cohort study; Level of evidence, 2. Methods: We compared 31 patients undergoing surgery with a double-row anchorage technique using Panalok anchors and Cuff Tack anchors and 35 patients with rotator cuff tears undergoing surgery with a single-row arthroscopic anchorage technique using Panalok anchors. We compared pre- and postoperative Constant scores and tendon healing, as evaluated by computed tomographic arthrography 6 months after surgery, in these 2 groups. Results: The Constant score increased significantly in both groups, with no difference between the 2 groups (P = .4). Rotator cuff healing was judged anatomic in 19 patients with double-row anchorage and in 14 patients with single-row anchorage; this difference between the groups was significant (P = .03). Conclusion: In this first study comparing double- and single-row anchorage techniques, we found no significant difference in clinical results, but tendon healing rates were better with the double-row anchorage. Improvements in the double-row technique might lead to better clinical and tendon healing results. abstract_id: PUBMED:30943087 The Effect of Tendon Delamination on Rotator Cuff Healing. Background: While patient age, tear size, and muscle fatty infiltration are factors known to affect the rate of tendon healing after rotator cuff repair, the effect of tendon delamination is less known. Purpose: To assess the effect of tendon delamination on rotator cuff healing after arthroscopic single-row (SR) repair. Study Design: Cohort study; Level of evidence, 3. Methods: Consecutive patients (N = 117) with chronic full-thickness rotator cuff tears underwent arthroscopic SR repair with the tension-band cuff repair technique. The mean ± SD age at the time of surgery was 60 ± 8 years. There were 25 small, 63 medium, and 29 large tears. Tendon delamination was assessed intraoperatively under arthroscopy with the arthroscope placed in the lateral portal. Patients were divided into 2 groups: those with nondelaminated (n = 80) and delaminated (n = 37) cuff tears. The 2 groups were comparable for age, sex, body mass index, preoperative pain, strength, and Constant-Murley score. Repair integrity was evaluated with sonography (mean, 24 months after surgery; range, 6-62 months) and classified into 3 categories: type A, indicating complete, homogeneous, and thick coverage of the footprint; type B, partial coverage with a thin tendon; and type C, no coverage of the footprint. Results: The prevalence of tendon delamination observed under arthroscopy was 32% (37 of 117), which increased with tear size and retraction: from 15% in small tears to 32% in medium tears and 45% in large tears (P = .028).
Postoperatively, 83 patients had complete coverage of the footprint (type A = 71%) and the cuff was considered healed, whereas 26 had partial coverage or a thin tendon (type B = 22%) and 8 had no coverage (type C = 7%). Overall, the rate of complete healing was 78% in nondelaminated cuff tears and 57% in the case of tendon delamination (P = .029). In large retracted tears, the healing rate dropped from 81% in the absence of delamination to 39% when the tendons were delaminated (P = .027). Conclusion: Tendon delamination increases with tear size and retraction. Patients with chronic delaminated and retracted rotator cuff tears (stage 2 or 3) are at risk of failure after SR cuff repair, whereas patients with small delaminated rotator cuff tears (stage 1) involving only the supraspinatus can be treated with an SR cuff repair with a high chance of tendon healing. These results suggest that SR cuff repair may be insufficient to treat delaminated chronic cuff tears. To improve the anatomic outcomes of rotator cuff repairs, surgeons should consider treating delaminated tears with a double-row or double-layer repair. abstract_id: PUBMED:27733881 Arthroscopic Double-Row Transosseous Equivalent Rotator Cuff Repair with a Knotless Self-Reinforcing Technique. Background: Rotator cuff tears are a significant cause of shoulder morbidity. Surgical techniques for repair have evolved to optimize the biologic and mechanical variables critical to tendon healing. Double-row repairs have demonstrated biomechanical advantages over single-row repairs. Methods: The preferred technique for rotator cuff repair of the senior author was reviewed and described in a step-by-step fashion. The final construct is a knotless double-row transosseous-equivalent construct. Results: The described technique includes the advantages of a double-row construct while also offering self-reinforcement, decreased risk of suture cut-through, decreased risk of medial-row overtensioning and tissue strangulation, improved vascularity, the efficiency of a knotless system, and no increased risk of subacromial impingement from the burden of suture knots. Conclusion: Arthroscopic knotless double-row rotator cuff repair is a safe and effective method to repair rotator cuff tears. abstract_id: PUBMED:25430714 Arthroscopic single-row versus double-row technique for repairing rotator cuff tears: a systematic review and meta-analysis. Objective: The purpose of this study was to systematically review published reports that compare the outcomes of single-row and double-row suture anchor fixation in arthroscopic rotator cuff repair. Methods: Combined data regarding relevant patient characteristics, rotator cuff pathology, surgical techniques, postoperative rehabilitation regimens, University of California at Los Angeles (UCLA) scores, Constant scores, American Shoulder and Elbow Surgeons (ASES) scores, tendon healing, incidence of recurrent rotator cuff defects, shoulder muscle strength, range of motion, surgical time and patient satisfaction were assessed. Results: Seven eligible randomized controlled studies and four prospective cohort studies were identified. There were no significant differences between the single-row and double-row groups in terms of Constant or ASES scores. However, UCLA scores, tendon healing, abduction shoulder strength index (SSI), external rotation SSI and internal rotation SSI were significantly better for the double-row technique than for the single-row technique.
A statistically significant advantage for double-row repair according to UCLA score and external rotation SSI was found in the subgroup with tears greater than 3 cm. Conclusion: No definite conclusion could be drawn about differences in overall outcomes of double- and single-row techniques for repairing small to medium (<3 cm) or large to massive (>3 cm) rotator cuff tears, even though some measures of clinical outcome showed significant differences between these two techniques. abstract_id: PUBMED:28942685 Arthroscopic Single-Row Versus Double-Row Suture Bridge Technique for Rotator Cuff Tears in Patients Younger Than 55 Years: A Prospective Comparative Study. Background: When arthroscopic rotator cuff repair is performed on a young patient, long-lasting structural and functional tendon integrity is desired. A fixation technique that potentially provides superior tendon healing should be considered for the younger population to achieve long-term clinical success. Hypothesis/Purpose: The purpose was to compare the radiological and clinical midterm results between single-row and double-row (ie, suture bridge) fixation techniques for arthroscopic rotator cuff repair in patients younger than 55 years. We hypothesized that a double-row technique would lead to improved tendon healing, resulting in superior mid- to long-term clinical outcomes. Study Design: Cohort study; Level of evidence, 2. Methods: A consecutive series of 66 patients younger than 55 years with a medium to large full-thickness tear of the supraspinatus and infraspinatus tendons who underwent arthroscopic single-row or double-row (ie, suture bridge) repair were enrolled and prospectively observed. Thirty-four and 32 patients were assigned to the single-row and double-row groups, respectively. Postoperatively, tendon integrity was assessed by MRI following Sugaya's classification at a minimum of 12 months, and clinical outcomes were assessed with the Constant score and the University of California, Los Angeles (UCLA) score at a minimum of 2 years. Results: Mean follow-up time was 46 months (range, 28-50 months). A higher tendon healing rate was obtained in the double-row group compared with the single-row group (84% and 61%, respectively [P < .05]). Although no difference in outcome scores was observed between the 2 techniques, patients with a healed tendon demonstrated superior clinical outcomes compared with patients who had a retorn tendon (UCLA score, 34.2 and 27.6, respectively [P < .05]; Constant score, 94 and 76, respectively [P < .05]). Conclusion: The double-row repair technique potentially provides superior tendon healing compared with the single-row technique. Double-row repair should be considered for patients younger than 55 years with medium to large rotator cuff tears. abstract_id: PUBMED:26948164 Posterosuperior rotator cuff repair by an independent double-row technique. Technical note and radiological and clinical results. The present study reports an original independent double-row technique for arthroscopic posterosuperior rotator cuff repair. Clinical and ultrasound results were assessed at 6-month follow-up in a single-center series of 78 patients with a mean age of 57 years. The Constant score improved from 55 to 73 (P < 0.05). Seven patients (9%) had re-tears: 6 tendon avulsions from the bone and 1 tear at the myotendinous junction. This repair technique provided a high rate of tendon healing on the bone, with a lower rate of complications than with traditional double-row repair techniques.
Level Of Evidence: II. abstract_id: PUBMED:24012360 Ultrasound evaluation of arthroscopic full-thickness supraspinatus rotator cuff repair: single-row versus double-row suture bridge (transosseous equivalent) fixation. Results of a prospective, randomized study. Background: The purpose of this study was to compare the structural outcomes of a single-row rotator cuff repair and double-row suture bridge fixation after arthroscopic repair of a full-thickness supraspinatus rotator cuff tear. Material And Methods: We evaluated with diagnostic ultrasound a consecutive series of ninety shoulders in ninety patients with full-thickness supraspinatus tears at an average of 10 months (range, 6-12) after operation. A single surgeon at a single hospital performed the repairs. Inclusion criteria were full-thickness supraspinatus tears less than 25 mm in their anterior to posterior dimension. Exclusion criteria were prior operations on the shoulder, partial thickness tears, subscapularis tears, infraspinatus tears, combined supraspinatus and infraspinatus repairs and irreparable supraspinatus tears. Forty-three shoulders were repaired with single-row technique and 47 shoulders with double-row suture bridge technique. Postoperative rehabilitation was identical for both groups. Ultrasound criteria for healed repair included visualization of a tendon with normal thickness and length, and a negative compression test. Results: Eighty-three patients were available for ultrasound examination (40 single-row and 43 suture-bridge). Thirty of 40 patients (75%) with single-row repair demonstrated a healed rotator cuff repair compared to 40/43 (93%) patients with suture-bridge repair (P = .024). Conclusion: Arthroscopic double-row suture bridge repair (transosseous equivalent) of an isolated supraspinatus rotator cuff tear resulted in a significantly higher tendon healing rate (as determined by ultrasound examination) when compared to arthroscopic single-row repair. abstract_id: PUBMED:30450360 Healing Rates and Functional Outcomes After Triple-Loaded Single-Row Versus Transosseous-Equivalent Double-Row Rotator Cuff Tendon Repair. Background: Although healing rates and outcomes of arthroscopic single-row rotator cuff repairs have been compared with double-row repairs, none have utilized triple-loaded anchors. Purpose: To compare healing and function after single-row repairs with triple-loaded anchors versus double-row repairs with a suture-bridge technique. Study Design: Cohort study; Level of evidence, 3. Methods: A single surgeon performed arthroscopic rotator cuff repair on 47 consecutive patients with an easily reducible full-thickness rotator cuff tear of medium size (1-3 cm). A retrospective cohort study was performed in which the first 25 patients underwent repair with a double-row suture-bridge (DRSB) technique. The next 22 patients underwent repair with a single-row technique with triple-loaded anchors and simple stitches (SRTL) after a change in technique by the surgeon. Twenty-one DRSB and 18 SRTL repairs were evaluated preoperatively and at a minimum of 12 months postoperatively with a visual analog scale for pain, the American Shoulder and Elbow Surgeons form, and the Simple Shoulder Test. Healing was evaluated with magnetic resonance imaging at a minimum of 12 months. 
Results: When DRSB repairs were compared with SRTL repairs, there were no significant differences in patient age (61 vs 65 years), tear size (2.3 vs 2.1 cm in the sagittal plane; 2.0 vs 1.8 cm in the coronal plane), Goutallier fatty infiltration (supraspinatus grade: stage 0, 38%; stage 1, 38%; stage 2, 19%; stage 3, 5%; vs stage 0, 56%; stage 1, 39%; stage 2, 5%; stage 3, 0%), tendon healing (71% vs 78%), improvement in visual analog scale pain score (3.7 vs 3.2), or improvement in American Shoulder and Elbow Surgeons scores (34.6 vs 36.9), with P > .05 in all cases. SRTL repairs had significantly greater improvement in Simple Shoulder Test scores versus DRSB repairs (6.6 vs 4.5; P = .03). Conclusion: DRSB and SRTL repairs have similar improvements in pain and function with equivalent healing rates for arthroscopic repair of mobile full-thickness rotator cuff tears of medium size (1-3 cm). abstract_id: PUBMED:38040196 Arthroscopic rotator cuff repair using a single or double row technique: A meta-analysis of randomized clinical trial. Purpose: To compare the double row technique versus the single row technique for arthroscopic rotator cuff repair, in order to assess whether there are clinical differences. Methods: Systematic review of randomized clinical trials comparing the clinical results of the double-row technique versus the single-row technique in arthroscopic rotator cuff repair. Demographic, clinical, and surgical variables were analyzed, including functional scores, tendon healing rate, and re-tear rate. Results: Thirteen randomized clinical trials were selected. A total of 437 patients in the single row group (50.7%) and 424 patients in the double row group (49.3%) were analyzed. No significant differences were found between the two groups in terms of age (P=.84), sex (P=.23) and loss to follow-up (P=.52). Significant differences favoring the double row technique were found on the UCLA scale (P=.01). No significant differences were found on the Constant-Murley scale (P=.87) or on the ASES scale (P=.56). In addition, the double row technique showed a higher healing rate (P=.006) and a lower risk of rotator cuff re-tears (P=.006). Conclusions: In rotator cuff repair, the double row technique was found to be superior to the single row technique in terms of better UCLA score, better tendon healing rate, and lower re-tear rate. No clinically significant differences were found on the Constant-Murley scale or on the ASES scale. abstract_id: PUBMED:23858295 Metal markers for radiographic visualization of rotator cuff margins: A new technique for radiographic assessment of cuff repair integrity. Surgically repaired rotator cuffs may re-tear in the post-operative follow-up phase, and periodic imaging is useful for early detection. The authors describe a simple surgical technique that provides a visible clue to the tendon edge on an anteroposterior radiograph of the shoulder. The technique involves arthroscopic or mini-open radio-opaque tagging of the tendon edge using a metal marker, followed by a double-row rotator cuff repair using suture anchors. Serial post-operative radiographs may then be used to monitor the position of the marker. Progressive or marked displacement of the marker suggests a failure of cuff repair integrity and should be evaluated further. Answer: Yes, a double-row anchorage technique can improve tendon healing in arthroscopic rotator cuff repair.
Studies have shown that while both single-row and double-row techniques lead to significant improvements in clinical outcomes, the double-row technique has been associated with better tendon healing rates. A prospective, nonrandomized, comparative study found that although there was no significant difference in clinical results between the two techniques, the tendon healing rates were better with the double-row anchorage (PUBMED:17452513). Another study suggested that for patients younger than 55 years with medium to large rotator cuff tears, the double-row repair technique potentially provides superior tendon healing compared with the single-row technique (PUBMED:28942685). Additionally, a systematic review and meta-analysis of randomized clinical trials found that the double-row technique was superior to the single-row technique in terms of better UCLA score, better tendon healing rate, and lower re-tear rate (PUBMED:38040196). Furthermore, a randomized study comparing single-row versus double-row suture bridge fixation found a significantly higher tendon healing rate with the double-row suture bridge repair (PUBMED:24012360). These findings suggest that the double-row technique may be particularly beneficial in certain patient populations, such as younger patients or those with larger tears, and may lead to improved anatomical outcomes. However, it is important to note that some studies have not found significant differences in functional outcomes between the two techniques, indicating that the choice of technique may also depend on other factors such as tear size, patient characteristics, and surgeon preference.
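Note: As a quick numerical check of the healing-rate comparison cited above (PUBMED:24012360 reports 30/40 shoulders healed with single-row repair versus 40/43 with suture-bridge repair, P = .024), the following minimal Python sketch reproduces the reported p-value. The abstract does not name its statistical test, so the use of an uncorrected chi-square test here is an assumption.

    # Healing counts from PUBMED:24012360; the choice of an uncorrected
    # chi-square test is an assumption, as the abstract names no test.
    from scipy.stats import chi2_contingency

    healed = [30, 40]   # single-row, suture-bridge
    totals = [40, 43]
    table = [[h, t - h] for h, t in zip(healed, totals)]  # [[30, 10], [40, 3]]
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"healing: {healed[0]/totals[0]:.0%} vs {healed[1]/totals[1]:.0%}")  # 75% vs 93%
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # roughly 5.08 and 0.024

With the Yates continuity correction (SciPy's default) the p-value roughly doubles, to about .05, so the published figure is consistent with the uncorrected test.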
Instruction: Is short-term neurosensory testing after removal of mandibular third molars efficacious? Abstracts: abstract_id: PUBMED:30613870 Clinical investigation of gustatory and neurosensory alterations following mandibular third molar surgery: an observational prospective study. Objective: This study aimed to prospectively observe gustatory and neurosensory alterations following surgical removal of mandibular third molars. Material And Methods: A prospective clinical study was conducted with patients who required mandibular third molar extraction, recruited from the Division of Oral and Maxillofacial Surgery at the Federal University of Ceará (Brazil). Age, sex, and radiographic signs were recorded. The outcome variables were the presence or absence of gustatory and neurosensory alterations. The patients were observed preoperatively and at 7, 30, 90, and 180 days postoperatively by using gustatory and neurosensory tests. Results: The responses to sweet (p = 0.509) and sour (p = 0.078) stimuli did not change significantly over time. The salty threshold significantly increased from the preoperative to 7- and 30-day postoperative periods, returning to baseline values at 90 days postoperatively (p = 0.038). The bitter threshold increased significantly from the preoperative to 7-day postoperative period, returning to baseline values at 30 days after surgery (p < 0.001). Regarding neurosensory evaluation, there was an altered response to stimulus at 7 days postoperatively in specific studied areas, returning to baseline values 30 days after surgery (p < 0.05). Conclusion: The present study shows that mandibular third molar removal was associated with slight sensory disturbances related to mechanical, tactile, and gustatory perception. Regarding the recovery period, all patients returned to normal function without intervention, over a period ranging from 30 to 90 days. Clinical Relevance: This study highlighted the importance of a sensory evaluation following removal of third molars, notably regarding mechanical perception and gustatory threshold assessment. abstract_id: PUBMED:9574942 Is short-term neurosensory testing after removal of mandibular third molars efficacious? Objective: The purpose of this study was the validation of two-point discrimination, Semmes-Weinstein, and pinprick tests of possible sensory disturbance of the inferior alveolar nerve after the surgical removal of lower wisdom teeth. Study Design: Forty-two patients who had undergone elective unilateral lower wisdom tooth removal and 30 control subjects were given two-point discrimination, Semmes-Weinstein, and pinprick tests bilaterally in the dermatome of the mental nerve. Test results were compared to the patients' subjective experiences of sensory disturbance. Statistical analysis was done with multivariate analysis of variance. Results: Untreated sides in patients and test sides in control subjects agreed well for all tests. Five of the 42 patients mentioned sensory disturbance, which was confirmed objectively in 3 (by pinprick and two-point discrimination tests). Testing revealed that 16 of the 42 patients had abnormal pinprick and two-point discrimination tests without subjective sensory disturbance. No abnormal values were found for Semmes-Weinstein tests in any of the patients. Multivariate analysis of variance identified a univariately significant effect of the two-point discrimination test (p = 0.027); all other interactions were multivariately insignificant.
Conclusion: The value of neurosensory testing after third molar removal is limited because of inconsistency between objective test results and subjective findings. abstract_id: PUBMED:30895087 Impact of third molars on mandibular relapse in post-orthodontic patients: A meta-analysis. Background/purpose: Whether third molars contribute to or aggravate relapse, particularly in the mandibular dental arch, after orthodontic treatment remains controversial. Orthodontic clinicians vary widely in their practice regarding prophylactic third molar removal after orthodontic treatment. The present study systematically reviewed and meta-analyzed the available literature, and assessed the impact of third molar removal on the relapse of mandibular dental arch alignment after orthodontic treatment. Materials And Methods: Relevant literature was searched on online databases, namely Pubmed, Embase, and Cochrane. Outcomes of post-orthodontic mandibular relapse were evaluated in terms of the Little's irregularity index, intermolar width, and arch length. Statistical analysis was conducted using the Review Manager software (Version 5.3, The Cochrane Collaboration, Oxford, England). Results: Our initial search strategy yielded 360 citations, of which three retrospective studies were selected. The Little's irregularity index (weighted mean difference = 0.80, 95% confidence interval = 0.13-1.47, P = 0.02) differed significantly between the erupted third molar extraction group and agenesis third molar group, whereas the arch length and intermolar width did not. No outcome differed significantly between the impacted third molar extraction group and agenesis third molar group. Conclusion: Removal of the mandibular third molars is recommended for alleviating or preventing long-term incisor irregularity. abstract_id: PUBMED:38274929 Association of Radiographic Signs in Determining the Proximity of Mandibular Third Molar Roots to the Mandibular Canal and Postoperative Occurrence of Neurosensory Disorders: A Cohort Study. Background: The routine oral and maxillofacial procedure involving the surgical removal of impacted mandibular third molars comes with inherent risks to nearby anatomical structures. Proximity of mandibular third molar roots to the inferior alveolar nerve (IAN) poses a significant risk for injury, prompting the need for reliable assessment methods. Radiographic indicators, particularly those observed on intraoral periapical radiographs (IOPARs), offer a dependable means to evaluate proximity. Objectives: This study seeks to examine the closeness between the mandibular canal and the roots of mandibular third molars using IOPARs and to assess the incidence of postoperative neurosensory disorders. Methods: A cohort of 100 subjects aged 18 to 25, presenting for partially erupted/impacted mandibular third molar removal, underwent IOPAR examinations. Data analysis employed IBM SPSS Statistics for Windows, Version 12 (Released 2004; IBM Corp., Armonk, New York, United States), calculating frequencies, percentages, means, standard deviations, and ranges. Radiographic signs of proximity were evaluated, and a standardized surgical procedure was performed under local anesthesia. Postoperative neurosensory disorders were assessed using various methods. Results: Of the evaluated subjects, darkening of the root (52%) was the most prevalent radiographic sign, followed by interruption of the white line of the canal (20%).
The prevalence of radiographic signs varied, with none of the patients experiencing narrowing of the root. Postsurgical paraesthesia assessment revealed no nerve sensitivity alterations in any patient. Conclusion: Preoperative radiographic examination is imperative for determining the relationship between mandibular third molar roots and the inferior alveolar canal, aiding in preventing IAN damage during extraction. Contrary to radiographic signs, there was no observed association between impacted mandibular third molar radiographic signs and the occurrence of postoperative neurosensory disorders. abstract_id: PUBMED:38137730 Neurosensory Deficits of the Mandibular Nerve Following Extraction of Impacted Lower Third Molars-A Retrospective Study. Background: Neurosensory deficits are one of the major complications after impacted lower third molar extraction, impairing patients' quality of life. This study aimed to evaluate the incidence of neurosensory deficits after lower third molar extraction and compare it radiologically to the corresponding position of the inferior alveolar nerve. Methods: In a retrospective study, data on all patients who underwent impacted lower third molar extraction between January and December 2019 were compiled. Therefore, clinical data as well as preoperative radiological imaging were assessed. Results: In total, 418 patients who underwent lower third molar extractions (n = 555) were included in this study. Of these, 33 (5.9%) had short-term (i.e., within the initial 7 postoperative days) and 12 (1.3%) long-term (i.e., persisting after 12 months) neurosensory deficits documented. The inferior alveolar nerve position in relation to the tooth roots showed apical position in 27%, buccal position in 30.8%, lingual position in 35.4%, and interradicular position in 6.9%. Conclusions: A statistically significant increased incidence of neurosensory deficits occurs when the inferior alveolar nerve is directly positioned lingually to the tooth roots (p = 0.01). abstract_id: PUBMED:37123751 Efficacy of Intranasal Atomized Dexmedetomidine for Sedation in Surgical Removal of Impacted Mandibular Third Molars: A Prospective Study. Aims And Objectives: To assess the efficacy of dexmedetomidine atomized intranasally for sedation during surgical removal of impacted mandibular third molars. Materials And Methods: A prospective randomized trial was conducted on 25 anxious patients between the ages of 18 and 40 who had impacted lower third molars. An intranasal atomization device was used to give the medication 30 minutes prior to the surgical procedure. The Ramsay sedation score and Observer's assessment of alertness/sedation score were used to assess intranasal sedation. Results: The sedative effect began between 30 and 45 minutes and was nearly back to baseline by 105 minutes after the administration of intranasal dexmedetomidine. Conclusion: Intranasal delivery of 1.5 mg/kg atomized dexmedetomidine for patients undergoing surgical removal of impacted mandibular third molars is safe, feasible, and clinically efficient in daycare settings based on the sedation scores and the secondary variables assessed. abstract_id: PUBMED:32589125 Prophylactic removal of impacted mandibular third molars: a systematic review and economic evaluation. Background: Impacted third molars are third molars that are blocked, by soft tissue or bone, from fully erupting through the gum. This can cause pain and disease.
The treatment options for people with impacted third molars are removal or retention with standard care. If there are pathological changes, the current National Institute for Health and Care Excellence guidance states that the impacted third molar should be removed. Objective: The objective of this study was to appraise the clinical effectiveness and cost-effectiveness of the prophylactic removal of impacted mandibular third molars compared with retention of, and standard care for, impacted third molars. Methods: Five electronic databases were searched (1999 to 29 April 2016) to identify relevant evidence [The Cochrane Library (searched 4 April 2016 and 29 April 2016), MEDLINE (searched 4 April 2016 and 29 April 2016), EMBASE (searched 4 April 2016 and 29 April 2016), EconLit (searched 4 April 2016 and 29 April 2016) and NHS Economic Evaluation Database (searched 4 April 2016)]. Studies that compared the prophylactic removal of impacted mandibular third molars with retention and standard care or studies that assessed the outcomes from either approach were included. The clinical outcomes considered were pathology associated with retention, post-operative complications following extraction and adverse effects of treatment. Cost-effectiveness outcomes included UK costs and health-related quality-of-life measures. In addition, the assessment group constructed a de novo economic model to compare the cost-effectiveness of a prophylactic removal strategy with that of retention and standard care. Results: The clinical review identified four cohort studies and nine systematic reviews. In the two studies that reported on surgical complications, no serious complications were reported. Pathological changes due to retention of asymptomatic impacted mandibular third molars were reported by three studies. In these studies, the extraction rate for retained impacted mandibular third molars varied from 5.5% to 31.4%; this variation can be explained by the differing follow-up periods (i.e. 1 and 5 years). The findings from this review are consistent with the findings from previous systematic reviews. Two published cost-effectiveness studies were identified. The authors of both studies concluded that, to their knowledge, there is currently no economic evidence to support the prophylactic removal of impacted mandibular third molars. The results generated by the assessment group's lifetime economic model indicated that the incremental cost-effectiveness ratio per quality-adjusted life-year gained for the comparison of a prophylactic removal strategy with a retention and standard care strategy is £11,741 for people aged 20 years with asymptomatic impacted mandibular third molars. The incremental cost per person associated with prophylactic extraction is £55.71, with an incremental quality-adjusted life-year gain of 0.005 per person. The base-case incremental cost-effectiveness ratio per quality-adjusted life-year gained was found to be robust when a range of sensitivity and scenario analyses were carried out. Limitations: No head-to-head trials comparing the effectiveness of prophylactic removal of impacted mandibular third molars with retention and standard care were identified, so the assessment group model was built on observational data. Utility data on impacted mandibular third molars and their symptoms are lacking. Conclusions: The evidence comparing the prophylactic removal of impacted mandibular third molars with retention and standard care is very limited.
However, the results from an exploratory assessment group model, which uses available evidence on symptom development and extraction rates of retained impacted mandibular third molars, suggest that prophylactic removal may be the more cost-effective strategy. Future Work: Effectiveness evidence is lacking. Head-to-head trials comparing the prophylactic removal of trouble-free impacted mandibular third molars with retention and watchful waiting are required. If this is not possible, routine clinical data, using common definitions and outcome reporting methods, should be collected. Study Registration: This study is registered as PROSPERO CRD42016037776. Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 24, No. 30. See the NIHR Journals Library website for further project information. abstract_id: PUBMED:23850159 Removal of impacted mandibular third molars using an inward fragmentation technique (IFT) - Method and first results. Purpose: Conventional surgical extraction of impacted mandibular third molars (M3M) requires a lateral flap reflection in conjunction with lateral bone removal for outward mobilization of the tooth. The aim of this report is to outline a novel inward fragmentation technique (IFT) in conjunction with an occlusal miniflap approach to reduce the amount of bone removal to a minimum. Patients And Methods: Seventeen consecutive patients (7 men and 10 women; mean age 24.4 years, range 18-36 years) required the extraction of 21 impacted M3M with a close relationship to the inferior alveolar nerve (IAN). Occlusal miniflaps were used and only occlusal bone removal was performed to expose the M3M under endoscopic vision. A central space-making cavity was created followed by inward fragmentation and mobilization of the crown and subsequent root removal through the space created. Results: Twenty of 21 sites healed uneventfully, one late infection was observed, and no permanent neurosensory lesion occurred. The mean preoperative buccal bone height was 15.5 (11-18) mm and the postoperative buccal bone height 14.7 (11-17) mm. On the 2nd day, the mean swelling level was 1.38 (0-2) on a 4-point scale, the pain level was 2.30 (0-5) on a 10 cm VAS, and mean pain duration was 2.04 days. Conclusion: An inward fragmentation technique allows preservation of >90% of the buccal bone height adjacent to mandibular third molars and may reduce postoperative morbidity without raising the risk of IAN lesions. abstract_id: PUBMED:23139516 Mandibular third molars as a risk factor for angle fractures: a retrospective study. Introduction: Anecdotal reports suggest that the presence of mandibular third molars predisposes the mandible to angle fractures. The purpose of this study was to evaluate the presence of mandibular third molars as a risk factor for angle fractures in patients with fractured mandibles. Materials And Method: A retrospective study was designed comprising patients admitted for treatment of mandibular fractures between January 2006 and January 2009. Data sources were the patients' medical records and radiographs. The predictor variables were the presence and position of the third molar. Third molar position was grouped into 9 categories based on the Winter's and Pell & Gregory classification. The outcome variable was the presence of an angle fracture.
Other study variables included age, sex, race and mechanism of injury, associated fractures and fracture location. Results: Of the 136 patients with mandibular third molars, 49 cases had angle fractures. Of the 18 patients without mandibular third molars, 3 had angle fractures. Conclusion: The results of this study demonstrate that patients with fractured mandibles and mandibular third molars are nearly 2.2 times more likely to have an angle fracture than patients without mandibular third molars. abstract_id: PUBMED:27863799 Retrospective analysis of the prevalence and incidence of caries in the distal surface of mandibular second molars in British military personnel. Mandibular third molars are commonly removed because of distal caries in the adjacent tooth. To find out the prevalence of distal caries in mandibular second molars we retrospectively studied the primary care dental records of 720 British military personnel (653 men and 67 women) from various centres. These records are standardised and personnel are required to attend for inspection regularly. Those who had been under 20 years of age at enlistment, who had served for at least five years, and had five recorded dental inspections, were included. The median (IQR) period from the first to last inspection was 15 (9.7 - 19.2) years, and inspections were a median (IQR) of 14.1 (12.8 - 15.8) months apart. A total of 59/1414 (4.2%) mandibular second molars developed caries in their distal surfaces. This was 4 percentage points higher when they were associated with a partially-erupted mandibular third molar than when associated with one that was fully erupted or absent (29/414 (7%) compared with 30/1000 (3%); p=0.001). Carious lesions developed in the distal aspect of 22/133 mandibular second molars (16.5%) that were adjacent to a mesioangularly impacted third molar. Of these, 19/22 were successfully restored. Four mesioangularly impacted mandibular third molars would have to be extracted to prevent one case of distal caries in a second molar (number needed to treat=3.25). Second molars that are associated with a partially-erupted mesioangular mandibular third molar have a higher risk of caries, and this can be reduced by removal of the third molar. However, distal caries in second molars seems to be a treatable and slowly-developing phenomenon and we recommend that the merits and risks of the prophylactic removal of third molars should be discussed with the patient, who should have long-term clinical and radiographic checks if the tooth is retained. Answer: Short-term neurosensory testing after the removal of mandibular third molars has shown limited efficacy due to inconsistency between objective test results and subjective findings. A study found that while some patients reported sensory disturbances postoperatively, which were confirmed objectively in a few cases, there were also instances where objective tests indicated abnormal results without the patients reporting any subjective sensory disturbance (PUBMED:9574942). This suggests that there may be a discrepancy between what patients perceive and what is measured through neurosensory tests shortly after surgery. Moreover, another study reported that neurosensory evaluation showed altered responses to stimuli at 7 days postoperatively in specific studied areas, but these returned to baseline values 30 days after surgery (PUBMED:30613870).
This indicates that while there may be short-term neurosensory alterations following mandibular third molar surgery, these changes are typically temporary and resolve within a month without intervention. In summary, while short-term neurosensory testing can detect alterations following mandibular third molar removal, the efficacy of these tests is limited due to the potential lack of correlation with patients' subjective experiences and the temporary nature of the sensory disturbances.
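Note: The "nearly 2.2 times" figure cited for angle fractures in this block (PUBMED:23139516) follows directly from the counts given in that abstract; a minimal arithmetic sketch in Python:

    # Relative risk of angle fracture, from the counts in PUBMED:23139516.
    with_m3, with_m3_angle = 136, 49        # patients with mandibular third molars
    without_m3, without_m3_angle = 18, 3    # patients without
    risk_with = with_m3_angle / with_m3         # ~0.360
    risk_without = without_m3_angle / without_m3  # ~0.167
    print(f"relative risk = {risk_with / risk_without:.2f}")  # ~2.16, i.e. "nearly 2.2"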
Instruction: Can group medical clinics improve lipid management in diabetes? Abstracts: abstract_id: PUBMED:24462012 Can group medical clinics improve lipid management in diabetes? Background: Group medical clinics may improve diabetes and hypertension control, but data about dyslipidemia are limited. We examined the impact of group medical clinics on lipids among patients with uncontrolled diabetes and hypertension. Methods: Prespecified secondary analysis of 239 veterans randomized to group medical clinics or usual care. Lipids were assessed at study baseline, midpoint, and end. We used linear mixed models to compare lipid levels between arms and generalized estimating equation models to compare low-density lipoprotein cholesterol (LDL-C) goal attainment. An additional post hoc analysis examined intensification of cholesterol-lowering medications in both arms. Results: At baseline, mean total cholesterol was 169.7 mg/dL (SD 47.8), LDL-C 98.2 mg/dL (SD 41.7), and high-density lipoprotein cholesterol (HDL-C) 39.3 mg/dL (SD 13.0). Median baseline triglycerides were 131 mg/dL (interquartile range 122). By study end, mean total cholesterol and LDL-C in group medical clinics were 14.2 mg/dL (P = .01) and 9.2 mg/dL (P = .02) lower than usual care, respectively; 76% of group medical clinic patients met goals for LDL-C, versus 61% of usual care patients (P = .02). Triglycerides and HDL-C remained similar between study arms. Treatment intensification occurred in 52% of group medical clinic patients, versus 37% of usual care patients between study baseline and end (P = .04). The mean statin dose was higher in group medical clinic patients at study midpoint and end. Conclusions: Group medical clinics appear to enhance lipid management among patients with diabetes and hypertension. This may be a result of greater intensification of cholesterol-lowering medications in group medical clinics relative to usual care. abstract_id: PUBMED:25016414 Atherogenic dyslipidemia: prevalence and management in lipid clinics. Background And Objective: Atherogenic dyslipidemia, which is characterized by increased triglyceride levels and reduced HDL cholesterol levels, is underestimated and undertreated in clinical practice. We assessed its prevalence and the achievement of therapeutic objectives for HDL cholesterol and triglyceride levels in patients treated at lipid and vascular risk units in Spain. Patients And Method: This was an observational, longitudinal, retrospective, multicenter study performed in 14 autonomous Spanish communities that consecutively included 1828 patients aged ≥18 years who were referred for dyslipidemia and vascular risk to 43 lipid clinics accredited by the Spanish Society of Arteriosclerosis. We collected information from the medical records corresponding to 2 visits conducted during 2010 and 2011-12, respectively. Results: Of the 1649 patients who had a lipid profile in the first visit (90.2%), 295 (17.9%) had atherogenic dyslipidemia. The factors associated with atherogenic dyslipidemia were excess weight/obesity, not taking hypolipidemic drugs (statins and/or fibrates), diabetes, myocardial infarction and previous heart failure. Of the 273 (92.5%) patients with atherogenic dyslipidemia that had a lipid profile in the last visit, 44 (16.1%) achieved the therapeutic objectives for HDL cholesterol and triglyceride levels. The predictors of therapeutic success were normal weight and normoglycemia. 
Conclusion: One of every 6 patients treated in lipid and vascular risk units had atherogenic dyslipidemia. The degree to which the therapeutic goals for HDL cholesterol and triglyceride levels were achieved in these patients was very low. abstract_id: PUBMED:35495550 Women in Control: Pioneering Diabetes Self-Management Medical Group Visits in the Virtual World. Background: The current state of diabetes self-management (DSM) education and support for diabetic patients is inadequate, especially for minority women who experience disproportionately high rates of diabetes mellitus (DM) in the US. While DSM education and support enables individuals with diabetes to make positive lifestyle choices and achieve clinical goals, this type of support is difficult to deliver in medical practice settings. Virtual reality can assist DM patients and their clinical teams by providing effective educational tools in an engaging, learner-centered environment that fosters self-efficacy and skill proficiency. Methods: Our prior research demonstrated that virtual worlds are suitable for supporting DSM education. Building upon this success, we are now investigating whether DSM virtual world medical group visits lead to similarly effective health and educational outcomes compared to face-to-face medical group visits. Currently in year one of a five-year randomized controlled trial, we aim to compare the effectiveness of a virtual world DSM medical group visit format versus a face-to-face DSM medical group visit format to increase physical activity and improve glucose control (HbA1c) among Black/African American and Hispanic women with uncontrolled DM. We will also conduct a qualitative study of participant engagement with the virtual world platform to characterize learners' interactions with the technology and assess its correlation with DSM behaviors and diabetes control. Discussion: Novel methods to promote diabetes self-management are critically needed, and the use of virtual world technology to conduct medical group visits offers a unique approach to this issue. If successful, our intervention will increase access to culturally-sensitive diabetes care and improve patient engagement in online DSM learning, leading to higher uptake of DSM behaviors and better diabetes control. Importantly, the program can be easily expanded to other chronic disease areas and scaled for widespread use. abstract_id: PUBMED:19636167 Management of type 2 diabetic patients attending diabetic outpatient clinics compared with those cared for by the general practitioners: an experience of integrated diabetes management. To improve the quality of diabetes care, a project of integrated diabetes management (PIDM) involving some general practitioners (GPs) has been recently undertaken. The purpose of this study is to compare features and treatment of patients followed by diabetic outpatient clinics (DOCs) with those cared for by GPs trained in the PIDM. Twenty-one DOCs and eleven GPs participated in the study. Patients followed by DOCs (n = 1110) showed a longer duration of diabetes and a higher prevalence of metabolic syndrome than patients followed by GPs (n = 305). More patients followed in DOCs were on insulin therapy, while a lower number received ACE/angiotensin blockers, antiplatelet agents, lipid-lowering drugs and multifactorial intervention than patients followed by GPs. The proportion of patients in good control for glucose, pressure and lipid levels was similar in the two groups.
Conversely, a higher number of patients attending DOCs were in poor glycemic, pressure, and lipid control compared with patients followed by GPs. In conclusion, PIDM provides satisfactory results in terms of overall glucose, pressure and lipid control, thus encouraging implementation of PIDM to meet the increasing demand for diabetes care. abstract_id: PUBMED:36340843 Implementation Planning for Integrating Depression Screening in Diabetes Mellitus and HIV Clinics in Botswana. Depression is highly prevalent and, when comorbid with other medical conditions, can worsen health outcomes. Implementing routine depression screening within medical clinics can ensure that patients receive suitable treatment and improve overall health outcomes. Unfortunately, depression screening within medical settings is rare, particularly in low- and middle-income countries. This qualitative study evaluated patient and clinician perspectives on implementing depression screening within HIV and diabetes clinics in Botswana. Seven clinicians and 23 patients within these clinics were purposively selected and interviewed using a guide informed by the Consolidated Framework for Implementation Research (CFIR) to understand barriers and facilitators to depression screening in medical clinics in Botswana. Interviews were recorded, transcribed, and analyzed using NVivo. Three general themes emerged: (1) Appropriateness and Acceptability: attitudes and beliefs from clinicians and patients about whether depression screening should occur in this setting; (2) Stigma as an important barrier: the need to address the negative associations with depression to facilitate screening; and (3) Recommendations to facilitate screening, including improving knowledge and awareness about depression, offering incentives to complete the screening, providing staff training, ensuring resources for treatment, the need to preserve confidentiality, and utilizing leadership endorsement. These results offer insights into how to implement depression screening within medical clinics in Botswana. These results can help design implementation strategies to increase depression screening in these clinics, which can be tested in future studies. Supplementary Information: The online version contains supplementary material available at 10.1007/s43477-022-00062-3. abstract_id: PUBMED:32399423 Ophthalmic photographer virtual clinics in medical retina. Aim: To determine whether ophthalmic photographers (OPs) can conduct virtual clinics (VCs) in medical retina appropriately and whether this increases clinic capacity. Methods: Three OPs underwent a training programme to learn how to assess and manage macular edema secondary to diabetes or retinal vein occlusion. A total of 300 consecutive patients over a 7-month observation period in 2018 were assessed in VCs by both OPs and medical retina consultants. The degree of agreement in treatment decisions between doctors and photographers, as well as adverse events, was recorded. The change in the number of patients seen in VCs over two years was also measured. Results: There was 100% agreement in management decisions between doctors and photographers during the 300-patient observation period. No adverse events were recorded. In 2017, 572 patients were seen in VCs by doctors. After OP clinics were introduced in 2018, this number increased by 24% to 709 patients seen by both photographers and doctors.
There was a significant increase in the mean number of patients seen per month between 2017 (47.7±11.7) and 2018 (59.1±14.4; P=0.045, 95% CI -22.5 to -0.296). Conclusion: OPs can manage certain medical retina patients in VCs appropriately and can increase clinic capacity significantly for particular conditions. abstract_id: PUBMED:29698152 Analysis of the Financial Cost of Diabetes Mellitus in Four Cocoa Clinics of Ghana. Objective: To estimate the financial cost of managing diabetes mellitus in four Cocoa clinics of Ghana. Methods: A descriptive cross-sectional study of diabetes management was carried out in the four Cocoa clinics of Ghana from January to December 2009. The "cost-of-illness" approach from the institutional perspective was used. A pretested data extraction form was used to review the medical records of 304 randomly selected diabetic patients. Results: The patients' mean age was 55.4 ± 9.4 years. The mean annual financial cost of managing one diabetic case at the clinics was estimated to be Ghana cedi (GHS) 540.35 (US $372.65). Service cost constituted 22% of the cost, whereas direct medical cost constituted 78% of the cost. Drug cost was 71% of the financial cost. The cost of hospitalization per patient-day at Cocoa clinics was estimated at GHS 32.78 (US $22.61). The total financial cost of diabetes management was estimated at GHS 420,087.67 (US $289,715.63). This accounted for 8% of the total expenditure for the clinics in the year 2009. The study showed that facility type, type of diabetes, and presence of complications are associated with the cost of diabetes management to Cocoa clinics. Conclusions: The mean age of detection suggests delay in diagnosis of diabetes mellitus and accompanying complications, which has cost implications. Policy that enhances early detection of diabetes in clinical practice would therefore improve management and reduce costs. The financial cost of managing diabetes can be used to forecast the economic burden of the disease in the area. abstract_id: PUBMED:15913128 The QUEST for quality: what are medical groups doing about it? Background: It is important to know whether medical groups have quality improvement (QI) priorities, approaches, activities, and congruence that will allow them to achieve major improvements in quality of care. Methods: Simultaneous surveys were sent to medical and administrative leaders of 18 medical groups, 84 of their constituent clinics, and their primary care physicians providing the majority of the primary care in the Minneapolis/St. Paul metropolitan area. Results: Of the 18 medical groups, 17 have an overall physician leader for QI and 11 have the same at each of their constituent clinics. Nearly 100% of clinic leaders report their group leaders see QI as important and expect clinics to improve care, while 69%-84% of their physicians report the clinic leaders are committed to QI for diabetes and heart disease. Diabetes and coronary heart disease are the priorities for improvement by 14 and 12 medical groups, respectively. Only seven groups report adequate QI resources and only three report that incentives are aligned with quality. Intermediate groups generally appear to be just as active and supportive of QI as large ones. Discussion: These medical groups and their constituent clinics and physicians appear ready to work on the issues raised by the Institute of Medicine (IOM).
However, they believe that limited resources and financial incentives that are not aligned with quality constrain their ability to help America cross the quality chasm. abstract_id: PUBMED:23804167 Diabetes Implementation of a Self-management Program in Resource Poor and Rural Community Clinics. Purpose: To evaluate the implementation of a brief diabetes self-management support intervention designed for resource-poor community clinics. Methods: The authors conducted a pilot study among patients with type 2 diabetes in 3 community clinics. The intervention consisted of research assistants introducing and reviewing a diabetes self-management guide, helping patients set an achievable behavioral action plan, and following up with 2 telephone sessions. The primary outcome was patients' success setting and achieving behavioral goals. Results: All participants set an action plan (N = 247); most focused on physical activity or diet (97%). The initial session took an average of 15 minutes. At 2 to 4 weeks, 200 participants were contacted; 68% recalled their action plan; and 84% of these achieved it. At 6 to 9 weeks, approximately half of those who completed the first call were reached for the second call. Of those who remained in the intervention, 79% recalled their action plan, and 80% of these achieved it. At the end of the study, 62% of those initially enrolled reported behavior change. Most participants who did not complete the intervention could not be reached for telephone follow-up. Conclusions: Although only about a third of patients remained engaged through the 2 follow-up calls, most of those who did reported they had achieved their action plan. This pilot study provides insight into initiating brief diabetes self-management strategies in resource-poor community clinics. Although telephone follow-up was challenging, using the self-management guide and action plan framework, particularly during the initial clinic visit, helped focus patients on behavior change. abstract_id: PUBMED:28784176 A combination of process of care and clinical target among type 2 diabetes mellitus patients in general medical clinics and specialist diabetes clinics at hospital levels. Background: This study compares a combination of processes of care and clinical targets among patients with type 2 diabetes mellitus (T2DM) between specialist diabetes clinics (SDCs) and general medical clinics (GMCs), and how differences between these two types of clinics differ with hospital type (community, provincial and regional). Methods: Type 2 diabetes mellitus patient medical records were collected from 595 hospitals (499 community, 70 provincial, 26 regional) in Thailand between April 1 and June 30, 2012, resulting in a cross-sectional sample of 26,860 patients. Generalized linear mixed modeling was conducted to examine associations between clinic type and quality of care. The outcome variables of interest were split into clinical targets and process of care. A subsequent subgroup analysis was conducted to examine if the nature of clinical target and process of care differences between GMCs and SDCs varied with hospital type (regional, provincial, community). Results: Regardless of hospital type (regional, provincial, or community), patients attending SDCs were considerably more likely to have eye and foot exams. In the larger hospitals (regional and provincial), patients attending SDCs were more likely to achieve the HbA1c exam, All FACE exam, BP target, and the Num7Q.
Interestingly, SDCs performed better than GMCs only at provincial hospitals for the LDL-C target and the All7Q. Finally, patients with T2DM who attended community hospital-GMCs had a better chance of achieving the blood pressure target than patients who attended community hospital-SDCs. Conclusions: Specialized diabetes clinics outperform general medical clinics for both regional and provincial hospitals for all quality of care indicators, and the number of quality of care indicators achieved was never lower. However, this better performance of SDCs was not observed in community hospitals. Indeed, GMCs outperformed SDCs for some quality of care indicators in the community-level setting. Answer: Yes, group medical clinics can improve lipid management in diabetes. A study examining the impact of group medical clinics on lipids among patients with uncontrolled diabetes and hypertension found that by the end of the study, mean total cholesterol and low-density lipoprotein cholesterol (LDL-C) in group medical clinic patients were significantly lower than in those receiving usual care. Additionally, a higher percentage of group medical clinic patients met goals for LDL-C compared to usual care patients. The study also noted that treatment intensification occurred more frequently in group medical clinic patients, and the mean statin dose was higher in these patients at study midpoint and end (PUBMED:24462012). These findings suggest that group medical clinics can be an effective approach to enhance lipid management among patients with diabetes and hypertension, potentially due to more aggressive medication management strategies within the group clinic setting.
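Note: The LDL-C goal figures in the answer above (76% in group medical clinics versus 61% with usual care, PUBMED:24462012) can also be restated as a number needed to treat. The NNT itself is not reported in the abstract, so the following Python sketch is a derived illustration only:

    # Derived NNT from the goal-attainment rates in PUBMED:24462012
    # (illustrative; the abstract reports only the percentages, not an NNT).
    p_group, p_usual = 0.76, 0.61
    difference = p_group - p_usual   # absolute difference in goal attainment
    nnt = 1 / difference
    print(f"difference = {difference:.0%}, NNT = {nnt:.1f}")  # 15% and ~6.7

That is, under this simple reading of the reported rates, roughly one additional patient reaches the LDL-C goal for every seven managed in a group medical clinic rather than with usual care.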
Instruction: Ion release from gold/platinum dental alloy: could release of other elements be accountable in the contact allergy attributed to the gold? Abstracts: abstract_id: PUBMED:16617407 Ion release from gold/platinum dental alloy: could release of other elements be accountable in the contact allergy attributed to the gold? Objectives: The release of metal ions (Al, Ag, Au, Ca, Cd, Co, Cr, Cu, Mg, Mo, Ni, Pd, Pt, Ti, and Zn) from the commercial gold/platinum (Au/Pt) dental alloy of declared composition was studied. Methods: Au/Pt was soaked in pH 6.0 phosphate buffer, pH 3.5 phosphate buffer and a pH 3.5 mixture of lactic, formic and acetic acid, and incubated at 37 degrees C for 1, 2, 3, 4, 5, 6, 7, 14, 21, and 30 days. Six samples (n = 6) of every solution were prepared for each time period. Inductively coupled plasma atomic emission spectroscopy was used for analysis of the released elements. Results: Results demonstrated release of only Cr, Cu, Fe, and Zn from the tested Au/Pt dental alloy (ANOVA, p < 0.001 for buffer, time, and interaction, respectively); however, only Cu and Zn were declared. Conclusions: The undeclared chromium from the Au/Pt dental alloy, or some other element, might be responsible for the contact allergy thus far attributed to the gold. abstract_id: PUBMED:9874018 Metal release from gold-containing jewelry materials: no gold release detected. Metal release from 13 different gold-containing jewelry alloys stored for 1 and 3 weeks in artificial sweat was analysed. For chemical analysis, inductively-coupled plasma detection (ICP) and atomic absorption spectrophotometry (AA), with flame and with furnace, were used. No release of gold was detected. It is unlikely that skin contact with gold-containing metallic items such as jewelry is responsible for inducing contact allergy to gold or allergic contact dermatitis due to the gold. The patch-test reactivity to gold sodium thiosulfate needs to be explained by some other mechanism(s). abstract_id: PUBMED:12418721 Gold concentration in blood in relation to the number of gold restorations and contact allergy to gold. Previous studies have demonstrated an association between gold allergy and the presence of dental gold restorations. The aim of the present study was to investigate the relationship between the concentration of gold in blood (B-Au) and the number of tooth surfaces with gold alloys in subjects with and without contact allergy to gold. In 80 patients referred for patch testing because of eczematous disease, blood samples were taken and analyzed for B-Au using inductively coupled plasma mass spectrometry. The detection limit for the Au determination was 0.04 microg/L. In addition, a dentist made a clinical and radiological examination of the patients and registered the number of dental gold surfaces. Patients with dental gold restorations had a statistically significantly higher B-Au (Mann-Whitney U test, P = 0.025; range < 0.04-1.07 microg/L) than patients without (range < 0.04-0.15 microg/L). Furthermore, a positive correlation was found between B-Au and the number of dental gold surfaces (P < 0.01). There was no statistically significant difference in B-Au between persons with and without contact allergy to gold. The study thus indicates that gold is released from dental restorations and taken up into the circulation. abstract_id: PUBMED:8733369 Contact allergy to gold in dental patients. Gold sodium thiosulphate (GSTS) in patch test series commonly yields positive reactions.
In this study of dental patients, the clinical relevance of these positive reactions, the suitability of gold compounds as patch test materials, and the value of the lymphocyte proliferation test in gold contact allergy were evaluated. The frequency of positive patch test reactions to GSTS was 12.4% in the dental series. Fifty-two patients, only two of whom were male, were studied. Thirteen (25%) had had symptoms from jewellery or dental restorations. However, in most cases gold allergy was subclinical. A 10% aqueous solution of gold sodium thiomalate (GSTM) was almost as good as 0.5% GSTS in petrolatum as a marker of gold contact allergy, but 0.001% potassium dicyanoaurate (PDCA) yielded a high percentage of false-negative results. Seventy-three per cent of the subjects with positive patch tests to GSTS responded to gold compounds in vitro in the lymphocyte proliferation test, whereas 13 controls without gold contact sensitivity were negative. Consistent results in skin and lymphocyte proliferation tests provide support for the concept that gold salt-induced 'allergic-like' reactions are truly allergic in nature. abstract_id: PUBMED:20433442 Contact allergy to gold as a model for clinical-experimental research. The high frequency of contact allergy to gold in patients with dermatitis was established after exhaustive skin testing, determining the right test agent, the best concentration, and repeated test readings. Metallic gold in contact with skin is slowly ionized, permitting absorption and haptenisation. Contact allergy to gold is statistically correlated to the presence of dental gold. But in many case reports it has also been attributed to wearing gold jewellery, albeit not statistically demonstrated. Epicutaneous testing with gold salts increases the blood gold level, and by intramuscular injection systemic contact dermatitis is provoked in an allergic individual. In coronary heart disease, gold-coated intravascular stents have been shown to be correlated to contact allergy and even to an increased risk of restenosis. Gold is far from inert. abstract_id: PUBMED:12013197 Contact allergy to gold is correlated to dental gold. Questionnaire studies have indicated that patients with dental gold will more frequently have contact allergy to gold. This study aimed at investigating the relationship between contact allergy to gold and the presence and amount of dental gold alloys. A total of 102 patients were referred for patch testing because of suspicion of contact allergy. Patch tests were performed with gold sodium thiosulphate 2% and 5%. The patients underwent an oral clinical and radiological examination. Contact allergy to gold was recorded in 30.4% of the patients, and of these 74.2% had dental gold (p=0.009). A significant correlation was found between the amount of gold surfaces and contact allergy to gold (p=0.008), but there was no statistical relationship to oral lesions. It is concluded that there is a positive relationship between contact allergy to gold and presence and amount of dental gold alloys. abstract_id: PUBMED:12423401 Dental gold alloys and contact allergy. Contact allergy to gold as demonstrated by patch testing is very common among patients with eczematous disease and seems to be even more frequent among patients with complaints from the oral cavity. There is a positive correlation between gold allergy and the presence of dental gold.
Gold allergy is often found in patients with non-specific stomatitides as well as in those with lichenoid reactions or with only subjective symptoms from the oral cavity. The therapeutic effect of substituting other dental materials for gold alloys is conspicuous in casuistic reports but less impressive in larger patient materials. The amount of dental gold is correlated qualitatively and quantitatively to the blood level of gold, and the effects, if any, of circulating blood gold are unknown. There is clearly a need for prospective studies in the field and gold sodium thiosulfate is considered an important item in the dental series for patch testing. abstract_id: PUBMED:18759893 A correlation found between gold concentration in blood and patch test reactions in patients with coronary stents. Background: Patients with dental gold restorations are known to have a higher level of gold concentration in blood (B-Au). Objectives: To further investigate, in a study on patients with intracoronary stents and contact allergy to metals, the gold and nickel release from stainless steel stents with (Au stent) and without (Ni stent) gold plating. Method: A total of 460 patients treated with stenting underwent patch testing with metals, and information on gold and nickel exposure and blood samples were collected. About 200 blood samples were randomly selected and the analysis of B-Au and nickel concentration in blood (B-Ni) was made using inductively coupled plasma mass spectrometry. Results: There was a correlation between the intensity of the Au patch test reaction and B-Au (P < 0.001). This correlation could not be seen between the Ni patch test reaction and B-Ni. An Au stent gave a fivefold higher B-Au than a Ni stent. Conclusions: Gold is released from the Au stent, and patients with an Au stent have a fivefold higher B-Au than patients with a Ni stent. The patch test reactions for gold were correlated with B-Au. abstract_id: PUBMED:8089283 Clinical relevance of contact allergy to gold sodium thiosulfate. Background: In our routinely patch tested patients with dermatitis, gold was shown to be the second most common sensitizer. In most patients it was difficult to explain the source of sensitization to gold and to see any clinical relevance. Objective: Our purpose was to elucidate the clinical relevance of contact allergy to gold sodium thiosulfate. Methods: A questionnaire on gold exposure and gold sensitivity was answered by 156 consecutive patients with dermatitis. In another group of 1056 routinely tested patients, data were collected and analyzed regarding factors such as profession, atopy, localization of the dermatitis, and presence of contact allergies. Results: Gold allergy was overrepresented in those having dental gold. In persons with contact allergy to gold, a dermatitis on the ears, fingers, or eye area was most frequent. Conclusion: Exposure to gold jewelry and to dental gold may be important in the sensitization and elicitation of dermatitis. abstract_id: PUBMED:23844864 Cobalt, nickel and chromium release from dental tools and alloys. Background: Cobalt-chromium alloys are used as casting alloys by dental technicians when producing dental prostheses and implants. Skin exposure and metal release from alloys and tools used by the dental technicians have not been studied previously. Objectives: To study the release of cobalt, nickel and chromium from alloys and tools that come into contact with the skin of dental technicians.
Methods: Cobalt and nickel release from tools and alloys was tested with the cobalt spot test and the dimethylglyoxime test for nickel. Also, the release of cobalt, nickel and chromium in artificial sweat (EN1811) at different time-points was assessed. Analysis was performed with inductively coupled plasma-mass spectrometry. Results: Sixty-one tools were spot tested; 20% released nickel and 23% released cobalt. Twenty-one tools and five dental alloys were immersed in artificial sweat. All tools released cobalt, nickel and chromium. The ranges were 0.0047-820, 0.0051-10 and 0.010-160 µg/cm²/week for cobalt, nickel and chromium, respectively. All dental alloys released cobalt in artificial sweat, with a range of 0.0010-17 µg/cm²/week, and they also released nickel and chromium at low concentrations. Conclusions: Sensitizing metals are released from tools and alloys used by dental technicians. This may cause contact allergy and hand eczema. Answer: Yes, the release of other elements from gold/platinum dental alloys could be accountable for the contact allergy attributed to gold. A study on the release of metal ions from a commercial gold/platinum dental alloy found that elements such as chromium (Cr), copper (Cu), iron (Fe), and zinc (Zn) were released, but only Cu and Zn were declared in the alloy's composition. The study suggested that the undeclared chromium or some other element might be responsible for the contact allergy that has been attributed to gold (PUBMED:16617407). This finding is supported by other research indicating that contact allergy to gold is correlated with the presence of dental gold, and that gold is released from dental restorations and taken up into the circulation (PUBMED:12418721, PUBMED:12013197, PUBMED:12423401). Furthermore, a study on patients with coronary stents found a correlation between the intensity of gold patch test reactions and the concentration of gold in the blood, indicating that gold is released from gold-plated stents and may contribute to contact allergy (PUBMED:18759893). However, it is important to note that another study found no release of gold from gold-containing jewelry alloys in artificial sweat, suggesting that skin contact with such items is unlikely to induce contact allergy to gold (PUBMED:9874018). Overall, while gold itself can be released and cause contact allergy, the presence of other elements in gold alloys may also play a significant role in allergic reactions.
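Note: To put the release rates from PUBMED:23844864 in context, the nickel range reported for tools (0.0051-10 µg/cm²/week in artificial sweat) can be compared against the EU REACH restriction for articles in direct and prolonged skin contact, commonly cited as 0.5 µg/cm²/week. Treating that limit as the benchmark is an assumption of this Python sketch, not something stated in the abstract:

    # Comparing the reported nickel release range (PUBMED:23844864) against the
    # REACH limit of 0.5 µg/cm²/week for prolonged skin contact (assumed benchmark).
    ni_low, ni_high = 0.0051, 10.0   # µg/cm²/week, tools in artificial sweat
    reach_limit = 0.5                # µg/cm²/week
    print(f"lowest: {ni_low / reach_limit:.1%} of the limit")   # ~1.0%
    print(f"highest: {ni_high / reach_limit:.0f}x the limit")   # 20x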
Instruction: Do capacitively coupled electric fields accelerate tibial stress fracture healing? Abstracts: abstract_id: PUBMED:18055921 Do capacitively coupled electric fields accelerate tibial stress fracture healing? A randomized controlled trial. Background: Tibial stress fractures increasingly affect athletes and military recruits, with few known effective management options. Electrical stimulation enhances regular fracture healing, but the effect on stress fractures has not been definitively tested. Hypothesis: Capacitively coupled electric field stimulation will accelerate tibial stress fracture healing. Study Design: Randomized controlled trial; Level of evidence, 1. Methods: Twenty men and 24 women with acute posteromedial tibial stress fractures were referred from local clinicians. Subjects were randomly assigned active or placebo capacitively coupled electric field stimulation to be applied for 15 hours per day until healed, given supplemental calcium, and instructed to rest from provocative training. Healing was confirmed when hopping to 10 cm for 30 seconds could be achieved without pain. Results: No difference in time to healing was detected between treatment and placebo groups. Women in the treatment group healed more slowly than did the men (P = .05). Superior treatment compliance was associated with reduced time to healing (P = .003). Rest noncompliance was associated with increased time to healing (P = .05). Conclusion: Whole-group analysis did not detect an effect of capacitively coupled electric field stimulation on tibial stress fracture healing; however, greater device use and less weightbearing loading enhanced the effectiveness of the active device. More severe stress fractures healed more quickly with capacitively coupled electric field stimulation. Clinical Relevance: Although the use of capacitively coupled electric field stimulation for tibial stress fracture healing may not be efficacious for all, it may be indicated for the more severely injured or elite athlete/recruit whose incentive to return to activity may motivate superior compliance. abstract_id: PUBMED:15566133 The effect of pulsed ultrasound in the treatment of tibial stress fractures. Tibial stress fractures commonly occur in athletes and military recruits. This prospective, randomized, double-blind clinical study sought to determine whether pulsed ultrasound reduces tibial stress fracture healing time. Twenty-six midshipmen (43 tibial stress fractures) were randomized to pulsed ultrasound or placebo treatment. Twenty-minute daily treatments continued until patients were asymptomatic with signs of healing on plain radiographs. The groups were not significantly different in demographics, delay from symptom onset to diagnosis, missed treatment days, total number of treatments, or time to return to duty. Pulsed ultrasound did not significantly reduce the healing time for tibial stress fractures. abstract_id: PUBMED:7641431 Use of capacitive coupled electric fields in stress fractures in athletes. The authors report the results of an open study on the treatment of stress fracture in athletes by capacitive coupling, a bone healing stimulation method promoting bone formation by application of alternating current in the form of a sinusoidal wave. Twenty-five lower-limb (navicular, 2nd and 5th metatarsal, tibia, fibula, and talus) stress fractures in 21 athletes (mean age, 21.8 years old) were treated. The mean stimulation time was 52 days (navicular fractures, 60 days). 
Twenty-two fractures were healed, one was not healed, and two were improved. This preliminary report shows that capacitive coupling can be used safely in the treatment of these stress fractures. abstract_id: PUBMED:29954458 Tension band plating of an anterior tibial stress fracture nonunion in an elite athlete, initially treated with intramedullary nailing: a case report. Background: Leg pain in athletes is a common condition and is often related to tibial stress fracture. When non-operative treatment fails, the optimal surgical treatment is controversial. The aim of this study was to report a case of tension band plating of an anterior tibial stress fracture nonunion, treated previously with intramedullary nailing. To the best of our knowledge, this is the first reported case in which a tension band plate was placed without removing the preexisting intramedullary nail. Case Presentation: The tibial shaft is a common location of stress fracture in athletes. Anterior tibial stress fractures are difficult to manage. When conservative treatment fails, intramedullary nailing is the mainstay of treatment. However, nonunion is a serious complication. In our case, a non-united anterior tibial stress fracture, treated with intramedullary nailing, was addressed with the application of a pre-bent compression plate over the nail in a 23-year-old French man of African origin who is an elite football player. At 3 months postoperatively, he was pain-free and started light exercises. At 6 months postoperatively, complete radiologic union of the fracture was evident. He was symptom-free; at that time he resumed a full training program and returned to playing football at his preinjury high competition level. Conclusions: Compression plating is a valuable method of treating non-united anterior tibial stress fractures. We believe that anterior tension band plating is superior to intramedullary nailing in managing anterior tibial stress fractures, not only after failure of intramedullary nailing, but also as a first-line surgical treatment. This technique offers advantages, such as no violation of the extensor mechanism and no risk of anterior knee pain, and it directly addresses the underlying problem of distraction forces acting on the anterior tibial cortex and compromising fracture healing. Especially in high-level athletes, who cannot tolerate prolonged inactivity, early surgical intervention for anterior tibial stress fractures with tension band plating is a reliable option that can accelerate recovery. abstract_id: PUBMED:3425787 Transverse tibial stress fractures. A special problem. Five transverse tibial stress fractures have been reviewed. Two of three cases in the anterior midtibia and one in the proximal posteromedial tibia had radionuclide scans. These cases demonstrated minimal scan activity at the stress fracture site. This minimal scan activity in patients with a clinical history of pain and a tender mass suggests poor bone repair and nonunion. One anterior midtibial stress fracture developed a complete fracture. A proximal anterior tibial ridge stress fracture revealed radiological and clinical evidence of healing. abstract_id: PUBMED:32104269 Hyperhomocysteinemia inhibits tibial fracture healing in rats through PI3K/AKT signaling pathway. To explore the influence of hyperhomocysteinemia (hHcys) on tibial fracture healing in rats and its effect on the phosphatidylinositol 3-hydroxy kinase (PI3K)/protein kinase B (AKT) signaling pathway.
A total of 36 Sprague-Dawley rats were randomly divided into a sham group (n=12), a tibial fracture group (n=12) and an hHcys + fracture group (n=12). The rats in the tibial fracture group underwent tibial fracture surgery, while a model of combined tibial fracture and hHcys was established in the hHcys + fracture group. The level of plasma homocysteine (Hcy) in each group was analyzed using a fully automatic biochemical analyzer, fracture-site biomechanical measurements were performed, and the ultimate bending strength and torque were calculated. Moreover, the protein expressions of PI3K and phosphorylated (p)-AKT in tibial tissues were detected using western blotting, the messenger ribonucleic acid (mRNA) levels of Bcl-2 associated X protein (Bax) and caspase-3 were detected using quantitative polymerase chain reaction (qPCR), apoptosis was detected via terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) staining, and the expressions of inflammatory factors were detected via immunohistochemistry. Compared with the sham group, the tibial fracture group and the hHcys + fracture group had a significantly increased level of plasma Hcy, significantly decreased ultimate bending strength and torque, markedly decreased relative protein expressions of PI3K and p-AKT, increased mRNA levels of Bax and caspase-3 and an increased expression of the pro-inflammatory factor tumor necrosis factor-α (TNF-α). Compared with the tibial fracture group, the hHcys + fracture group had a higher level of plasma Hcy, lower ultimate bending strength and torque, lower relative protein expressions of PI3K and p-AKT, higher mRNA levels of Bax and caspase-3, a higher apoptosis rate and a higher expression of TNF-α. hHcys blocks downstream apoptotic signal transduction, promotes apoptosis and the inflammatory response, and impairs fracture healing by acting on the PI3K/AKT signaling pathway. abstract_id: PUBMED:31102562 Chronic anterior tibial stress fractures in athletes: No crack but intense remodeling. Purpose: Delayed healing of anterior tibial stress fractures in athletes is related to high tensional forces acting across a putative fracture gap. These forces lead to crack propagation and create strains that exceed tissue differentiation thresholds for new bone to form in the gap. The "dreaded black line" is a radiographic hallmark sign of stress fractures considered to represent a transverse fracture gap. However, whether a fracture gap truly exists at the microscopic level remains unclear. The aim of this study was to describe the area of the "dreaded black line" microscopically and to identify signs of delayed healing. Methods: Between 2011 and 2016, we included seven athletes with chronic anterior mid-shaft tibial stress fractures. The fracture site was excised as a cylindrical biopsy. The biopsy was evaluated with micro-CT and histology.
The formation of new bone in the defect was evaluated radiographically. Results: The "dreaded black line" seen on pre-operative radiographs in all patients could not be seen at the microscopic level. Instead, the area of the putative crack showed widened resorption cavities, lined with active osteoblasts, and surrounded by immature bone. This area of intense remodeling seemed to create a false impression of a fracture line on radiographs. Complete cortical continuity was restored at the biopsy site at a median of 8 months (range 6-13 months). Conclusion: Tibial stress fractures in athletes normally show no fracture defect, but a region of increased remodeling. The healing process is already ongoing but seems mechanically insufficient. abstract_id: PUBMED:20218423 Anterior mid-tibial stress fractures. Results of surgical treatment. Background And Aims: An anterior mid-tibial stress fracture is an uncommon but possibly career-threatening condition for an athlete. We wanted to evaluate the results of the surgical treatment of this notorious stress fracture and compare two different surgical methods. Material And Methods: Forty-nine anterior mid-tibial stress fractures were treated surgically in 45 patients during the years 1985-2005. All the patients were athletes, mainly runners. The mean age of the patients was 26 years. Thirty-four of the fractures occurred in men and 15 in women. The first method of treatment (anteromedial and lateral drilling) was used in 20 operations and the second method (laminofixation) in 29 operations. Results: Good results were achieved with drilling in only 50% of the operations, whereas with laminofixation good results were achieved in 93% of operations. This difference was statistically significant (p = .002). Healing of the stress fracture after laminofixation occurred in less than 6 months. The length of the plate used in the laminofixation had no effect on the end result. Conclusions: An anterior mid-tibial stress fracture may often lead to delayed union or non-union in vigorously training athletes. Surgical treatment with laminofixation proved to be superior to tibial fracture site drilling. abstract_id: PUBMED:33739018 The role of biophysical stimulation with PEMFs in fracture healing: from bench to bedside. Clinical biophysics investigates the relationship between non-ionizing physical energy and the human body. This narrative review aims to summarize the current evidence on the efficacy of PEMF therapy in the promotion of fracture healing. The effectiveness of PEMFs has been deeply investigated in preclinical in vitro and in vivo studies and level-I clinical studies. All these studies showed that only PEMF devices with specific physical wave features (i.e., pulse shape, frequency and amplitude) could significantly promote bone repair. Moreover, the dose-response relationship was also defined in preclinical studies, thus providing the minimum exposure time needed in PEMF therapy. PEMFs are currently employed in the management of several bone injuries, including acute fractures at non-union risk, non-unions, osteotomies, stress fractures and osteonecrosis. Moreover, several ongoing studies are investigating the effectiveness of PEMFs in emerging clinical conditions, so the indications for PEMF therapy could potentially expand in future years. Answer: The evidence on whether capacitively coupled electric fields accelerate tibial stress fracture healing is mixed.
A randomized controlled trial (Level of evidence, 1) with 44 participants did not detect a significant difference in healing time between the treatment and placebo groups when analyzing the whole group. However, the study found that greater device use and less weightbearing loading enhanced the effectiveness of the active device, and that more severe stress fractures healed more quickly with capacitively coupled electric field stimulation. The study also noted that women in the treatment group healed more slowly than men, and that superior treatment compliance was associated with reduced time to healing, while rest noncompliance was associated with increased time to healing (PUBMED:18055921). Another study, an open report on the treatment of stress fractures in athletes by capacitive coupling, showed that out of 25 lower-limb stress fractures in 21 athletes, 22 fractures healed, one did not heal, and two were improved. This suggests that capacitive coupling can be used safely in the treatment of these stress fractures (PUBMED:7641431). In contrast, a study on the effect of pulsed ultrasound, a different biophysical stimulation modality (mechanical rather than electrical), did not find a significant reduction in healing time for tibial stress fractures (PUBMED:15566133). Overall, while there is some evidence that capacitively coupled electric fields may be beneficial for the healing of more severe tibial stress fractures, especially with good compliance and reduced weightbearing, the evidence is not conclusive, and further research may be needed to fully understand the efficacy of this treatment modality.
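The trials summarized above treat "time to healing" as a time-to-event endpoint, so group comparisons rest on survival-curve estimates. The sketch below is a minimal pure-Python Kaplan-Meier estimator for such an endpoint, where the estimated "survival" probability is the probability of remaining unhealed at each time; all times and censoring flags are hypothetical, not data from the cited studies.

```python
# Minimal Kaplan-Meier sketch for a time-to-healing endpoint. Here "survival"
# is the probability of remaining unhealed; values below are hypothetical.
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up in weeks; events: 1 = healing observed, 0 = censored.
    Returns (event times, estimated probability of remaining unhealed)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    for t in np.unique(times):
        healed_at_t = int(np.sum((times == t) & (events == 1)))
        if healed_at_t > 0:
            surv *= 1.0 - healed_at_t / at_risk  # KM product-limit step
            out_t.append(t)
            out_s.append(surv)
        at_risk -= int(np.sum(times == t))  # drop everyone observed at t
    return out_t, out_s

# Hypothetical weeks-to-healing (event=1) with one censored subject (event=0).
weeks, healed = [4, 5, 6, 6, 8, 10], [1, 1, 1, 1, 1, 0]
print(kaplan_meier(weeks, healed))
# -> ([4.0, 5.0, 6.0, 8.0], [0.833..., 0.666..., 0.333..., 0.166...])
```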
Instruction: Is intra-operative gamma probe detection really necessary for inguinal sentinel lymph node biopsy? Abstracts: abstract_id: PUBMED:11120546 Is intra-operative gamma probe detection really necessary for inguinal sentinel lymph node biopsy? Context: Sentinel node (SN) biopsy has changed the surgical treatment of malignant melanoma. The literature has emphasized the importance of gamma probe detection (GPD) of the SN. Objective: Our objective was to evaluate the efficacy of patent blue dye (PBD) and GPD for SN biopsy in different lymphatic basins. Design: Patients with cutaneous malignant melanoma in stages I and II were submitted to biopsy of the SN, identified by PBD and GPD, as part of a research project. Setting: Patients were seen at Hospital São Paulo by a multidisciplinary group (Plastic Surgery Tumor Branch, Nuclear Medicine and Pathology). Patients: 64 patients with localized malignant melanoma were studied. The median age was 46.5 years. The primary tumor was located in the neck, trunk or extremities. Interventions: Preoperative lymphoscintigraphy, lymphatic mapping with PBD and intraoperative GPD were performed on all patients. The SN was examined by conventional and immunohistochemical staining. If the SN was not found or contained micrometastases, only complete lymphadenectomy was performed. Main Measurements: The SN was identified by PBD if it was blue-stained, and by GPD if it demonstrated activity five times greater than that of the surrounding adipose tissue. Results: Seventy lymphatic basins were explored. Lymphoscintigraphy showed ambiguous drainage in 7 patients. GPD identified the SN in 68 basins (97%) and PBD in 53 (76%). PBD and GPD identified the SN in 100% of the inguinal basins. For the remaining basins both techniques were complementary. A metastatic SN was found in 10 basins. Three patients with negative SN had recurrence (median follow-up = 11 months). Conclusion: Although both GPD and PBD are useful and complementary, PBD alone identified the SN in 100% of the inguinal lymphatic basins. abstract_id: PUBMED:14690575 Intra-operative radionuclide lymphatic mapping in sentinel lymph node biopsy of breast cancer. Objective: To study intra-operative radionuclide lymphatic mapping in sentinel lymph node (SLN) biopsy of breast cancer. Methods: Forty-two breast cancer (diameter ≤ 5.0 cm) patients with clinically negative axillary nodes were analyzed. 18.5 MBq/0.5 ml technetium-99m labeled sulfur colloid (labeling yield > 98%, colloid granule size approximately 100-200 nm) was injected into the mammary tissue around the tumor or biopsy site at four points. The SLN in breast cancer was detected and dissected with the help of lymphoscintigraphy and an intra-operative gamma probe. Routine lymph node dissection was performed for all patients. Results: The detection rate was 88.1% for lymphoscintigraphy and 97.6% for intra-operative gamma probe detection of the SLN in breast cancer. The sensitivity, accuracy, false-negative rate and specificity of SLN biopsy were 93.3% (14 in 15), 97.6% (40 in 41), 6.7% (1 in 15) and 100% (26 in 26). Conclusion: The sentinel lymph node in breast cancer, detected by preoperative lymphoscintigraphy combined with an intra-operative gamma probe, is able to predict regional lymph node metastasis. The success rate of SLN biopsy can be raised by improvements in the quality of the nuclear imaging agent, the technique of injection and the method of measurement.
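The diagnostic statistics quoted in PUBMED:14690575 follow directly from the reported counts; the short sketch below reproduces them from the 2x2 table implied by "14 in 15", "40 in 41", "1 in 15" and "26 in 26".

```python
# Worked check of the SLN biopsy statistics reported in PUBMED:14690575,
# reconstructed from the quoted counts.
true_positive = 14    # node-positive patients with a positive SLN
false_negative = 1    # node-positive patients with a negative SLN
true_negative = 26    # node-negative patients with a negative SLN
false_positive = 0    # a positive SLN in a node-negative patient did not occur

sensitivity = true_positive / (true_positive + false_negative)            # 14/15
specificity = true_negative / (true_negative + false_positive)            # 26/26
accuracy = (true_positive + true_negative) / (
    true_positive + false_negative + true_negative + false_positive)      # 40/41
false_negative_rate = false_negative / (true_positive + false_negative)   # 1/15

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"accuracy={accuracy:.1%}, FNR={false_negative_rate:.1%}")
# sensitivity=93.3%, specificity=100.0%, accuracy=97.6%, FNR=6.7%
```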
abstract_id: PUBMED:33469855 Development and characterization of an all-in-one gamma probe with auto-peak detection for sentinel lymph node biopsy based on NEMA NU3-2004 standard. Background: A gamma probe is a handheld device used for intraoperative interventions following interstitial injection of a radiotracer to locate regional lymph nodes through the external detection of radiation. This work reports on the design and performance evaluation of a novel fully integrated gamma probe (GammaPen), recently developed by our group. Materials And Methods: GammaPen is an all-in-one pocket gamma probe with low weight and adequate dimensions, consisting of a detector, a control unit and an output unit all together. The detector module consists of a cylindrical Thallium-activated Cesium Iodide [CsI (Tl)] crystal optically coupled to a Silicon photomultiplier (SiPM), shielded using Tungsten housing on the side and back faces. The electronics of the probe consist of two small boards to handle signal processing and analog peak detection tasks. A number of parameters, including probe sensitivity in air/water, spatial resolution in air/water, angular resolution in air/water, and side and back shielding effectiveness, were measured to evaluate the performance of the probe based on NEMA NU3-2004 standards. Results: The sensitivity of the probe in air at distances of 10, 30, and 50 mm is 18784, 3500, and 1575 cps/MBq. The sensitivity in scattering medium was also measured at distances of 10, 30, and 50 mm as 17,680, 3050, and 1104 cps/MBq. The spatial and angular resolutions in scattering medium were 47 mm and 87 degrees at 30 mm distance from the probe, while they were 40 mm and 77 degrees in air. The detector shielding effectiveness and leakage sensitivity are 99.91% and 0.09%, respectively. Conclusion: The performance characterization showed that GammaPen can be used effectively for sentinel lymph node localization. The probe was successfully used in several surgical interventions by an experienced surgeon, confirming its suitability in a clinical setting. abstract_id: PUBMED:20414573 Brazilian gamma detection device for sentinel lymph node biopsy. Objective: To test the effectiveness of a Brazilian intra-operative gamma detection device (IPEN) in sentinel lymph node biopsy (SLNB) procedures. Methods: Forty melanoma or breast cancer patients with an indication for SLNB were studied. Lymphoscintigraphy was done 2 to 24 hours prior to surgery. Lymphatic mapping with vital dye and gamma detection were performed intraoperatively. For gamma detection, the Neoprobe 1500 was used followed by the IPEN device (the equipment under test) in the first 20 patients; for the remaining half, the IPEN device was used first to verify its ability to locate the sentinel node (SN). Measurements were taken at the radiopharmaceutical injection site, at the SN (in vivo and ex vivo) and at background. Whether the SN was stained and whether it was easily found by the surgeon were recorded. Results: There were 33 (82.5%) breast cancer and 7 (17.5%) melanoma patients. Ages ranged from 21 to 68 years (median age of 46). The sex distribution was 35 (87.5%) women and 5 (12.5%) men. The sentinel node was found in all but one patient. There was no statistical difference between the ex vivo/background ratios obtained with the measurements from the two devices (p = 0.2583, not significant). The SN was easily found by the surgeon with both devices. Conclusion: The SLNB was successfully performed using either device.
It was possible to do SLNB with the Brazilian device developed by IPEN without detriment to the patient. abstract_id: PUBMED:16491216 Vital dye is enough for inguinal sentinel lymph node biopsy in melanoma patients. Purpose: The importance of gamma probe detection (GPD) combined with vital dye for sentinel node (SN) biopsy is well accepted. We evaluated the efficacy of patent blue dye (PBD) in identifying inguinal SN. Methods: Ninety-four cutaneous melanoma patients with lower-extremity lesions were submitted to SNB according to an established protocol. Patients were randomized into two groups: a Blue group, where the SN was identified by PBD, and a Probe group, where the SN was identified by GPD. The median age was 44.2 years and the median Breslow thickness was 2.1 mm. Preoperative lymphoscintigraphy, lymphatic mapping with PBD and intra-operative GPD were performed on all patients. Histological examination of the SN consisted of hematoxylin-eosin and immunohistochemical staining. If micrometastases were present, complete lymphadenectomy was performed. The SN was considered identified by PBD if it was blue, and identified by GPD if it demonstrated at least ten times greater radioactivity than background. Results: Ninety-four inguinal lymphatic basins were explored and 145 SNs were excised (70 guided primarily by blue dye and 75 guided primarily by probe). All SNs identified by preoperative lymphoscintigraphy were excised. In the Blue group, PBD identified all SNs and all of them were hot. In the Probe group, all SNs were identified by probe and were blue. The coincidence of PBD and GPD was 100%. Conclusion: Patent blue dye is enough to identify superficial inguinal SNs in cutaneous melanoma. abstract_id: PUBMED:16264354 Use of the gamma probe in sentinel lymph node biopsy in patients with prostate cancer. Objective: To describe the reproducibility of the sentinel lymph node technique in patients with prostate cancer and verify if there is improved accuracy over modified lymphadenectomy. Material And Methods: Twenty-three patients with biopsy-proven prostate cancer were enrolled in this study. Lymphoscintigraphy was performed after the transrectal administration of Tc sulfur colloid guided by ultrasound, with one injection in each prostate lobe. Images were obtained 15 and 180 min after injection. Sentinel lymph nodes were harvested during surgery using a gamma probe, followed by extended lymphadenectomy. Results: The mean age of the patients in this study was 66 years. An average of 3.36 sentinel lymph nodes was found for each patient. Radioactive lymph nodes were identified by the gamma probe in 21 out of 23 patients. In one of the patients there was no radiopharmaceutical migration from the injection site, and in another the sentinel lymph node was visualized by lymphoscintigraphy but was not found during surgery. Three patients had lymph node metastasis; in one of these patients the sentinel lymph node was the only positive node and was found outside the modified lymphadenectomy region (the obturator fossa and the external iliac). Conclusion: Sentinel lymph node biopsy in prostate cancer adds important information to the staging of patients, not always attained through lymphadenectomy restricted to the obturator fossa and external iliac. Such information is essential for the choice of the best treatment to be applied. abstract_id: PUBMED:12352400 Lymphatic mapping and gamma probe guided laparoscopic biopsy of sentinel lymph node in patients with clinical stage I testicular tumor.
Purpose: We attempted to detect lymphatic drainage and sentinel lymph nodes with a radioactive tracer in patients with testicular tumor. We then tried to determine if sentinel lymph node biopsy with a gamma probe guided laparoscopic procedure was feasible as a staging tool for patients with clinical stage I testicular tumor. Materials And Methods: Technetium-labeled phytate was injected around the tumor in 15 consecutive patients with clinical stage I testicular tumor. Lymphatic drainage and sentinel lymph nodes were imaged by a gamma camera. Localization of the sentinel lymph node was confirmed with a handheld gamma probe. After we confirmed that sentinel lymph nodes were detected in the initial 10 patients, gamma probe guided laparoscopic sentinel lymph node biopsy was performed after routine orchiectomy in the next 4. To confirm whether the radioactive node was really a sentinel lymph node, the final patient in this series underwent laparoscopic retroperitoneal lymph node dissection with a unilateral template. Results: Sentinel lymph nodes were detected in all patients by lymphoscintigraphy and handheld gamma probe, and the location of the nodes varied. In right-sided tumors, sentinel lymph nodes were detected at the inter-aortocaval, paracaval or common iliac region. Para-aortic lymph nodes were detected as sentinel lymph nodes in cases of left-sided tumors. In the 4 patients who underwent the gamma probe navigated laparoscopic procedure, sentinel lymph nodes were easily detected and safely removed for pathological examination. In the last patient, who underwent laparoscopic lymph node dissection, micrometastasis was found only at the sentinel lymph node. Conclusions: Sentinel lymph nodes can be detected by lymphoscintigraphy and a handheld gamma probe. Gamma probe guided laparoscopic biopsy of the sentinel lymph node is technically possible. These techniques may have a role in the management of clinical stage I testicular tumor, but further trials are required to establish the concept of the sentinel lymph node in testicular tumor. abstract_id: PUBMED:11289748 Gamma probe and ultrasonographically-guided fine-needle aspiration biopsy of sentinel lymph nodes in breast cancer patients. Aim: The purpose of the present study was to evaluate the usefulness of gamma probe and ultrasonographically-guided fine-needle aspiration biopsy (FNAB) in the pre-operative detection of sentinel node (SN) metastasis in breast cancer patients. Methods: Sentinel node biopsy (SNB) was performed in patients with stage I or II breast cancer with clinically negative nodes using dye and radio-isotope. The axillas of 60 patients in whom a hot spot was detected by gamma probe were examined by ultrasonography. Pre-operative diagnosis of SN metastasis by gamma probe and ultrasonographically-guided FNAB was compared with the histological results of the SN. Results: The sensitivity, specificity and overall accuracy of ultrasonography in the diagnosis of SN metastasis were 50.0%, 92.1% and 76.7%, respectively. SNs were visualized by ultrasonography in 29 of 60 patients. Of 14 patients with positive results by ultrasonography, four had positive and two had negative cytology. The combination of ultrasonography and ultrasonographically-guided FNAB for visualized nodes had a sensitivity of 78.5%, specificity of 93.3% and overall accuracy of 86.2%. Blind FNAB in the hot spot was not useful in the detection of SN metastasis in patients whose SNs failed to be detected by ultrasonography.
Conclusions: Gamma probe and ultrasonographically-guided FNAB is a potentially useful method for the pre-operative detection of SN metastasis. In patients with positive SNs, SNB is not indicated and complete axillary lymph-node dissection can be performed as a primary procedure. abstract_id: PUBMED:23077465 Inguinal or inguino-iliac/obturator lymph node dissection after positive inguinal sentinel lymph node in patients with cutaneous melanoma. Background: The aim of the study was to determine whether the presence of inguinal sentinel lymph node (SLN) metastases smaller than 2 mm (micrometastases), subdivided according to the number of micrometastases, predicts additional non-sentinel inguinal, iliac or obturator lymph node involvement in completion lymph node dissection (CLND). Patients And Methods: A positive inguinal SLN was detected in 58 patients (32 female, 26 male, median age 55 years) from 743 consecutive and prospectively enrolled patients with primary cutaneous melanoma stage I and II who were treated with SLN biopsy between 2001 and 2007. Results: Micrometastases in the inguinal SLN were detected in 32 patients; 14 were single, 2 were double, and 16 were multiple. Twenty-six patients had macrometastases. Conclusions: No patient with any micrometastases or a single SLN macrometastasis in the inguinal region had any iliac/obturator non-sentinel metastases after CLND in our series. Furthermore, no patient with a single SLN micrometastasis in the inguinal region had any non-sentinel metastases at all after CLND in our series. In these cases, the respective CLND might be omitted. abstract_id: PUBMED:15278236 Sentinel lymph node in melanoma. The procedure of sentinel node biopsy (SNB) has emerged as an important advance, especially with respect to the staging of malignant melanoma. Elective (prophylactic) lymph node dissection, which had been practiced in primary melanoma with a suspected increased risk of (clinically occult) lymphatic metastasis, has been replaced by SNB. Patients with proven metastatic involvement of the sentinel node (12-25%) can be specifically selected for regional lymph node dissection. Metastatic involvement of the sentinel node (SN) is a significant independent prognostic factor. The value of detecting metastasis by highly sensitive diagnostic tools such as RT-PCR is just as uncertain as is the therapeutic benefit of operative or conservative therapies in sentinel node-positive patients with respect to improving prognosis, and is currently under study. Answer: Intra-operative gamma probe detection (GPD) has been a standard technique for identifying sentinel lymph nodes (SLNs) during surgeries for various cancers, including melanoma and breast cancer. However, the necessity of GPD specifically for inguinal sentinel lymph node biopsy has been questioned. Studies have shown that patent blue dye (PBD) alone identified the sentinel lymph node (SN) in 100% of the inguinal lymphatic basins, suggesting that GPD may not be necessary in this specific area (PUBMED:11120546). Another study confirmed that PBD was sufficient to identify superficial inguinal SN in cutaneous melanoma, with a 100% coincidence rate between PBD and GPD (PUBMED:16491216). These findings indicate that for inguinal sentinel lymph node biopsy, particularly in melanoma patients, the use of PBD alone could be adequate. However, it is important to note that in other lymphatic basins, the combination of GPD and PBD was found to be complementary, enhancing the detection of SNs (PUBMED:11120546).
Additionally, in breast cancer, intra-operative gamma probe detection has been shown to have a high detection rate and is able to predict regional lymph node metastasis effectively (PUBMED:14690575). The use of a gamma probe has also been reported to add important information to the staging of prostate cancer patients (PUBMED:16264354). In conclusion, while intra-operative gamma probe detection is a valuable tool in sentinel lymph node biopsy for various cancers, its necessity specifically for inguinal sentinel lymph node biopsy in melanoma patients may not be as critical, given that PBD alone has been shown to be effective in identifying SNs in this region. However, the decision to use GPD should be based on the surgeon's experience, the type of cancer, and the location of the lymphatic basins being explored.
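The two melanoma studies cited in this answer used explicit intraoperative rules for calling a node "sentinel": blue staining for PBD, and a node-to-background count ratio for GPD (five times the surrounding adipose tissue in PUBMED:11120546, at least ten times background in PUBMED:16491216). A minimal sketch of that combined rule, with the ratio threshold as a parameter and illustrative count rates, might look as follows.

```python
# Sketch of the intraoperative decision rules described above. The count-ratio
# threshold differed between studies (5x background in PUBMED:11120546,
# 10x in PUBMED:16491216), so it is a parameter; the count rates are invented.
def is_sentinel_node(blue_stained: bool,
                     node_counts_cps: float,
                     background_counts_cps: float,
                     ratio_threshold: float = 10.0) -> bool:
    """A node qualifies as sentinel if it is blue-stained (PBD criterion)
    or its count rate exceeds ratio_threshold times background (GPD criterion)."""
    if background_counts_cps <= 0:
        raise ValueError("background counts must be positive")
    hot = node_counts_cps / background_counts_cps >= ratio_threshold
    return blue_stained or hot

# Example: a blue, moderately hot node and a non-blue, clearly hot node.
print(is_sentinel_node(True, 450.0, 90.0))    # True via dye (ratio only 5x)
print(is_sentinel_node(False, 1200.0, 90.0))  # True via >= 10x count ratio
```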
Instruction: Results of laparoscopic radiofrequency ablation for HCC. Could the location of the tumour influence a complete response to treatment? Abstracts: abstract_id: PUBMED:25545319 Results of laparoscopic radiofrequency ablation for HCC. Could the location of the tumour influence a complete response to treatment? A single European centre experience. Introduction: In selected patients, radiofrequency ablation (RFA) is a well-established treatment for hepatocellular carcinoma (HCC). However, subcapsular lesions or lesions close to adjacent viscera preclude a percutaneous approach. In this setting, laparoscopic RFA (LRFA) is a potential alternative. The aim of this study was to analyse the safety and feasibility of LRFA in patients with HCC. Patients And Methods: Retrospective study of patients with HCC meeting strict inclusion criteria who underwent LRFA at a single institution from December 2000 to March 2013. Results: Forty-one patients underwent 42 LRFA procedures for 51 nodules. The median size of the nodules was 2.5 (range 1.2-4.7) cm. Thirty-one tumours were subcapsular and 17 were located near the gallbladder. Major complications occurred in 17 patients. The initial complete response (ICR) rate was 94% and was lower among tumours located adjacent to the gallbladder. At the end of the follow-up period, the sustained complete response (SCR) rate was 70% and was lower in tumours adjacent to the gallbladder, while it increased for subcapsular tumours. The 1-, 3- and 5-year overall survival rates were 92.6%, 64.5% and 43%, respectively. Conclusion: LRFA of HCC is safe, feasible and achieves excellent results in selected patients. LRFA should be the first-line technique for subcapsular lesions as it minimizes the risk of tumoural seeding and improves ICR. Proximity to the gallbladder interferes with treatment efficacy (lower rate of ICR and lower rate of SCR). abstract_id: PUBMED:35116332 The effects of ultrasound-guided radiofrequency ablation and laparoscopic hepatectomy in the treatment of small hepatocellular carcinoma: a retrospective analysis. Background: Hepatocellular carcinoma (HCC) is a commonly seen cancer in the clinical setting. Laparoscopic hepatectomy (LH) and radiofrequency ablation (RFA) are used for the treatment of HCC, yet the differences in efficacy and safety between LH and RFA for HCC treatment need further investigation. Methods: This was a retrospective study. HCC patients treated with LH or RFA in our hospital were identified. All the patients underwent 2-year follow-up. The characteristics and details of the LH and RFA groups during hospitalization were collected and compared. The Kaplan-Meier method was used to calculate cumulative survival, and the survival curves were drawn and compared. Results: A total of 94 HCC patients were included. The duration of surgery, estimated intraoperative blood loss, pain score on the first day after surgery, time to get out of bed after the operation, time to oral eating, aspartate transaminase (AST) on the second day after surgery, C-reactive protein (CRP) on the second day after surgery and total medical cost in the RFA group were significantly lower than those in the LH group (all P<0.05). The incidence of abdominal infection and biliary fistula in the RFA group was significantly lower than in the LH group (all P<0.05), and there was no significant difference in the incidence of bleeding and pleural effusion between the two groups (all P>0.05). The 2-year overall survival and recurrence-free survival of the two groups showed no significant statistical difference (all P>0.05).
Conclusions: RFA and LH have similar effects in the treatment of small HCC, and RFA has the advantages of less trauma, shorter operation duration, and quicker postoperative recovery. abstract_id: PUBMED:36947227 Outcomes of laparoscopic radiofrequency ablation versus percutaneous radiofrequency ablation for hepatocellular carcinoma. Background: Few studies have compared the therapeutic outcomes of patients with hepatocellular carcinoma (HCC) who underwent laparoscopic radiofrequency ablation (LRFA) versus percutaneous radiofrequency ablation (PRFA). Therefore, this study compared the recurrence and survival outcomes of the two RFA methods in patients with HCC. Methods: Recurrence and overall survival outcomes were evaluated in 307 patients who underwent LRFA (n = 151) or PRFA (n = 156) as a treatment method for de novo HCC. Inverse probability of treatment weighting (IPTW) analysis was performed to reduce the impact of treatment selection bias. Results: There were no significant differences in major baseline characteristics between the LRFA and PRFA groups. However, the proportion of cirrhotic patients was higher in the LRFA group, whereas the LRFA group had more tumors and a more advanced tumor-node-metastasis stage. Moreover, the mean tumor size was significantly larger in the LRFA group than in the PRFA group. In a multivariate analysis, serum albumin level, more than three tumors, and the RFA method were identified as significant predictors of recurrence-free survival. Moreover, for the overall survival of HCC patients, serum albumin levels, days of hospital stay during RFA, and the RFA method were independent predictors. In the IPTW-adjusted analysis, the LRFA group showed significantly higher recurrence-free survival and overall survival. Conclusions: Our study revealed that compared with PRFA, LRFA was associated with longer recurrence-free survival and favorable overall survival in patients with HCC. Therefore, LRFA should be considered the primary therapy in patients with HCC eligible for RFA. abstract_id: PUBMED:34691297 Efficacy of radiofrequency ablation versus laparoscopic liver resection for hepatocellular carcinoma in China: a comprehensive meta-analysis. Introduction: Hepatocellular carcinoma (HCC) has been the second leading cause of cancer-related death in China. Radiofrequency ablation is a relatively novel treatment that may improve the treatment of HCC. Aim: To evaluate and compare the efficacy and safety of radiofrequency ablation (RFA) versus laparoscopic liver resection (LLR) in the treatment of HCC. Material And Methods: We searched for relevant published studies in English (PubMed, Cochrane Library, EMBASE) and in Chinese (CBM, CNKI and Wanfang) from their inception until September 23, 2019. The quality of included studies was evaluated by the Newcastle-Ottawa Scale. Results: A total of 19 retrospective studies including 2038 patients were eligible for the meta-analysis. The results of the meta-analysis demonstrated that LLR was superior to RFA in terms of the 3-year overall survival rate (OR = 0.62), 1- to 3-year disease-free survival rates (OR = 0.57; OR = 0.41, respectively) and local recurrence rates (OR = 2.71). Conclusions: The meta-analysis demonstrates that laparoscopic liver resection should be preferred for tumors of 3-5 cm in size, while for tumors < 3 cm the long-term results are equal. abstract_id: PUBMED:35209837 Balloon-Occluded Radiofrequency Ablation as Bridge to TACE in the Treatment of Advanced HCC with Arterioportal Shunt.
Background: Transarterial chemoembolization is the most widely used palliative treatment for unresectable hepatocellular carcinoma; however, arterioportal shunt represents a contraindication to this treatment. Objective: The study aims to assess the feasibility of balloon-occluded radiofrequency ablation for the transitory resolution of an extensive arterioportal shunt in patients with advanced hepatocellular carcinoma, as a bridge to safe and effective transarterial chemoembolization. Methods: Twelve consecutive patients with advanced multinodular unilobar unresectable hepatocellular carcinoma and a target lesion larger than 5 cm (mean diameter 7.7 ± 1.4 cm), not suitable for transarterial chemoembolization due to extensive arterioportal shunt, were recruited. Balloon-occluded radiofrequency ablation of the hepatic area surrounding the shunt, during occlusion of the artery supplying the shunt, was performed, followed by lobar conventional chemoembolization. Intra/periprocedural complications were evaluated. Technical success was defined by the result of radiofrequency ablation in terms of immediate disappearance, reduction, or persistence of the shunt. Local efficacy of chemoembolization was evaluated at 1-month computed tomography according to m-RECIST criteria. Results: Technical success was achieved in all patients. No major complications were observed. One-month follow-up showed a mean necrotic diameter of 6.3 cm (range: 3.8-8.7 cm), with an acceptable procedural result and persistence of the shunt. An overall response was obtained in all patients, with 25% complete response and 75% partial response. Conclusion: Balloon-occluded radiofrequency ablation of an arterioportal shunt in patients with advanced hepatocellular carcinoma can temporarily reduce shunting, allowing safe and therapeutically useful chemoembolization to be performed, with satisfactory control of tumor growth. abstract_id: PUBMED:23396827 Laparoscopic radiofrequency ablation of hepatocellular carcinoma: A critical review from the surgeon's perspective. Background: The treatment of choice for hepatocellular carcinoma (HCC) is surgical resection, but only a small percentage of patients are operative candidates. Percutaneous radiofrequency interstitial thermal ablation (RFA) has proved to be effective in the treatment of unresectable HCC. However, there is a sub-group of patients who may benefit from a laparoscopic rather than a percutaneous approach. Laparoscopic RFA offers the combined advantages of improved tumor staging based on the intracorporeal ultrasound examination and safer access to liver lesions that are difficult or impossible to treat with a percutaneous approach. The aim of our review was to evaluate the advantages and limitations of the laparoscopic approach, according to the criteria of evidence-based medicine. Conclusions: Laparoscopic RFA of HCC proved to be a safe and effective technique, at least in terms of the short- and mid-term results. This technique may be indicated in selected cases of HCC when percutaneous RFA is very difficult or contraindicated. abstract_id: PUBMED:34843653 Transcatheter chemoembolization plus percutaneous radiofrequency ablation versus laparoscopic radiofrequency ablation: improved outcome for inoperable hepatocellular carcinoma.
Aims: To retrospectively compare the efficacy of transcatheter chemoembolization (TACE) plus percutaneous radiofrequency ablation (PRFA) (hereafter, TACE + PRFA) and laparoscopic radiofrequency ablation (LRFA) in the treatment of inoperable hepatocellular carcinoma (HCC). Methods: From July 2014 to December 2017, 132 consecutive patients with inoperable HCC were treated with TACE + PRFA (n = 86) or LRFA (n = 46). Overall survival (OS) and recurrence-free survival (RFS) were analyzed using the log-rank test and Cox regression analysis. Propensity score matched (PSM) analyses based on patient and tumor characteristics were also conducted. Additionally, we performed exploratory analyses to determine the effectiveness of TACE + PRFA and LRFA in clinically relevant subsets. Results: The baseline characteristics of TACE + PRFA patients showed relatively inferior liver status and a higher rate of BCLC-B disease. For unmatched patients, median OS (55.0 vs. 42.0 months; p = .019) and RFS (20.0 vs. 11.0 months; p < .001) were significantly longer in the TACE + PRFA group than in the LRFA group. After PSM, 39 matched pairs were identified. The difference in median OS (60.0 vs. 44.0 months; p = .009) and RFS (27.0 vs. 11.0 months; p < .001) between the two groups remained significant. Multivariate analysis in matched patients showed that treatment modality and response to initial treatment were significant predictors of OS and RFS, while recurrence after resection was an independent prognostic factor of OS. The benefits of TACE + PRFA were consistent across all the subgroups examined. The two treatments shared similar complication rates. Conclusions: Compared to LRFA, TACE + PRFA results in improved OS and RFS in patients not amenable to resection. abstract_id: PUBMED:30309001 Tumor Location Influences Oncologic Outcomes of Hepatocellular Carcinoma Patients Undergoing Radiofrequency Ablation. Radiofrequency ablation (RFA) is recommended as a first-line therapy for small hepatocellular carcinoma (HCC). Tumor location is a potential factor influencing the procedure of RFA. To compare oncologic outcomes of RFA for different tumor locations, this retrospective study enrolled 194 patients with small HCC who had undergone RFA. The HCC nodules were classified as peri-hepatic-vein (pHV) or non-pHV, peri-portal-vein (pPV) or non-pPV, and subcapsular or non-subcapsular HCC. The regional recurrence-free survival (rRFS), overall survival (OS), recurrence-free survival (recurrence in any location, RFS) and distant recurrence-free survival (dRFS) were compared. Operation failures were recorded in five pPV HCC patients, which was more frequent than in non-pPV HCC patients (p = 0.041). The 1-, 3-, and 5-year rRFS was 68.7%, 53.7%, and 53.7% for pHV patients and 85.1%, 76.1%, and 71.9% for non-pHV patients, respectively (p = 0.012). After propensity score matching, the 1-, 3-, and 5-year rRFS of pHV patients was still worse than that of non-pHV patients (p = 0.013). The OS, RFS, and dRFS were not significantly different between groups. Conclusions: A pHV location was a risk factor for regional recurrence after RFA in small HCC patients. The tumor location may not influence OS, RFS, and dRFS. Additionally, a pPV location was a potential high-risk factor for incomplete ablation. abstract_id: PUBMED:32458219 Successful treatment of hepatocellular carcinoma by laparoscopic radiofrequency ablation in a patient with hemophilia A.
Although percutaneous radiofrequency ablation (RFA) is a good indication for hepatocellular carcinoma (HCC) in cases involving ≤ 3 tumors of ≤ 30 mm in size, many hepatologists are hesitant to perform the procedure for patients with hemorrhagic disorders. We herein report the successful treatment of HCC by laparoscopic RFA in a patient with hemophilia A. A 48-year-old man with a moderate form of hemophilia A had a single HCC at segment 8. To perform laparoscopic RFA safely, recombinant factor VIII (rFVIII) was administered to maintain factor VIII activity (FVIII:C) > 80% on the operation day and > 40% for 6 days after the operation, in accordance with the guidelines. A total of 23,000 units of rFVIII was used. Laparoscopic RFA was completed with an operation time of 105 min and < 10 mL of blood loss. As a result, blood transfusion was not required. At 2 years after the initial treatment, HCC recurred at segment 7. Under rFVIII supplementation, we performed a second laparoscopic RFA without any adverse events. Although partial hepatectomy is the main procedure used to treat HCC in patients with hemophilia, laparoscopic RFA allowed us to reduce the total use of rFVIII, blood loss and operation time compared with partial hepatectomy. abstract_id: PUBMED:23993049 Laparoscopic radiofrequency ablation of solitary small hepatocellular carcinoma in the caudate lobe. Objective: Surgical resection in the treatment of hepatocellular carcinoma (HCC) originating in the caudate lobe is challenging because of its deep location in the liver and possibly worse prognosis. We evaluated the overall survival of patients with solitary small caudate HCC who underwent laparoscopic radiofrequency ablation (RFA). Methods: This is a retrospective study of patients who underwent laparoscopic RFA for solitary small HCC. Results: Twenty-seven patients underwent laparoscopic caudate lobe RFA for solitary small HCC. The average tumor size was 2.8 cm. The overall survival rates were 96.3%, 88.9%, 74.1%, 74.1% and 62.9% at 1, 2, 3, 4 and 5 years, respectively. The disease-free survival after RFA was 92.6%, 52.9%, 44.4%, 33.3% and 33.3% at 1, 2, 3, 4 and 5 years, respectively. The most common postoperative complication was pleural effusion (7/27, 25.9%), followed by transient hemoglobinuria (2/27, 7.4%). Conclusions: Laparoscopic RFA for small caudate lobe HCC is a safe and feasible procedure without perioperative mortality. In a systematic review of other therapeutic options for caudate HCC, its overall outcome was comparable to that of surgical resection. Answer: Yes, the location of the tumor can influence the complete response to treatment when using laparoscopic radiofrequency ablation (LRFA) for hepatocellular carcinoma (HCC). According to the results of a single European centre experience, the initial complete response (ICR) rate was lower among tumors located adjacent to the gallbladder, and the sustained complete response (SCR) rate was also lower in tumors adjacent to the gallbladder. However, the SCR rate increased for subcapsular tumors, suggesting that LRFA should be the first-line technique for subcapsular lesions as it minimizes the risk of tumor seeding and improves ICR (PUBMED:25545319). Additionally, a study on tumor location influencing oncologic outcomes of HCC patients undergoing radiofrequency ablation (RFA) found that a peri-hepatic-vein (pHV) location was a risk factor for regional recurrence after RFA in small HCC patients.
The regional recurrence-free survival (rRFS) was worse for pHV patients compared to non-pHV patients. Moreover, a peri-portal-vein (pPV) location was a potential high-risk factor for incomplete ablation (PUBMED:30309001). These findings indicate that the location of HCC tumors can indeed affect the outcomes of LRFA, with certain locations such as adjacent to the gallbladder or peri-hepatic-vein posing challenges for achieving a complete response and potentially leading to higher recurrence rates.
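One methodological detail worth noting: the LRFA-versus-PRFA comparison cited in this thread (PUBMED:36947227) relied on inverse probability of treatment weighting (IPTW) to reduce selection bias. The sketch below shows the basic mechanics of IPTW (a propensity model, then 1/p weights for treated and 1/(1-p) weights for controls) using scikit-learn; all covariates and values are synthetic placeholders, not the study's data.

```python
# Minimal IPTW sketch of the kind used in PUBMED:36947227 to reduce selection
# bias between LRFA and PRFA. All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical covariates: tumor size (cm), tumor count, cirrhosis (0/1).
X = np.column_stack([rng.normal(2.5, 0.8, n),
                     rng.integers(1, 4, n),
                     rng.integers(0, 2, n)])
treated = rng.integers(0, 2, n)  # 1 = LRFA, 0 = PRFA (placeholder assignment)

# Propensity score: modeled probability of receiving LRFA given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# IPTW weights: 1/ps for treated patients, 1/(1-ps) for controls.
weights = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
print("effective sample size:", weights.sum() ** 2 / (weights ** 2).sum())
```

In the weighted pseudo-population, treated and control groups are balanced on the modeled covariates, which is what allows the adjusted survival comparison reported in the abstract.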
Instruction: Value of fluorodeoxyglucose positron emission tomography before radiotherapy for head and neck cancer: does the standardized uptake value predict treatment outcome? Abstracts: abstract_id: PUBMED:19626409 Value of fluorodeoxyglucose positron emission tomography before radiotherapy for head and neck cancer: does the standardized uptake value predict treatment outcome? Purpose: The aim of this study was to determine if the standardized uptake value (SUV) of fluorodeoxyglucose positron emission tomography (FDG-PET) for head and neck cancer can predict the outcome of radiotherapy and if the SUV is correlated with histological grade, mitosis, and apoptosis. Materials And Methods: The study included 45 head and neck cancer patients who underwent FDG-PET scanning before radiotherapy. The maximum SUV (SUVmax) of each primary lesion was measured. Biopsy was performed in all patients to determine the histological diagnosis. Altogether, 14 biopsy specimens were available for mitotic and apoptotic cell counts. Results: The mean SUVmax of T3 tumors was significantly higher than that of T1 (P = 0.01) and T2 (P = 0.011) tumors. The mean SUVmax of stage II disease was significantly lower than that of stage III (P = 0.028) and stage IV (P = 0.007) disease. There was a tendency toward a better locoregional control rate and disease-free survival for the lower SUV group using a cutoff value of 5.5. For 41 patients with squamous cell carcinoma or undifferentiated carcinoma, SUVmax did not reflect the histological grade. There was no correlation between the SUVmax and the mitotic/apoptotic status. Conclusion: SUVmax may correlate with the T classification and stage, but it had no predictive value for the outcome of radiation therapy. Neither histological grading nor mitotic/apoptotic status is correlated with SUVmax. abstract_id: PUBMED:23790108 Clinicopathological determinants of positron emission tomography computed tomography fluorodeoxyglucose standardised uptake value in head and neck carcinoma. Background: Although positron emission tomography computed tomography has proven diagnostic and staging value in head and neck carcinoma, it does not have optimal sensitivity or specificity. The positron emission tomography computed tomography fluorodeoxyglucose standardised uptake value has been shown to be associated with carcinoma stage. This study evaluated the impact of major clinicopathological factors on the standardised uptake value at the primary site and at neck lymph node metastases. Subjects And Methods: Two hundred and forty-three oral cavity and laryngopharyngeal carcinoma patients who underwent positron emission tomography computed tomography were included. The correlation between the positron emission tomography computed tomography standardised uptake value and various clinicopathological factors was analysed. Results: A positive correlation was found between the standardised uptake value and the size and depth of tumour infiltration, and lymph node positivity. Higher standardised uptake values were seen for more advanced tumour stages. Perineural invasion, lymphatic invasion and extracapsular spread were all associated with increased standardised uptake values. Conclusion: Most of the clinicopathological features of head and neck carcinoma that are well known to be poor prognostic factors have a significant impact on the positron emission tomography computed tomography fluorodeoxyglucose standardised uptake value.
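Both abstracts above dichotomize patients by an SUV threshold, so it may help to make the underlying quantity explicit. The body-weight-normalized SUV behind cutoffs such as the 5.5 used in PUBMED:19626409 is a simple ratio; a minimal sketch, assuming decay-corrected inputs and illustrative numbers, follows.

```python
# Body-weight-normalized SUV, the quantity behind SUVmax cutoffs such as the
# 5.5 used in PUBMED:19626409. Inputs are assumed decay-corrected to the scan
# time; the numbers below are illustrative, not patient data.
def suv_body_weight(tissue_kbq_per_ml: float,
                    injected_dose_mbq: float,
                    body_weight_kg: float) -> float:
    """SUV = tissue activity concentration / (injected dose / body weight).
    With dose in MBq and weight in kg, dose/weight is in kBq/g, and 1 mL of
    tissue is taken as ~1 g, so the ratio is dimensionless."""
    return tissue_kbq_per_ml * body_weight_kg / injected_dose_mbq

suv_max = suv_body_weight(tissue_kbq_per_ml=30.0,
                          injected_dose_mbq=370.0,
                          body_weight_kg=70.0)
print(f"SUVmax = {suv_max:.1f}; above 5.5 cutoff: {suv_max > 5.5}")  # 5.7; True
```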
abstract_id: PUBMED:25524466 Pretreatment maximum standardized uptake value of 18F-fluorodeoxyglucose positron emission tomography as a predictor of distant metastasis in adenoid cystic carcinoma of the head and neck. Background: The purpose of this study was to determine whether the maximum standardized uptake value (SUVmax) of the primary tumor on pretreatment 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has prognostic significance in patients with adenoid cystic carcinoma (ACC) of the head and neck. Methods: A retrospective review was carried out on 34 patients with ACC of the head and neck who underwent pretreatment 18F-FDG PET imaging from June 2005 through July 2009. All patients underwent surgery with curative intent, and 26 of them received adjuvant radiotherapy (RT). Results: When subjects were stratified into 2 groups according to a cutoff value for SUVmax of 4.15, the risk of distant metastasis was significantly higher in patients with high SUVmax (p = .014). Multivariate analysis showed that high SUVmax and histologic grade 3 were independent poor prognostic factors for distant metastasis-free and disease-free survival. Conclusion: Pretreatment SUVmax of the primary tumor is an independent prognostic factor in patients with ACC of the head and neck. abstract_id: PUBMED:30768363 Multicenter Trial of [18F]fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Staging of Head and Neck Cancer and Negative Predictive Value and Surgical Impact in the N0 Neck: Results From ACRIN 6685. Purpose: The objective of this study was to determine the negative predictive value (NPV) of positron emission tomography (PET)/computed tomography (CT) for the clinically N0 neck on the basis of neck dissection. Methods: Participants with newly diagnosed, first-time, head and neck squamous cell carcinoma (HNSCC) and at least one clinically N0 neck side for which dissection was planned were included. A total of 287 participants were prospectively enrolled from 23 American College of Radiology Imaging Network-qualified institutions. PET/CT was compared with findings at neck dissection. Results: PET/CT scans and pathology findings were available for 270 N0 neck sides from 212 participants. For visual assessment, the NPV specific to the clinical-N0 sides was 0.868 (95% CI, 0.803 to 0.925). For dichotomized maximum standardized uptake value, the NPVs specific to the nodal basins were 0.940 (95% CI, 0.928 to 0.952) and 0.937 (95% CI, 0.925 to 0.949) at prespecified cutoffs of 2.5 and 3.5, respectively. The optimal cutoff maximum standardized uptake value was determined to be 1.8, with an NPV of 0.942 (95% CI, 0.930 to 0.953). The PET/CT-informed surgical treatment plan was changed in 51 of 237 participants (22%) compared with the PET/CT-blinded surgical plan. In 34 participants (14%), this led to planned dissection of additional nodal levels. In 12 participants (5%), this led to fewer planned dissected nodal levels. Negative PET/CT scans in N0 necks were true negative in 87% and false negative in 13% of cases. Conclusion: [18F]fluorodeoxyglucose-PET/CT has high NPV for the N0 neck in T2 to T4 HNSCC. The surgical treatment plans on the basis of PET/CT findings may be changed in approximately 22% of this group. These findings suggest that [18F]fluorodeoxyglucose-PET/CT may assist the clinician in deciding on the best therapy for the clinically N0 neck in HNSCC. Well-designed clinical trials should be performed to test the outcome of omitting neck dissection by using PET/CT.
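The NPV figures in ACRIN 6685 depend on where the SUVmax cutoff is placed (2.5, 3.5, or the optimal 1.8). The following sketch shows how an NPV is recomputed as the cutoff moves; the (SUVmax, pathology) pairs are invented for illustration and do not reproduce the trial's values.

```python
# Sketch of how an SUVmax cutoff translates into a negative predictive value,
# as evaluated in ACRIN 6685. The data pairs are invented illustrations.
nodes = [(1.2, False), (1.5, False), (1.7, True), (2.1, False),
         (2.4, False), (2.8, True), (3.2, False), (4.0, True)]

def npv_at_cutoff(data, cutoff):
    """NPV = true negatives / all test negatives, calling SUVmax < cutoff 'negative'."""
    negatives = [path_pos for suv, path_pos in data if suv < cutoff]
    if not negatives:
        return float("nan")
    true_negatives = sum(1 for path_pos in negatives if not path_pos)
    return true_negatives / len(negatives)

for cutoff in (1.8, 2.5, 3.5):
    print(f"cutoff {cutoff}: NPV = {npv_at_cutoff(nodes, cutoff):.2f}")
```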
abstract_id: PUBMED:10605586 The diagnostic value of positron emission tomography (PET) with radiolabelled fluorodeoxyglucose (18F-FDG) in head and neck cancer. Positron emission tomography (PET) scanning has recently been introduced into clinical practice but its usefulness in the management of head and neck cancer is not well defined. The aim of this prospective preliminary study was to examine the clinical value of fluorodeoxyglucose (FDG)--PET in patients with head and neck cancer treated by radiotherapy with surgery in reserve by (i) relating quantitative uptake of isotope to tumour type and histological grade and (ii) comparing the imaging findings of PET and magnetic resonance imaging (MRI) in post-radiotherapy assessment of tumour response. Twenty-one patients had pre-treatment PET and MRI scans and these were repeated four and eight months after treatment if there was no clinical relapse. Pre-treatment uptake of FDG using tumour to cerebellar ratio parameters was significantly related to the histological grade of squamous cancer (p = 0.04) but not to tumour type. Discordance of post-treatment PET/MRI findings in one case indicates a possible role for PET in the early detection of tumour recurrence. Other potential uses of PET scanning in the management of head and neck cancer are discussed. abstract_id: PUBMED:24976927 Application of fluorodeoxyglucose positron emission tomography in the management of head and neck cancers. The use of fluorodeoxyglucose positron emission tomography (FDG PET) scan technology in the management of head and neck cancers continues to increase. We discuss the biology of FDG uptake in malignant lesions and also discuss the physics of PET imaging. The various parameters described to quantify FDG uptake in cancers including standardized uptake value, metabolic tumor volume and total lesion glycolysis are presented. PET scans have found a significant role in the diagnosis and staging of head and neck cancers. They are also being increasingly used in radiation therapy treatment planning. Many groups have also used PET derived values to serve as prognostic indicators of outcomes including loco-regional control and overall survival. FDG PET scans are also proving very useful in assessing the efficacy of treatment and management and follow-up of head and neck cancer patients. This review article focuses on the role of FDG-PET computed tomography scans in these areas for squamous cell carcinoma of the head and neck. We present the current state of the art and speculate on the future applications of this technology including protocol development, newer imaging methods such as combined magnetic resonance and PET imaging and novel radiopharmaceuticals that can be used to further study tumor biology. abstract_id: PUBMED:31169226 Prognostic value of response assessment fluorodeoxyglucose positron emission tomography-computed tomography scan in radically treated squamous cell carcinoma of head and neck: Long-term results of a prospective study. Objective: The objective of this study is to evaluate the diagnostic and prognostic ability of fluorodeoxyglucose positron emission tomography-computed tomography (FDG PET-CT) scan in patients with squamous cell carcinoma of the head and neck treated with chemoradiotherapy or radiotherapy only. Materials And Methods: Fifty-nine patients with HNSCC planned for radical nonsurgical treatment were randomized to receive either three-dimensional conformal radiotherapy or intensity-modulated radiation therapy. 
In addition to routine clinical examination and staging investigations, patients had an FDG PET-CT scan at baseline and on the first follow-up for response assessment. No evidence of clinicopathological disease for at least 6 months after the completion of treatment was considered confirmation of complete response. The presence or absence of disease during the follow-up period was used to calculate the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of PET-CT for the primary site and node. Results: At a median follow-up of 52.5 months, 55.6% of patients were alive and disease free. Response assessment PET-CT was done at a median of 9 weeks (range: 5-18 weeks). PET-CT assessment of the primary had sensitivity, specificity, PPV, and NPV of 81.8%, 93%, 75%, and 95.2%, respectively; the corresponding figures at the node were 44.4%, 95.6%, 66.7%, and 89.6%. The median baseline maximum standardized uptake value (SUVmax) at primary and node was 14.9 and 8.1, respectively. When PET-CT was done after 10 weeks, no false-positive or false-negative findings were seen. Patients with negative PET at the first follow-up had a significantly better progression-free and overall survival. Conclusions: Disease evaluation using PET-CT has an overall accuracy of 80%. High baseline SUVmax correlates with worse clinical outcomes. Negative PET-CT at the first follow-up is a predictor for survival. abstract_id: PUBMED:25043882 The relative prognostic utility of standardized uptake value, gross tumor volume, and metabolic tumor volume in oropharyngeal cancer patients treated with platinum based concurrent chemoradiation with a pre-treatment [(18)F] fluorodeoxyglucose positron emission tomography scan. Objectives: This study compared the relative prognostic utility of the Gross Tumor Volume (GTV), maximum Standardized Uptake Value (SUVmax), and Metabolic Tumor Volume (MTV) in a uniform cohort of oropharyngeal squamous cell carcinoma (OPSCC) patients treated with platinum-based concurrent chemoradiation therapy (CCRT). Methods And Materials: One hundred OPSCC patients with a pretreatment [(18)F] fluorodeoxyglucose (FDG) positron emission tomography-computed tomography (PET-CT) were treated with CCRT. Kaplan-Meier curves and Cox proportional hazard models were generated. Results: When dichotomized by the median, a smaller MTV correlated with improved 5-year locoregional control (LRC) (98.0% versus 87.0%, p=0.049), freedom from distant metastasis (FDM) (91.7% versus 65.0%, p=0.005), progression-free survival (PFS) (80.3% versus 56.7%, p=0.015), and overall survival (OS) (84.1% versus 57.8%, p=0.008), whereas a smaller GTV correlated with improved PFS (80.3% versus 57.4%, p=0.040) and OS (82.1% versus 60.1%, p=0.025). SUVmax failed to correlate with any outcome. On multivariate analysis, when adjusted for GTV, T-stage, and N-stage, a smaller MTV remained independently correlated with improved FDM, PFS, and OS. GTV failed to reach significance in the multivariate model. Conclusions: A smaller MTV correlates with improved LRC, FDM, PFS, and OS in OPSCC patients undergoing platinum-based CCRT. abstract_id: PUBMED:31899125 Prognostic value of 18F-FDG PET/CT parameters and histopathologic variables in head and neck cancer. Introduction: 18F-fluorodeoxyglucose positron emission tomography/computed tomography parameters such as maximum standardized uptake value, standard metabolic tumor volume and total lesion glycolysis are important prognostic biomarkers in cancers.
Objective: To investigate the prognostic value of these parameters in patients with head and neck cancers. Methods: We performed a retrospective study including 47 patients with head and neck cancer who underwent 18F-fluorodeoxyglucose positron emission tomography/computed tomography prior to treatment. Standard metabolic tumor volume, total lesion glycolysis and standardized uptake value were measured for each patient. The prognostic value of quantitative 18F-fluorodeoxyglucose positron emission tomography/computed tomography parameters and clinicopathologic variables for disease-free survival and overall survival was analyzed. Results: The median (range) standard metabolic tumor volume, total lesion glycolysis and standardized uptake value were 7.63 cm3 (0.6-34.3), 68.9 g (2.58-524.5), and 13.89 g/mL (4.89-33.03), respectively. Lymph node metastases and tumour differentiation were significant variables for disease-free survival and overall survival; however, none of the 18F-fluorodeoxyglucose positron emission tomography/computed tomography parameters was associated with disease-free survival or overall survival. Conclusion: Pretreatment quantitative positron emission tomography parameters did not predict survival in head and neck cancer. abstract_id: PUBMED:32448741 Benefits of positron emission tomography scans for the evaluation of radiotherapy. The assessment of tumour response during and after radiotherapy determines the subsequent management of patients (adaptation of treatment plan, monitoring, adjuvant treatment, rescue treatment or palliative care). In addition to its role in extension assessment and therapeutic planning, positron emission tomography combined with computed tomography provides useful functional information for the evaluation of tumour response. The objective of this article is to review published data on positron emission tomography combined with computed tomography as a tool for evaluating external radiotherapy for cancers. Data on positron emission tomography combined with computed tomography scans acquired at different times (during, after initial and after definitive [chemo-]radiotherapy, during post-treatment follow-up) in solid tumours (lung, head and neck, cervix, oesophagus, prostate and rectum) were collected and analysed. Recent recommendations of the National Comprehensive Cancer Network are also reported. Positron emission tomography combined with computed tomography with (18F)-labelled fluorodeoxyglucose has a well-established role in clinical routine after chemoradiotherapy for locally advanced head and neck cancers, particularly to limit the number of neck lymph node dissections. This imaging modality also has a place for the evaluation of initial chemoradiotherapy of oesophageal cancer, including the detection of distant metastases, and for the post-therapeutic evaluation of cervical cancer. Several radiotracers for positron emission tomography combined with computed tomography, such as choline, are also recommended for patients with prostate cancer with biochemical failure. (18F)-fluorodeoxyglucose positron emission tomography combined with computed tomography is optional in many other circumstances and its clinical benefits, possibly in combination with MRI, to assess response to radiotherapy remain a very active area of research. Answer: The standardized uptake value (SUV) from fluorodeoxyglucose positron emission tomography (FDG-PET) has been investigated to determine if it can predict the outcome of radiotherapy in head and neck cancer.
However, the evidence suggests that the predictive value of SUV for treatment outcomes is not clear-cut. One study found that the maximum SUV (SUVmax) may correlate with the T classification and stage of head and neck cancer, but it did not predict the outcome of radiation therapy. Additionally, SUVmax did not reflect the histological grade, nor was it correlated with mitotic or apoptotic status (PUBMED:19626409). Another study showed that clinicopathological features known to be poor prognostic factors, such as tumor size, depth of infiltration, and lymph node positivity, significantly impacted the FDG-PET SUV. Higher SUVs were associated with more advanced tumor stages and the presence of perineural invasion, lymphatic invasion, and extracapsular spread (PUBMED:23790108). In the case of adenoid cystic carcinoma (ACC) of the head and neck, pretreatment SUVmax of the primary tumor was found to be an independent prognostic factor for distant metastasis-free and disease-free survival (PUBMED:25524466). A multicenter trial determined that FDG-PET/CT has a high negative predictive value (NPV) for the clinically N0 neck in head and neck squamous cell carcinoma (HNSCC), suggesting that PET/CT may assist clinicians in deciding on the best therapy for the clinically N0 neck in HNSCC (PUBMED:30768363). In summary, while FDG-PET SUV can correlate with certain tumor characteristics and stages, its role as a standalone predictor of radiotherapy outcomes in head and neck cancer is not definitively established. It appears to be more useful when considered alongside other clinicopathological factors and may influence treatment planning rather than directly predict outcomes (PUBMED:19626409, PUBMED:23790108, PUBMED:25524466, PUBMED:30768363).
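Several of the abstracts above (e.g., PUBMED:25043882) dichotomize a PET parameter at its median and then compare survival between the resulting groups. As a rough illustration of that analysis pattern, here is a sketch using the lifelines library on synthetic follow-up data; the library choice and all numbers are our assumptions, since none of the studies name their software.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_by_median(marker, months, event, label="MTV"):
    """Split at the median of a continuous marker and compare the
    survival of the low vs. high groups with a log-rank test."""
    high = marker > np.median(marker)
    km = KaplanMeierFitter()
    for name, mask in ((f"low {label}", ~high), (f"high {label}", high)):
        km.fit(months[mask], event_observed=event[mask], label=name)
        print(name, "median survival (months):", km.median_survival_time_)
    res = logrank_test(months[~high], months[high],
                       event_observed_A=event[~high],
                       event_observed_B=event[high])
    print("log-rank p =", res.p_value)

# Synthetic cohort: metabolic tumor volume, follow-up time, death indicator.
rng = np.random.default_rng(0)
mtv = rng.lognormal(2.0, 0.6, 100)
months = rng.exponential(60 / (1 + (mtv > np.median(mtv))), 100)
event = rng.random(100) < 0.6
compare_by_median(mtv, months, event)
```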
Instruction: Can long-term hearing preservation be expected in children following cholesteatoma surgery? Abstracts: abstract_id: PUBMED:22948389 Can long-term hearing preservation be expected in children following cholesteatoma surgery? Results from a 14-year-long study of atticotomy-limited mastoidectomy with cartilage reconstruction. Objective: Following cholesteatoma surgery, effective long-term hearing preservation in children is difficult and is not typically expected. Hence, long-term data on hearing outcomes are lacking. The aim of this study was to analyze long-term hearing outcomes in children following cholesteatoma surgery. Methods: For this study, 49 ears in 47 children (≤16 years) with acquired cholesteatomas following atticotomy-limited mastoidectomy with cartilage reconstruction (inside-out approach) during 1986-2010 were included. Pre- and post-operative recidivism-free audiometric results were compared. Hearing success was defined as a post-operative air conduction (AC) threshold of ≤30 dB (serviceable hearing). Logistic regression analyses were used to evaluate potential prognostic factors that independently contributed to the prediction of hearing success. These factors included stapes condition, pre-operative AC threshold, ossicular chain integrity, disease severity, age, and gender. Results: The mean duration of follow-up was 14.2 years. The post-operative AC (33.55 ± 15.42 dB) and air-bone gap (17.88 ± 12.94 dB) were significantly improved compared with the pre-operative AC (42.90 ± 16.47 dB, p < 0.001) and air-bone gap (30.23 ± 13.68 dB, p < 0.001). The probability of hearing success following surgery (40.8%) was significantly higher than prior to surgery (24.5%, p = 0.008). Multivariate logistic regression analyses revealed a statistically significant correlation between hearing success and stapes integrity only (p = 0.005). Conclusions: This study provides important information on effective long-term hearing preservation over a mean follow-up of 14 years. In addition, stapes destruction is an independent negative prognostic determinant of achieving hearing success. The prediction model in this study provides otologists with useful pre-operative information to inform patients and parents on expected hearing outcomes and may be useful for post-operative observations. abstract_id: PUBMED:34900435 Long-term follow-up of applying autologous bone grafts for reconstructing tympanomastoid defects in functional cholesteatoma surgery. Objectives: This study investigated the long-term surgical outcomes of functional cholesteatoma surgery with canal wall reconstruction using autologous bone grafts as the primary material in patients with acquired cholesteatoma. Subjects And Methods: Medical charts were retrospectively reviewed for all patients admitted to one institution for surgical intervention between 2010 and 2018. We analyzed 66 patients (66 ears) who underwent functional tympanomastoidectomy involving the use of autologous bone grafts for canal wall defect reconstruction. Surgical outcomes were evaluated by comparing preoperative audiometric results with follow-up data (at least 36 months after surgery). Logistic regression analyses were performed to determine prognostic factors related to long-term hearing success. These factors included classification and stage of cholesteatoma, stapes condition, ossicular chain damage, active infection of the middle ear, state of the contralateral ear, preoperative hearing thresholds, gender, and age. 
Results: The mean follow-up period was 49.2 months. The recidivism rate was 6% (four of 66 ears). The pure-tone average significantly improved from 50.78 ± 19.98 to 40.81 ± 21.22 dB hearing level (HL; p < 0.001). Air-bone gaps significantly improved from 26.26 ± 10.53 to 17.58 ± 8.21 dB HL (p < 0.001). In multivariate logistic regression analysis, early-stage disease (p = 0.021) and pars flaccida cholesteatoma (p = 0.036) exhibited statistically significant correlations with successful hearing preservation. Conclusion: Functional cholesteatoma surgery with autologous bone grafts reconstruction is an effective approach to significantly improve hearing with low recidivism rates. Localized disease and pars flaccida cholesteatoma were two independent predictors of successful hearing preservation. abstract_id: PUBMED:28106625 Long-Term Hearing Results After Ossiculoplasty. Objectives: To determine if the OOPS index is predictive of long-term hearing results after ossiculoplasty. Study Design: Case series with retrospective chart review. Setting: Tertiary care otology practice. Patients: Adult and pediatric patients (3-88 years of age). Interventions: Ossiculoplasty with cartilage tympanoplasty, with or without mastoidectomy. Outcome Measures: Primary outcome measures included short-term hearing results (pure-tone average air-bone gap [PTA-ABG] measured between 60 days and 1 year after surgery), long-term hearing results (PTA-ABG measured ≥5 years after surgery), and the rate of successful ABG closure to ≤20 dB. Secondary measures included the need for revision surgery, delayed tympanic membrane graft failure, worsening conductive hearing loss (after an initially satisfactory hearing result), and recurrence of cholesteatoma. Results: There was no significant difference between adults and children for short-term hearing results (average post-op PTA-ABG was 18.9 dB vs. 19.8 dB, respectively; p = 0.544), long-term hearing results (average final PTA-ABG was 19.3 dB vs. 19.4 dB, respectively; p = 0.922), or rate of ABG closure to less than 20 dB (63.1% vs. 58.0%, p = 0.282). Spearman's rank-order correlation (ρ) identified a strong positive correlation between OOPS index score and average post-operative PTA-ABG (ρ = 0.983; p < 0.001; 2-tailed), as well as average long-term PTA-ABG (ρ = 0.950, p < 0.001; 2-tailed). Conclusions: The OOPS index makes it possible to accurately prognosticate hearing outcomes in adult and pediatric patients undergoing ossiculoplasty in both the short term and the long term. abstract_id: PUBMED:34797992 Petrous Bone Cholesteatoma: Facial and Hearing Preservation. Objectives: Petrous bone cholesteatoma (PBC) is a rare condition of the petrous portion of the temporal bone. Treatment of choice consists of radical surgical removal, paying attention to protect the facial nerve and inner ear as far as possible. The aim of the present study was to evaluate the efficacy of modified translabyrinthine techniques in preserving hearing function and the use of the adjuvant endoscopic techniques in a group of PBC patients. Methods: This study comprised 16 cases of PBCs surgically treated in our Department. Pre- and post-operative hearing status was assessed with pure tone audiometry and speech discrimination and graded according to the Gardner- Robertson classification system. Facial function was based on the House Brackman (HB) classification. PBCs were grouped using Sanna's classification. 
The choice of surgical technique was based on the above findings together with preoperative evidence. Post-operative follow-up ranged from 1 to 10 years and also included Computed Tomography and Magnetic Resonance Imaging assessment. Results: PBCs were classified as follows: 37.5% infralabyrinthine; 43.75% supralabyrinthine; and 18.75% massive. Preservation of the bone conduction threshold was feasible in 62.5% of patients. For supralabyrinthine PBCs, a subtotal petrosectomy was performed in all cases and hearing preservation was possible in 57% of them: an adjuvant endoscopic approach was performed in 43%. Infralabyrinthine PBCs were treated using a modified translabyrinthine approach with preservation of bone conduction in 83% of patients; an adjuvant endoscopic approach was performed in 50% of cases. One patient with a massive cholesteatoma was treated by modified translabyrinthine approach, preserving a serviceable level of hearing. In all massive cases, an adjuvant endoscopic approach was performed. In 2 patients with preoperative palsy, facial nerve function showed an improvement. The follow-up period revealed evidence of limited recurrence at CT imaging in 2 patients. Conclusions: The introduction of modified surgical approaches, able to preserve the anatomical-functional structures, has shown an improvement in post-operative hearing outcomes. abstract_id: PUBMED:32200311 Endoscopic findings and long-term hearing results for pediatric unilateral conductive hearing loss. Objectives: Analyze reasons for unilateral conductive hearing loss (CHL) with unknown etiology in children. Introduction: Unilateral conductive hearing loss (HL) without known etiology can be undiagnosed despite hearing screening programs. It can be difficult to find the reason for HL and to make a treatment plan. Middle ear endoscopy gives hard-evidence diagnosis and basis for an individual treatment plan. Methods And Material: Prospective clinical follow-up study for a cohort of generally healthy elementary school age children with unilateral conductive HL with unknown etiology. The study population was 192 children, of whom 46 had an HL of at least 25 dB with more than 10 dB conductive component. Mean age was 8.7 years. Preoperative tests included otomicroscopy, bone- and air-conduction audiogram, tympanometry, stapes reflex tests, Rinne and Weber tests and otoacoustic emissions. The children underwent endoscopy of the middle ear with an individual treatment plan and long-term follow-up. The aim was to explore etiology and to give a treatment plan for hearing loss. Follow-up included air- and bone conduction hearing tests annually or every other year. Mean follow-up was 5.2 years. Results: A clear etiological finding was found in 36 (78%) ears, with stapes anomaly (23) the most common (64%) finding. Other findings were two cholesteatomas, two post-traumatic conditions, five middle ear anomalies, five incus fixations and one incus erosion. Air conduction hearing improved spontaneously during follow-up in 81% (17/21, 2 dropouts) of the stapes anomaly ears (mean 11.3 dB, range 4-32 dB), and none of these ears showed hearing deterioration. In the incus fixation group, one ear showed hearing deterioration. There were no major complications of exploration, and five minor postoperative infections. Conclusions: The most common reason for pediatric unilateral conductive hearing loss was stapes anomaly/fixation. The HL does not deteriorate. Hearing loss in stapes anomalies shows a tendency for spontaneous recovery.
Stapes surgery can be postponed or avoided. abstract_id: PUBMED:30776961 Long-Term Hearing Outcomes following Total Ossicular Reconstruction with Titanium Prostheses. Objective: (1) Characterize a large cohort of patients undergoing total ossicular chain reconstruction with titanium prostheses. (2) Analyze long-term hearing outcomes of the same cohort. Study Design: Case series with chart review. Setting: Tertiary care center. Subject And Methods: This study reviews patients who underwent total ossicular chain reconstruction (OCR) with titanium prostheses (TORPs) at a single tertiary care center from 2005 to 2015. Patient charts were reviewed for demographic data, diagnosis, and operative details. Patients were included in statistical analysis if length of follow-up was 2 years or more. Evaluation of hearing improvement was made by comparing preoperative air-bone gap (ABG) and ABG at follow-up at 2 years. Results: In total, 153 patients were identified who met inclusion criteria. The mean age of included patients was 40 years (range, 6-89 years). Sixty patients (39%) had a history of OCR, and 120 patients (78%) had a diagnosis of cholesteatoma at the time of OCR. Preoperatively, the mean ABG was 36 ± 12 dB, whereas the mean ABG at 2-year follow-up improved to 26 ± 13 dB. This was statistically significant (P < .0001) using a Wilcoxon matched-pairs signed rank test. Twelve patients (8%) required revision OCR. Two revisions were performed due to prosthesis extrusion (<1%). Conclusion: Titanium prostheses lead to significant improvement in hearing over long periods. The results are sustained as far out as 5 years following surgery. In addition, rates of revision surgery with titanium TORPs are low. Based on this series, there are no readily identifiable predictors for outcomes following total OCR. abstract_id: PUBMED:32584395 Hearing Preservation in Patients Who Undergo Labyrinthectomy and Translabyrinthine Procedures: A Case Report and Systematic Review. Importance: Preservation of endolymphatic fluids, maintenance of a fluid-filled vestibule, and preservation of the cochlear nerve and its vasculature are believed to be necessary for retaining hearing after an inner-ear operation. However, some studies have reported no hearing loss despite the violation of the vestibule, questioning the importance of maintaining a fluid-filled vestibule in preserving hearing. Objective: To report on the preservation of hearing after a complete labyrinthectomy for Meniere disease and after disruption of the vestibule. Evidence Review: This systematic review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines. A search of PubMed, Scopus, Ovid, and Cochrane Library databases was conducted to identify English-language articles on hearing preservation after labyrinthectomy, published from 1947 through December 11, 2019. The search strategy used a combination of Boolean operators and included the following Medical Subject Heading terms and keywords: hearing preservation, labyrinth surgery, labyrinthectomy, vestibule violation, vestibule disruption, translabyrinthine approach, schwannoma removal, and semicircular canal ablation. Studies that described disruption of the vestibule together with hearing preservation were included. Findings: This systematic review identified 10 studies with 10 patients who underwent surgical removal of cholesteatoma or vestibular schwannoma and displayed postoperative hearing preservation.
This study also reported on 1 patient with Meniere disease who retained hearing after undergoing a complete labyrinthectomy. Among these 11 patients, the mean (range) age was 45.1 (27-55) years, and 8 patients (73%) were women. Multiple theories exist that explain the mechanism behind hearing preservation, such as sealing of the ductus reuniens or closure of the remaining vestibule. Conclusions And Relevance: This systematic review describes a set of patients who did not experience hearing loss after a labyrinthectomy or surgical violation of the vestibule, which seems to contradict prevailing principles for retaining hearing after inner-ear surgical procedures. This finding suggests that hearing preservation is possible after labyrinthine destruction despite the absence of a fluid-filled vestibule and that other mechanisms, such as occlusion of the ductus reuniens with granulation tissue or bone dust, may be sufficient to achieve that outcome. abstract_id: PUBMED:18097335 Long-term hearing outcomes after ossiculoplasty in comparison to short-term outcomes. Objectives: To investigate long-term hearing outcomes after ossiculoplasty. Study Design: Retrospective study. Setting: Tertiary referral center. Patients: One hundred ninety-nine patients who underwent ossiculoplasty between 1989 and 2001 by the same surgeon and were followed for more than 5 years. Methods: Postoperative hearing outcomes were considered successful if the postoperative air-bone gap was within 20 dB. The outcomes after 6 months and 5 years were analyzed. Results: The overall rate of successful outcomes was 61.3% after 6 months and 54.3% after 5 years. There was a significant difference between 6 months and 5 years (p < 0.05). The significant deterioration of long-term hearing outcomes was found in malleus-present ears, ears with cholesteatoma/atelectasis, adults, and ears treated by one-stage operation. The reasons for long-term unsuccessful outcomes were adhesion or retraction of the ear drum in 13 ears (54.2%). Conclusion: The hearing outcomes after ossiculoplasty showed significant deterioration on long-term follow-up. Innovative approaches to the management of cholesteatoma/atelectasis are necessary to improve and stabilize long-term hearing outcomes. abstract_id: PUBMED:17176796 Why do some children have good hearing results following type III and IV tympanoplasty? Current theories of middle ear mechanics. Introduction: Middle ear reconstruction in children following tympanomastoidectomy for cholesteatoma is commonly limited to a Wullstein type III or IV reconstruction owing to ossicular erosion. The hearing outcomes of this procedure have been unpredictable. Nevertheless, there are children who have remarkably good hearing results despite having extensive and aggressive cholesteatoma surgery and limited reconstruction. Methods: The current theories of middle ear mechanics following tympanoplasty and ossicular reconstruction are reviewed. In addition, a selective retrospective chart review of pediatric type III and IV tympanoplasty at The Hospital for Sick Children between 1998 and 2003 is presented. Results: Nine patients were reconstructed with a type III (n = 3) or IV (n = 6) tympanoplasty. The mean pre- and postoperative air-bone gaps were 43.6 and 24.9 dB. Speech reception threshold improved from 37.5 to 22.8 dB. The changes were statistically significant (p < .05). Conclusions: This series of patients demonstrated a statistically significant hearing improvement at long-term follow-up. 
The improvements are consistent with optimal hearing outcomes predicted by current theories of middle ear mechanics. abstract_id: PUBMED:33210744 Long-Term Stability and Safety of the Soundbridge Coupled to the Round Window. Objective: The objective of the study was to demonstrate the long-term outcomes of patients implanted with the active middle ear implant (AMEI) Vibrant Soundbridge (VSB) through coupling the floating mass transducer (FMT) to the round window (RW). Methods: This retrospective study evaluated the short- and long-term clinical performance (audiological outcomes) and safety (revisions/explantations) of the VSB coupled to the RW between 2013 and 2019 at the St. Pölten University Hospital, Austria. For the outcome analysis, the sample was divided into a short-term examination group followed up for less than 12 months (<12 months) and a long-term examination group followed up for more than 12 months (>12 months). Cumulative survival outcomes were separately analyzed for subjects with and without cholesteatoma. Results: 46 patients with an average long-term follow-up period of 31.43 months (13-75 months) were investigated. Complications requiring revision surgery were reported in total in seven patients with cholesteatoma (15.2%) and none in subjects without cholesteatoma (0%). Residual hearing was not affected by VSB surgery. Word understanding on the Freiburger monosyllabic speech test improved significantly at 65 dB (P < .0001) and 80 dB (P < .0001), and these outcomes were stable for long-term follow up. Conclusion: The VSB coupled to the RW is a safe implantation method for patients with conductive or mixed hearing loss. Hearing improvement was stable for the long-term follow-up up to 74 months. The revision rates are directly related to the underlying pathology of cholesteatoma (with radical cavity); thus, this special cohort requires additional counseling on potential complications. Level Of Evidence: 4 (Case-series) Laryngoscope, 131:E1434-E1442, 2021. Answer: Long-term hearing preservation in children following cholesteatoma surgery can be expected to some degree, as evidenced by various studies. A 14-year-long study of atticotomy-limited mastoidectomy with cartilage reconstruction showed that post-operative air conduction thresholds and air-bone gaps were significantly improved compared to pre-operative levels, with a higher probability of hearing success following surgery. Stapes integrity was identified as an independent positive prognostic determinant of achieving hearing success (PUBMED:22948389). Another study with a mean follow-up period of 49.2 months reported significant improvements in pure-tone average and air-bone gaps after functional cholesteatoma surgery with autologous bone grafts reconstruction, with early-stage disease and pars flaccida cholesteatoma being independent predictors of successful hearing preservation (PUBMED:34900435). Additionally, the OOPS index was found to be predictive of long-term hearing results after ossiculoplasty, which is often part of cholesteatoma surgery, indicating that accurate prognostication of hearing outcomes is possible (PUBMED:28106625). Modified surgical approaches, such as modified translabyrinthine techniques and adjuvant endoscopic techniques, have shown improvements in post-operative hearing outcomes in patients with petrous bone cholesteatoma (PUBMED:34797992). 
Furthermore, a study on pediatric unilateral conductive hearing loss found that stapes anomalies, a common reason for hearing loss, showed a tendency for spontaneous recovery, suggesting that hearing loss does not deteriorate and that stapes surgery can sometimes be postponed or avoided (PUBMED:32200311). Titanium prostheses used in total ossicular chain reconstruction have also led to significant and sustained hearing improvement over long periods, with low rates of revision surgery (PUBMED:30776961). However, it is important to note that long-term hearing outcomes can deteriorate over time, as shown in a study where the rate of successful outcomes decreased from 61.3% after 6 months to 54.3% after 5 years, with adhesion or retraction of the eardrum being a significant reason for long-term unsuccessful outcomes (PUBMED:18097335). Despite these challenges, some children have shown good hearing results following type III and IV tympanoplasty, which is consistent with optimal hearing outcomes predicted by current theories of middle ear mechanics (PUBMED:17176796).
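A recurring analysis in the hearing abstracts above is a paired pre- versus post-operative comparison of the air-bone gap, e.g., the Wilcoxon matched-pairs signed rank test in PUBMED:30776961. The sketch below shows that computation with scipy on hypothetical per-ear values; the numbers are illustrative only, not data from any cited study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-ear air-bone gaps in dB (same ears before/after surgery);
# the real studies above used cohorts of 49 to 199 ears.
pre_abg = np.array([30, 28, 35, 41, 26, 33, 29, 38, 44, 31])
post_abg = np.array([18, 20, 22, 30, 15, 19, 21, 27, 33, 24])

print(f"mean ABG: {pre_abg.mean():.1f} -> {post_abg.mean():.1f} dB")
print("paired t-test p =", stats.ttest_rel(pre_abg, post_abg).pvalue)
print("Wilcoxon signed-rank p =", stats.wilcoxon(pre_abg, post_abg).pvalue)
```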
Instruction: Does the smoking status of general practitioners affect the efficacy of smoking cessation counselling? Abstracts: abstract_id: PUBMED:18818045 Does the smoking status of general practitioners affect the efficacy of smoking cessation counselling? Objective: To examine the association between the smoking status of general practitioners (GPs) and abstinence rates among patients receiving GP-delivered brief advice for smoking cessation. Methods: A quasi-experimental multilevel study with follow-up assessments at 6, 12, 18, and 24 months after baseline was conducted using a random sample of 39 general practices in a defined area (participation rate=87.2%). Patients aged 18-70 were consecutively screened for smoking status (n=11,560) over the course of 3 weeks and were assigned to a control group (week 1), a computer expert system intervention (week 2), or a personal counselling intervention with the GP (week 3). For the current analysis, patients participating in study week 2 were excluded. A total of 1260 patients fulfilled the inclusion criteria and 80.2% took part: 609 patients in study week 1 and 402 patients from study week 3. GPs participated in a training session concerning smoking counselling, which was held between study weeks 2 and 3. Self-reported 4-week and 6-month prolonged abstinence measures at the 6-, 12-, 18-, and 24-month follow-ups were assessed. Results: The smoking status of the GP was neither significantly related to 4-week prolonged abstinence nor 6-month prolonged abstinence among patients in a main effects model. Further modelling revealed that the intervention group modified the effect of the non-smoking status of the GP on the likelihood to quit smoking. A significant interactive effect was found between the non-smoking status of the GP and the intervention group on both abstinence measures. Conclusion: The non-smoking status of the GP had a positive effect among counselled patients. Practice Implications: The consideration of lifestyle behavioural variables such as the smoking status of the GP will be essential for further research concerning the efficacy of smoking interventions. abstract_id: PUBMED:36575392 Smoking quit rates among patients receiving pharmacotherapy who received general practitioner counselling versus intensive counselling: a retrospective cohort study. Background: Behavioral treatments can augment the success of pharmacotherapy in smoking cessation. The aim of this study was to compare smoking quit rates between patients receiving individual counseling with their general practitioner during office visits or intensive counselling with behavioral support, both augmented by varenicline. Methods: A nationwide retrospective cohort study conducted in a large Healthcare Maintenance Organization in Israel. We selected randomly patients who filled a prescription for varenicline and received either individual consulting by their general practitioner or intensive counselling with behavioural support, and asked them to answer a questionnaire. The outcome variables were smoking cessation 26-52 weeks following the beginning of treatment and satisfaction with the process. Results: 870 patients were contacted and 604 agreed to participate (a response rate of 69%); 301 patients in the general practitioner group, 300 in the intensive counselling group and 3 were excluded due to missing date. The quit rate was 36.5% in the general practitioner group and 42.3% in the intensive counselling group (P = 0.147). 
In a logistic regression analysis, controlling for age, gender, socioeconomic status, ischemic heart disease, chronic obstructive pulmonary disease, pack years and duration of varenicline consumption, the adjusted OR for quitting in the general practitioner group was 0.79 (95% CI 0.56,1.13). The adjusted OR was higher in the group with the highest socioeconomic status at 2.06 (1.39,3.07) and a longer period of varenicline consumption at 1.30 (1.15,1.47). Age, gender and cigarette pack-years were not associated with quit rate. In the general practitioner group 68% were satisfied with the process, while 19% were not. In the intensive counselling group 64% were satisfied and 14% were not (P = 0.007). Conclusion: We did not detect a statistically significant difference in smoking quit rates, though there was a trend towards higher quit rates with intensive counselling. abstract_id: PUBMED:28103398 Cognitive, behavioural and psychosocial factors associated with successful and maintained quit smoking status among patients who received smoking cessation intervention with nurses' counselling. Aim: To identify cognitive, behavioural and psychosocial factors associated with successful and maintained quit smoking status after patients received smoking cessation intervention with nurses' counselling. Background: Although nurses' intervention for smoking cessation is effective for patients, few studies have been conducted to identify cognitive, behavioural and psychosocial factors associated with the success of quitting smoking. Design: Prospective cohort study METHODS: In a multi-institutional study between October 2008 - October 2014, we administered the Japanese smoking cessation therapy, which consists of smoking cessation intervention five times with nurses' counselling over 12 weeks. Log-binomial regression analysis was performed in 1,320 participants using the following independent variables: age, gender, having a present illness, prescription, Fagerström test for nicotine dependence, strength of desire to quit, age at smoking initiation, previous abstinence, motivation of quit smoking, self-efficacy of quit smoking and Center for Epidemiologic Studies Depression Scale. Factors associated with maintained cessation for 12 months were identified in the 541 abstainers at the end of the intervention. Results: Having higher self-efficacy to quit smoking as assessed before the intervention was significantly associated with the success of quitting smoking at the end of the intervention. Strong desire to smoke as assessed at the end of the intervention was associated with significantly increased risk of discontinuing cessation during the 12 months after the end of the intervention. Conclusion: It is important for nurses who provide smoking cessation intervention to reinforce patients' self-efficacy and to control the strength of the patients' desire to smoke by behavioural counselling. abstract_id: PUBMED:19533584 Readiness to provide smoking cessation counselling--results of a survey among general practitioners in Brandenburg Background: The effectiveness of brief interventions on smoking cessation together with regular visits to the general practitioners (GP) has been proven. Nevertheless, the guidelines for smoking cessation are not currently implemented sufficiently. A lack of financial resources, time, and consulting abilities prevent GPs from offering systematic advice on smoking cessation. 
This study examines 1) to what extent GPs ask their patients to provide information about their smoking habits and to what extent they document this, 2) how willing and 3) how confident GPs are to offer all smoking patients counselling, and 4) which factors influence their level of confidence. Methods: From August 2005 until May 2006, a questionnaire was sent to all 1,247 GPs in Brandenburg. After 68 practices were excluded for several reasons (closed practice, death, not providing primary care), a total of 54.0% (n=637) of the GPs took part. Results: 30.0% of the GPs documented the smoking status of their patients during the first consultation. 12.9% had already offered advice to all their smoking patients, while 27.6% were not willing to offer advice to all smoking patients. The average confidence of GPs to offer all smoking patients advice on smoking cessation was 4.1 (SD=2.6) on a scale of 1 to 10 (1=not at all confident and 10=very confident). The confidence of non-smoking GPs to offer advice was higher than that of smoking GPs. Consequences: To motivate GPs to offer advice on smoking cessation, it seems necessary to change some conditions. This includes programmes, initiated by professional medical associations, to help colleagues stop smoking. Further studies should indicate whether the inclusion of practice colleagues in screening and regular updates of the patient's smoking status increase the frequency of regular counselling. abstract_id: PUBMED:29679105 The practices of French general practitioners regarding screening and counselling pregnant women for tobacco smoking and alcohol drinking. Objectives: Our study aims to describe French general practitioners' (GPs') practices toward pregnant patients regarding alcohol consumption and smoking and to highlight factors associated with specific practices. Methods: In 2015, a representative sample of 1414 French GPs completed a telephone survey based on stratified random sampling. Results: 61% of GPs declared screening for alcohol use and 82% for smoking at least once with each pregnant patient; quitting was not systematically advised either for alcohol or for smoking. GPs' practices were significantly better among those who had more recent ongoing training. GPs who drank regularly were less likely to screen for alcohol use and GPs' drinking frequency was inversely related to recommending quitting. Current and former smokers were less likely to recommend quitting to pregnant patients smoking over five cigarettes per day. Conclusions: Screening and counselling practices for substance use during pregnancy are heterogeneous among French GPs and are notably related to their personal consumption. GPs' role in preventing substance use during pregnancy could be strengthened by actions regarding their own consumption and by modifications in their initial and ongoing training. abstract_id: PUBMED:25657051 Combining intensive practice nurse counselling or brief general practitioner advice with varenicline for smoking cessation in primary care: study protocol of a pragmatic randomized controlled trial. Introduction: Combining behavioural support and pharmacotherapy is most effective for smoking cessation and recommended in clinical guidelines. Although smoking cessation assistance from the general practitioner can be effective, dissemination of clinical practice guidelines and efforts at upskilling have not led to the routine provision of smoking cessation advice among general practitioners.
Intensive counselling from the practice nurse could contribute to better smoking cessation rates in primary care. However, the effectiveness of intensive counselling from a practice nurse versus usual care from a general practitioner in combination with varenicline is still unknown. Materials And Methods: A pragmatic randomized controlled trial was conducted comparing: (a) intensive individual counselling delivered by a practice nurse and (b) brief advice delivered by a general practitioner; both groups received 12 weeks of open-label varenicline. A minimum of 272 adult daily smoking participants were recruited and treated in their routine primary care setting. The primary outcome was defined as prolonged abstinence from weeks 9 to 26, biochemically validated by exhaled carbon monoxide. Data were analysed blinded according to the intention-to-treat principle and participants with missing data on their smoking status at follow-up were counted as smokers. Secondary outcomes included: one-year prolonged abstinence, short-term incremental cost-effectiveness, medication adherence, and baseline predictors of successful smoking cessation. Discussion: This trial is the first to provide scientific evidence on the effectiveness, cost-effectiveness, and potential mechanisms of action of intensive practice nurse counselling combined with varenicline under real-life conditions. This paper explains the methodology of the trial and discusses the pragmatic and/or explanatory design aspects. Trial Registration: Dutch Trial Register NTR3067. abstract_id: PUBMED:19137221 Smoking cessation in Germany in 2007. Patterns of outpatient smoking cessation counselling and treatment. A qualified and comprehensive supply of outpatient smoking cessation counselling and treatment is essential to help smokers quit. In order to assess the status quo, structure and regional differences of the smoking cessation services available in Germany, a complete market survey was conducted in 2007. Descriptive results on the structure and characteristics of smoking cessation services show that there are distinct regional differences, especially in the New Laender, where the supply is insufficient. Overall, about a quarter of the providers of smoking cessation services lack specialised additional skills required for smoking cessation counselling, treatment or therapy of addicts. Medical practitioners in particular lack such additional skills more often than other occupational groups. The smoking cessation services generally used the programs "Rauchfrei in 10 Schritten" and "Das Rauchfrei Programm", and the method of cognitive behavioural therapy. However, more than half of the services still use a method which is not evidence-based. Since February 2008, data on all smoking cessation services and providers who gave their permission have been published online and can be accessed at www.anbieter-raucherberatung.de. abstract_id: PUBMED:25649312 Missed opportunities: general practitioner identification of their patients' smoking status. Background: In order to provide smoking cessation support to their patients in line with clinical practice guidelines, general practitioners must first ascertain whether their patients use tobacco. This study examined (i) the sensitivity, specificity, positive predictive value and negative predictive value of general practitioner detection of smoking, and (ii) the general practitioner and patient characteristics associated with detection of tobacco use.
Methods: Eligible patients completed a touchscreen computer survey while waiting for an appointment with their general practitioner. Patients self-reported demographic characteristics, medical history, and current smoking status. Following the patient's consultation, their general practitioner was asked to indicate whether the patient was a current smoker (yes/no/unsure/not applicable). Smoking prevalence, sensitivity, specificity, positive predictive value and negative predictive value (with 95% confidence intervals) were calculated using patient self-report of smoking status as the gold standard. Generalised estimating equations were used to examine the general practitioner and patient characteristics associated with detection of tobacco use. Results: Fifty-one general practitioners and 1,573 patients in twelve general practices participated. Self-reported smoking prevalence was 11.3%, compared with a general practitioner-estimated prevalence of 9.5%. Sensitivity of general practitioner assessment was 66% [95% CI 59-73] while specificity was 98% [95% CI 97-98]. Positive predictive value was 78% [95% CI 71-85] and negative predictive value was 96% [95% CI 95-97]. No general practitioner factors were associated with detection of smoking. Patients with a higher level of education or who responded 'Other' were less likely to be detected as smokers than patients who had completed a high school or below level of education. Conclusion: Despite the important role general practitioners play in providing smoking cessation advice and support, a substantial proportion of general practitioners do not know their patients' smoking status. This represents a significant missed opportunity in the provision of preventive healthcare. Electronic waiting room assessments may assist general practitioners in improving the identification of smokers. abstract_id: PUBMED:19440272 Smoking cessation counselling for pregnant and postpartum women among midwives, gynaecologists and paediatricians in Germany. The incorporation of guidelines for the treatment of tobacco smoking into routine care requires positive attitudes, counselling skills and knowledge about additional help available for smokers. The study assesses performance of smoking cessation intervention, attitudes, training status and knowledge about additional help for smokers in the care for pregnant and parenting women by midwives, gynaecologists and paediatricians. A survey of all midwives, gynaecologists and paediatricians registered for primary medical care in the federal state Saarland, Germany, was conducted. Participation in the postal questionnaires was 85%. Depending on profession, 90% to 100% see smoking cessation counselling as their assignment, 17% to 80% screen for smoking status, 48% to 90% document it, and 55% to 76% offer brief or extensive counselling. 61% to 87% consider training to enhance their knowledge and/or counselling skills necessary. The compliance of providers with the necessity to give support in smoking cessation is very high. However, the current status of cessation counselling does not sufficiently correspond to the evidence-based requirements. Reports in the medical press and advanced training courses should support health care providers and establish smoking as an inherent topic of the anamnesis and treatment of current and former pregnant or parenting smokers.
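The sensitivity, specificity, PPV and NPV figures in PUBMED:25649312 come from a standard 2x2 table with patient self-report as the gold standard. A minimal sketch of those formulas follows; the counts are hypothetical, back-calculated only roughly from the reported sample size (1,573) and 11.3% prevalence, so the outputs approximate rather than reproduce the published values.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from 2x2 counts, where
    'positive' means the GP judged the patient a smoker and the truth
    is the patient's self-report."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Roughly consistent with 1,573 patients and 11.3% self-reported smokers:
print(diagnostic_metrics(tp=117, fp=28, fn=61, tn=1367))
# -> sensitivity ~0.66, specificity ~0.98, ppv ~0.81, npv ~0.96
```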
abstract_id: PUBMED:21735847 Generalized self-efficacy in relation to anti-health behaviours of general practitioners exemplified by smoking. The purpose of the present work is to assess the generalized self-efficacy of general practitioners and to investigate its relationship with socio-demographic factors and risky behaviours, taking smoking as an example. The research participants comprised 500 general practitioners aged 27-75 living in Poland. The participants' level of generalized self-efficacy was measured first; then the relationship between generalized self-efficacy, socio-demographic variables and GPs' attitudes towards smoking was investigated. The average level of GPs' generalized self-efficacy was 29.425 +/- 3.64, which is within the high reference range. There was a statistically significant relationship between self-efficacy and participants' sex (men achieved higher scores than women). No significant differences in self-efficacy were found in terms of age or marital status. Statistically significant relationships between generalized self-efficacy and smoking were found only for general practitioners addicted to smoking and smoking every day (in this group the GSES score was 29.34, SD = 3.60, while in the non-smokers' group it was significantly higher at 31.19, SD = 4.23). When implementing protective actions related to smoking, special attention needs to be devoted to the psychological resources stimulating pro-health activity, including generalized self-efficacy. Answer: The smoking status of general practitioners (GPs) does appear to have an effect on the efficacy of smoking cessation counselling. According to a study with a quasi-experimental multilevel design, the non-smoking status of the GP had a positive effect among counselled patients, indicating that patients receiving personal counselling intervention from non-smoking GPs were more likely to quit smoking (PUBMED:18818045). This finding suggests that the personal health behaviors of GPs, such as their smoking status, can influence the outcomes of smoking cessation interventions they deliver. However, it is important to note that the overall effectiveness of smoking cessation counselling can also be influenced by other factors, such as the intensity of the counselling and the use of pharmacotherapy. For instance, a retrospective cohort study found no statistically significant difference in smoking quit rates between patients receiving general practitioner counselling versus intensive counselling, both augmented by varenicline, although there was a trend towards higher quit rates with intensive counselling (PUBMED:36575392). Additionally, the readiness and confidence of GPs to provide smoking cessation counselling can be affected by their own smoking status, with non-smoking GPs showing higher confidence in offering advice compared to their smoking counterparts (PUBMED:19533584). Moreover, GPs' personal consumption of substances like alcohol and tobacco can influence their screening and counselling practices, with those who consume these substances less likely to recommend quitting to their patients (PUBMED:29679105).
Instruction: Isolated clinical hypertension diagnosis: self-home BP, ambulatory BP monitoring, or both simultaneously? Abstracts: abstract_id: PUBMED:36229535 Diagnostic agreement of masked uncontrolled hypertension detected by ambulatory blood pressure and home blood pressure measured by an all-in-one BP monitoring device: The HI-JAMP study. Masked hypertension is defined by office blood pressure (BP) in the controlled-BP range while out-of-office BP measured by ambulatory BP monitoring (ABPM) and home BP monitoring (HBPM) is in the uncontrolled range. However, diagnosis of masked hypertension may differ if assessed by different out-of-office BP indices. This study aims to investigate the diagnostic agreement of masked uncontrolled hypertension (MUHT) detected by ABPM indices (ABPM-MUHT) and HBPM indices (HBPM-MUHT) using the same all-in-one device (TM2441; A&D Company). The present study enrolled a total of 2322 treated hypertensive patients (males 53.2%, average age 69.2 ± 11.5 years) from the Home-Activity ICT-based Japan Ambulatory Blood Pressure Monitoring Prospective (HI-JAMP) Study, who consecutively underwent office BP monitoring, 24-h ABPM (at 30-min intervals), and 5-day HBPM (twice each morning and evening) using the same device. When out-of-office BP control status was assessed only by 24-h average SBP or by the average of morning and evening SBP, the diagnostic agreement of MUHT detected by ABPM and HBPM was 29.7% among the 445 patients with any type of MUHT. When out-of-office BP indices in each time-window were simultaneously assessed, the diagnostic agreement increased to 40-45.7%. Our results indicated the importance of assessing BPs at various times of day, especially morning hours, for perfect hypertension management. Diagnosis of masked hypertension only by an averaged BP index, without considering specific time-windows, might underestimate cardiovascular risk. abstract_id: PUBMED:36053278 Inconsistent Control Status of Office, Home, and Ambulatory Blood Pressure All Taken Using the Same Device: The HI-JAMP Study Baseline Data. Background: Inconsistencies between the office and out-of-office blood pressure (BP) values (described as white-coat hypertension or masked hypertension) may be attributable in part to differences in the BP monitoring devices used. Methods: We studied consistency in the classification of BP control (well-controlled BP vs. uncontrolled BP) among office, home, and ambulatory BPs by using a validated "all-in-one" BP monitoring device. In the nationwide, general practitioner-based multicenter HI-JAMP study, 2,322 hypertensive patients treated with antihypertensive drugs underwent office BP measurements and 24-hour ambulatory BP monitoring (ABPM), consecutively followed by 5-day home BP monitoring (HBPM), for a total of seven BP measurement days. Results: Using the thresholds of the JSH2019 and ESC2018 guidelines, the patients with consistent classification of well-controlled status in the office (<140 mmHg) and home systolic BP (SBP) (<135 mmHg) (n = 970) also tended to have well-controlled 24-hour SBP (<130 mmHg) (n = 808, 83.3%). The patients with the consistent classification of uncontrolled status in office and home SBP (n = 579) also tended to have uncontrolled 24-hour SBP (n = 444, 80.9%). Among the patients with inconsistent classifications of office and home BP control (n = 803), 46.1% had inconsistent ABPM-vs.-HBPM out-of-office BP control status. 
When the 2017 ACC/AHA thresholds were applied as an alternative, the results were essentially the same. Conclusions: The combined assessment of office and home BP is useful in clinical practice. Especially for patients whose office BP classification and home BP classification conflict, the complementary clinical use of both HBPM and ABPM might be recommended. abstract_id: PUBMED:21183853 Isolated clinical hypertension diagnosis: self-home BP, ambulatory BP monitoring, or both simultaneously? Unlabelled: Self-blood pressure (BP) measurement (SBPM) and ambulatory BP measurement (ABPM) are both suitable for diagnosing isolated clinical hypertension (ICH), or 'white-coat' hypertension. However, patients with ICH have a different cardiovascular risk according to the measurement technique used for the diagnosis. Objective: To describe the baseline cardiovascular risk of patients with hypertension and ICH according to SBPM and daytime ABPM. Methods: Six hundred and sixty-four newly diagnosed, never-treated patients with hypertension, with an average age of 59.3 years (standard deviation = 12.0), were included in this study (52% men). Clinical data, analytical data including urinary albumin excretion rate and estimated glomerular filtration rate, retinography, SBPM, and ABPM were obtained. Cardiovascular risk was estimated from the European Society of Hypertension and Systemic Coronary Risk Evaluation tables. Results: ICH prevalence varies according to the ambulatory measurement technique used: 24.2% with SBPM, 8.1% with daytime ABPM, and 5.2% if criteria are required from both techniques. In the 403 patients with hypertension who had both SBPM and ABPM, the percentage of patients with high or very high baseline cardiovascular risk falls progressively from 31.2% of patients with sustained hypertension to 20.0% of patients with ICH measured using SBPM, to 15.1% of patients with ICH measured using daytime ABPM, and to 9.5% of patients who present ICH using both techniques (P<0.005 for trend). Conclusion: The baseline results show that patients with hypertension and ICH according to both SBPM and daytime ABPM are those with the lowest baseline cardiovascular risk, and they support defining ICH on the basis of normal ambulatory readings using both techniques. abstract_id: PUBMED:24529535 Home blood pressure monitoring in CKD. Hypertension is common in patients with chronic kidney disease (CKD), and the prevalence increases with declining kidney function. Hypertension management is particularly important due to the increased risk of cardiovascular disease and stroke in the CKD population. Most clinical decisions for blood pressure (BP) management are based on BP readings in the office or dialysis unit. These BP readings are often inaccurate. Home BP monitoring provides more data than conventional clinic or dialysis-unit BP measurements, is relatively easy to accomplish, is cost-effective, and has been shown to have an increasing role in the management of BP in the CKD population. This In Practice article focuses on the use of home BP monitoring in patients with CKD. We also provide guidance for choosing a BP monitoring device and review recent literature regarding the use of home BP monitoring and its effect on CKD outcomes. In addition, we address the future use of electronic medical records and how they may interface with home BP monitoring.
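A small illustration of the classification logic in PUBMED:21183853: ICH requires elevated office BP with normal out-of-office readings, so requiring normal readings on both SBPM and daytime ABPM can only shrink the diagnosed group (24.2% vs. 8.1% vs. 5.2% in that study). The sketch below applies this set logic to invented readings from three patients, all of whom have elevated office BP; the 135/85 mmHg out-of-office threshold is a common convention, not a figure taken from the abstract.

```python
# Hypothetical out-of-office readings (systolic, diastolic) for three patients,
# all with elevated office BP. ICH = normal out-of-office readings.
patients = {
    "A": {"sbpm": (128, 80), "abpm_day": (126, 79)},
    "B": {"sbpm": (130, 82), "abpm_day": (138, 88)},
    "C": {"sbpm": (140, 90), "abpm_day": (128, 80)},
}

def normal(bp, threshold=(135, 85)):
    """True if both systolic and diastolic are below the out-of-office threshold."""
    return bp[0] < threshold[0] and bp[1] < threshold[1]

ich_sbpm = {p for p, r in patients.items() if normal(r["sbpm"])}
ich_abpm = {p for p, r in patients.items() if normal(r["abpm_day"])}
ich_both = ich_sbpm & ich_abpm   # stricter definition: both techniques normal

print("ICH by SBPM alone:", sorted(ich_sbpm))   # ['A', 'B']
print("ICH by ABPM alone:", sorted(ich_abpm))   # ['A', 'C']
print("ICH by both      :", sorted(ich_both))   # ['A'] -- the smallest group
```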
abstract_id: PUBMED:26022727 Moderator's view: Ambulatory blood pressure monitoring and home blood pressure for the prognosis, diagnosis and treatment of hypertension in dialysis patients. Major health agencies now recommend the systematic application of ambulatory blood pressure monitoring (ABPM) for the diagnosis of hypertension. Given the exceedingly high prevalence of nocturnal hypertension, masked and white-coat hypertension, and the overt inadequacy of peridialysis (pre-, intra- and post-dialysis) BP measurements, more extensive application of ABPM for the diagnosis of hypertension in dialysis patients would appear logical. In a recent survey performed in NDT Educational, organizational problems and/or cognitive resistance emerged as important factors hindering more extensive application of ABPM and home BP by nephrologists. External validation of observations made in landmark single-institution studies about hypertension subcategorization by ABPM is urgently needed. Furthermore, apparent cognitive resistance by nephrologists may be justified by the fact that these techniques have been insufficiently tested in the dialysis population for applicability in everyday clinical practice, tolerability, organizational impact and cost-effectiveness. We should be more resolute in abandoning peridialysis measurements for diagnosing and treating hypertension in haemodialysis patients. Home BP is a formidable educational instrument for patient empowerment and self-care, and evidence exists that this technique is superior to peridialysis values for improving hypertension control as defined on the basis of ABPM. We should strive to promote more extensive application of home BP monitoring to diagnose and manage hypertension in haemodialysis patients. Novel, user-friendly and better-tolerated ABPM techniques are to be expected in the near future. abstract_id: PUBMED:32973085 Prognostic Significance of Ambulatory BP Monitoring in CKD: A Report from the Chronic Renal Insufficiency Cohort (CRIC) Study. Background: Whether ambulatory BP monitoring is of value in evaluating risk for outcomes in patients with CKD is not clear. Methods: We followed 1502 participants of the Chronic Renal Insufficiency Cohort (CRIC) Study for a mean of 6.72 years. We evaluated, as exposures, ambulatory BP monitoring profiles (masked uncontrolled hypertension, white-coat effect, sustained hypertension, and controlled BP), mean ambulatory and clinic BPs, and diurnal variation in BP: reverse dipper (higher at nighttime), nondipper, and dipper (lower at nighttime). Outcomes included cardiovascular disease (a composite of myocardial infarction, cerebrovascular accident, heart failure, and peripheral arterial disease), kidney disease (a composite of ESKD or halving of the eGFR), and mortality. Results: Compared with having controlled BP, the presence of masked uncontrolled hypertension was independently associated with higher risk of the cardiovascular outcome and the kidney outcome, but not with all-cause mortality. Higher mean 24-hour systolic BP was associated with higher risk of the cardiovascular outcome, the kidney outcome, and mortality, independent of clinic BP. Participants with the reverse-dipper profile of diurnal BP variation were at higher risk of the kidney outcome. Conclusions: In this cohort of participants with CKD, BP metrics derived from ambulatory BP monitoring are associated with cardiovascular outcomes, kidney outcomes, and mortality, independent of clinic BP.
Masked uncontrolled hypertension and mean 24-hour BP were associated with high risk of cardiovascular disease and progression of kidney disease. Alterations of diurnal variation in BP are associated with high risk of progression of kidney disease, stroke, and peripheral arterial disease. These data support the wider use of ambulatory BP monitoring in the evaluation of hypertension in patients with CKD. Podcast: This article contains a podcast at https://www.asn-online.org/media/podcast/JASN/2020_09_24_JASN2020030236.mp3. abstract_id: PUBMED:36552968 Hypertension in Dialysis Patients: Diagnostic Approaches and Evaluation of Epidemiology. Whereas hypertension is an established cardiovascular risk factor in the general population, the contribution of increased blood pressure (BP) to the huge burden of cardiovascular morbidity and mortality in patients receiving dialysis continues to be debated. In large part, this controversy is attributable to particular difficulties in the accurate diagnosis of hypertension. The reverse epidemiology of hypertension in dialysis patients is based on evidence from large cohort studies showing that routine predialysis or postdialysis BP measurements exhibit a U-shaped or J-shaped association with cardiovascular or all-cause mortality. However, substantial evidence supports the notion that home or ambulatory BP measurements are superior to dialysis-unit BP recordings in diagnosing hypertension, in detecting evidence of target-organ damage and in prognosticating the all-cause death risk. In the first part of this article, we explore the accuracy of different methods of BP measurement in diagnosing hypertension among patients on dialysis. In the second part, we describe how the epidemiology of hypertension is modified when the assessment of BP is based on dialysis-unit versus home or ambulatory recordings. abstract_id: PUBMED:27057289 Practical Aspects of Home and Ambulatory Blood Pressure Monitoring. Out-of-office blood pressure (BP) monitoring is becoming increasingly important in the diagnosis and management of hypertension. Home BP and ambulatory BP monitoring (ABPM) are the two forms of monitoring BP in the out-of-office environment. Home BP monitoring is easy to perform, inexpensive, and engages patients in the care of their hypertension. Although ABPM is expensive and not widely available, it remains the gold standard for diagnosing hypertension. Observational studies show that both home BP and ABPM are stronger predictors of hypertension-related outcomes than office BP monitoring. There are no clinical trials showing their superiority over office BP monitoring in guiding the treatment of hypertension, but the consistency of the observational data makes a compelling case for their preferential use in clinical practice. abstract_id: PUBMED:29459219 Home blood pressure-guided antihypertensive therapy in chronic kidney disease: more data are needed. In the era of newly introduced hypertension guidelines recommending lower blood pressure (BP) targets for drug-treated hypertensives, the necessity for optimized management of hypertension becomes even more urgent. The concept of home BP-guided antihypertensive therapy has long been suggested as a simple and feasible approach to improve BP control rates and optimize the management of hypertension.
Home BP-guided antihypertensive therapy is particularly applicable to hypertensives with chronic kidney disease (CKD) for several reasons, including the following: (1) difficult-to-control BP and high BP variability in the CKD setting; (2) poor accuracy of office BP in determining hypertension control status and detecting "white-coat" and "masked" hypertension; (3) poor value of routine office BP recordings in predicting the longitudinal progression of target-organ damage; and (4) superiority of home BP over office BP recordings in prognosticating the risk of incident end-stage renal disease or death. The concept of home BP-guided antihypertensive therapy is even more relevant for those on hemodialysis, given the high intradialytic and interdialytic BP variability and the poor value of conventional peridialytic BP recordings in estimating the actual BP load recorded outside of dialysis with the use of home or ambulatory BP monitoring. Randomized trials comparing home BP-guided antihypertensive therapy versus usual care are warranted to prove the feasibility and effectiveness of this therapeutic approach and to convince clinicians to use home BP monitoring as the standard of care when managing hypertension, particularly in people with CKD or end-stage renal disease. abstract_id: PUBMED:36380199 Concept, study design, and baseline blood pressure control status of the nationwide prospective HI-JAMP study using multisensor ABPM. The Home-Activity Information and Communication Technology (ICT)-based Japan Ambulatory Blood Pressure Monitoring Prospective (HI-JAMP) study is a nationwide general practitioner-based cohort of medicated hypertensive patients that began in 2017. The primary objective of this study is to clarify which blood pressure (BP) measure is the best predictor of cardiovascular disease: office, home, or ambulatory BP. To this end, we used a newly developed ICT-based multisensor ambulatory BP monitoring (IMS-ABPM) device (TM-2441; A&D Company), which can also be used for home BP monitoring and is equipped with a high-sensitivity actigraph, a thermometer, and a barometer. This is the first study to evaluate three hemodynamic properties under resting home and active ambulatory conditions using the same device: (1) BP variability; (2) trigger-specific BP sensitivity to physical activity, temperature, and atmospheric pressure; and (3) central hemodynamics. A total of 2754 patients were enrolled, and 2731 patients with the required number of ambulatory BP readings (at least 6 daytime and 3 nighttime readings) were included in the final analysis. The overall patient group showed worse control of morning and nighttime BPs compared to office and daytime BPs, and the control rate was also poor among patients taking a higher number of antihypertensive drugs. In conclusion, the baseline data of the HI-JAMP study demonstrated that morning home and nighttime BP remain difficult to control even among patients taking two or more antihypertensive agents. By simultaneously considering office, home, and ambulatory BPs, the HI-JAMP study will contribute to the development of hypertension management strategies and new BP indices. Answer: The diagnosis of isolated clinical hypertension (ICH), also known as 'white-coat' hypertension, can be made using self-home blood pressure measurement (SBPM), ambulatory blood pressure measurement (ABPM), or both techniques simultaneously. The choice of method can influence the perceived cardiovascular risk of patients with ICH.
A study found that the prevalence of ICH varied according to the ambulatory measurement technique used: 24.2% with SBPM, 8.1% with daytime ABPM, and 5.2% if both techniques' criteria were required. The study also noted that patients with ICH diagnosed using both SBPM and daytime ABPM had the lowest baseline cardiovascular risk compared to those diagnosed with either method alone (PUBMED:21183853). In clinical practice, the combined assessment of office and home BP is useful, especially for patients with conflicting office BP and home BP classifications. For these patients, the complementary clinical use of both HBPM and ABPM might be recommended (PUBMED:36053278). Moreover, the HI-JAMP study baseline data indicated that the diagnostic agreement of masked uncontrolled hypertension detected by ABPM and HBPM increased when out-of-office BP indices in each time-window were simultaneously assessed (PUBMED:36229535). In summary, while both SBPM and ABPM are suitable for diagnosing ICH, using both techniques simultaneously may provide a more accurate assessment of a patient's cardiovascular risk and hypertension status. The combined use of these methods can help to define ICH based on normal ambulatory readings using both techniques, which may be particularly important for patients with conflicting BP control status between office and home measurements.
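On the agreement figures cited above (29.7% rising to 40-45.7% in the HI-JAMP analysis): percent agreement between two out-of-office methods, and its chance-corrected counterpart, Cohen's kappa, can be computed from a simple 2x2 cross-classification, as in the hedged sketch below. The counts are invented for illustration and are not taken from the study.

```python
# Hypothetical paired classifications of the same patients by ABPM and HBPM
# (1 = masked uncontrolled hypertension, 0 = controlled). Counts are invented.
#                 HBPM=1   HBPM=0
#   ABPM=1          40       60
#   ABPM=0          55      290
a, b, c, d = 40, 60, 55, 290
n = a + b + c + d

observed = (a + d) / n                        # raw percent agreement
p_abpm, p_hbpm = (a + b) / n, (a + c) / n     # marginal positive rates
expected = p_abpm * p_hbpm + (1 - p_abpm) * (1 - p_hbpm)  # chance agreement
kappa = (observed - expected) / (1 - expected)

print(f"observed agreement = {observed:.3f}")
print(f"Cohen's kappa      = {kappa:.3f}")
```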
Instruction: Screening protocols for group B streptococcus: are transport media appropriate? Abstracts: abstract_id: PUBMED:25249270 Comparative evaluation of 5 different selective media for Group B Streptococcus screening in pregnant women. We compared the performances and the cost-effectiveness of 5 selective media for Group B Streptococcus (GBS) screening in vaginal samples from pregnant women. The usefulness of these media for GBS screening is unquestionable; the choice will depend largely on the laboratory's organization. abstract_id: PUBMED:15108865 Screening protocols for group B streptococcus: are transport media appropriate? Objective: To evaluate group B streptococcus (GBS) detection in an in vitro setting, using a low and controlled inoculum from swabs directly inoculated into a selective medium, as compared to delayed inoculation following a period in a commercial Amies transport medium with charcoal (Venturi Transystem Copan, Italy). Study Design: Clinical isolates of GBS (n = 103) were inoculated into the Amies transport medium with charcoal at a concentration of 100 colony-forming units (cfu)/ml (10 cfu/swab). Swabs were then transferred to an enrichment broth (NPC) at time intervals of 0, 2, 4, 6 and 24 hours. Broths were then incubated for 18-24 hours at 35 degrees C in air before being transferred to New Granada Medium Modified (NGM) for GBS detection and incubated for a further 18-24 hours at 35 degrees C in air. If the characteristic orange pigmented colonies were observed after this period, the specimen was recorded as + (1-10 colonies) or ++ (more than 10 colonies). Results: Overall, 92.2% (95/103) of isolates were detected in all tubes and at all times. An additional two isolates were non-hemolytic, non-pigment-forming GBS. Of note, 3.9% (4/103) were negative until the 2-hour delayed inoculation, and 1.9% (2/103) gave inconsistent results, likely due to the low inoculum used. Conclusion: Delayed inoculation into selective enrichment broth following a period in transport medium, even with a low inoculum, gave a similar and acceptable GBS detection rate to direct inoculation. Hence, Amies transport medium with charcoal is an appropriate transport medium to use where it is not practical for clinical specimens to be directly inoculated into selective enrichment broth, as endorsed in the Centers for Disease Control (CDC) Guidelines, 2002. abstract_id: PUBMED:31020650 Compliance with screening for and recommended management of maternal group B streptococcus carriage in pregnancy. Background: Universal screening of pregnant women at 35-37 weeks' gestation is recommended for detection of anogenital group B streptococcus carriage. Intrapartum chemoprophylaxis is prescribed to carriers to prevent transmission to babies, reducing early-onset neonatal group B streptococcal sepsis. Aims: To review compliance with, and the effects of education on, group B streptococcus screening and intrapartum chemoprophylaxis practices at The Royal Women's Hospital, Melbourne, Australia. Materials And Methods: A retrospective audit of women delivering in February 2016 and February-March 2017 was conducted. In February 2017, updated early-onset group B streptococcal disease prevention guidelines were released and promoted with targeted education of clinical staff. Compliance was considered appropriate if practices followed up-to-date local protocols.
Results: The screening rate for group B streptococcus was 84.4% (599/710) and the carriage rate 19.5% (109/558), while intrapartum antibiotic prophylaxis was optimal in 83% of those labouring for more than four hours (39/47). There was no significant difference in compliance between 2016 and 2017. Of 113 women with unknown group B streptococcal status at delivery, only five of 33 (15%) with clinical risk factors for early-onset neonatal disease received intrapartum prophylaxis. Conclusions: Compliance remained stable, with no change during or after implementation of the new protocols. Compliance with protocols was low for cases with unknown group B streptococcal status at delivery but with one or more clinical risk factors for early-onset group B streptococcal sepsis. abstract_id: PUBMED:33394277 Group B Streptococcus Screening Guidelines in Pregnancy: A Critical Review of Compliance. Introduction: Colonization with Group B Streptococcus in pregnancy is a major risk factor for neonatal infection. Universal screening for maternal streptococcal colonization and the use of intrapartum antibiotic prophylaxis have resulted in substantial reductions of neonatal early-onset Group B Streptococcus disease. To achieve the best neonatal outcomes, it is imperative for maternity healthcare providers to adhere to screening and management guidelines. Aim: This literature review uses a systematic approach and aims to provide a synthesis of what is known about compliance with Group B Streptococcus screening protocols in a variety of global settings, including maternity homes, private obstetric practice, and hospital clinical environments. Methods: The review was carried out using electronic databases as well as hand-searching of reference lists. Included papers reported primarily on compliance with Group B Streptococcus screening guidelines, potential factors which influence compliance rates, and implementations and outcomes of interventions. Results: Six international studies were retained, all of which focused on adherence to Group B Streptococcus screening guidelines and demonstrated that different factors, such as financial aspects and high caesarean section rates, might influence adherence to GBS screening protocols. The finding of relatively low compliance rates led to recognition of the need to develop improved strategies for optimising antenatal GBS screening adherence. Conclusion: Adhering to Group B Streptococcus screening guidelines to prevent neonatal infection is crucial. Various factors, such as financial aspects and high proportions of caesarean sections, influence compliance rates. The implementation of strategies and different forms of education can result in improved compliance rates. abstract_id: PUBMED:8127527 An analysis of the cost-effectiveness of selected protocols for the prevention of neonatal group B streptococcal infection. Objective: To determine the expected neonatal outcome in a hypothetical obstetric population with various screening and intrapartum management protocols for the prevention of neonatal group B streptococcal infections. Methods: We used decision analysis to investigate the performance of various antenatal and intrapartum group B streptococcal screening protocols combined with selective or universal intrapartum antibiotic prophylaxis in preventing neonatal early-onset group B streptococcal disease. Population characteristics, screening test performance, and treatment efficacy profiles were abstracted from previous investigations.
Results: In the absence of screening or treatment, the estimated neonatal attack rate and mortality rate were 3.6 and 0.60 cases per 1000 neonates, respectively. Universal antenatal screening at 26-28 weeks' gestation and selective intrapartum prophylaxis (treatment of colonized women with risk factors) or nonselective prophylaxis (treatment of all colonized women) had attack rates of 1.64 and 1.08 and mortality rates of 0.21 and 0.18 cases per 1000 neonates, respectively. Rapid intrapartum screening and selective prophylaxis had an estimated attack rate of 1.92-2.58 and a mortality rate of 0.26-0.40 cases per 1000 neonates, with results dependent upon the time required to obtain test results. Similarly, rapid intrapartum screening and nonselective prophylaxis had an attack rate of 1.44-2.30 and a mortality rate of 0.24-0.38 cases per 1000 neonates. Empirical prophylaxis of all women delivering prematurely, regardless of culture status, had an attack rate of 2.40 and a mortality rate of 0.36 cases per 1000 neonates. Conclusions: Under ideal circumstances, antenatal group B streptococcal screening at 26-28 weeks' gestation with vaginal-rectal cultures and subsequent intrapartum prophylaxis is associated with lower projected neonatal attack and mortality rates than protocols using rapid intrapartum screening methods. However, in a non-research population, difficulties frequently encountered with antenatal screening procedures may minimize these differences. abstract_id: PUBMED:12150749 Screening and management protocols for group B streptococcus in pregnancy. Infection with Group B streptococcus (Streptococcus agalactiae) (GBS) is a leading cause of neonatal morbidity and mortality. Screening by antepartum cultures or by risk factors and intrapartum antibiotics has been shown to reduce the risk of early-onset GBS disease in newborns, but controversy exists about the best approach for screening and treatment. Currently recommended protocols do not prevent all cases of early-onset GBS disease. Intrapartum antibiotics are not without harm, and currently recommended protocols will result in large numbers of women being treated who would not have benefited from treatment. In this article, we review the advantages and disadvantages of currently recommended protocols. abstract_id: PUBMED:31608675 Effect of a media intervention on hepatitis B screening among Vietnamese Americans. Objective: There is a lack of controlled studies of community-wide interventions to increase screening for hepatitis B (HBV) among Asian Americans, particularly Vietnamese Americans, who disproportionately suffer from HBV-related illnesses. The objective of our study was to develop, implement, and evaluate the effectiveness of a media campaign to promote HBV screening among Vietnamese Americans. Design: We designed and implemented a three-year media campaign promoting HBV screening among Vietnamese Americans. Evaluation consisted of cross-sectional pre- and post-intervention population-based telephone surveys of Vietnamese American adults aged 18-64 who spoke English or Vietnamese and lived in the Northern California (intervention) or Greater Washington, D.C. (comparison) communities in 2007 or 2011. Statistical analysis was completed in 2012. The main outcome was self-report of HBV testing, defined as participants answering 'Yes' to the question: 'Have you ever had a blood test to check for hepatitis B?' Results: The sample sizes at pre- and post-intervention were 1,704 and 1,666, respectively.
Both communities reported increased exposure to HBV-related booklets, radio and television advertisements, and websites. Only the intervention community reported increased exposure to newspaper elements. HBV screening increased in both communities (intervention: 65.3% to 73.1%, p < 0.01, comparison: 57.7% to 66.0%, p < 0.01). In multivariable analyses, there was no intervention effect. In both communities, exposure to media elements (Odds Ratio 1.26 [95% Confidence Interval: 1.21, 1.31] for each additional element) was significantly associated with screening.Conclusions: Among Vietnamese Americans in 2 large communities, HBV screening rates were sub-optimal. Screening increased in both the intensive media intervention and comparison communities, and exposure to HBV-related media messages was associated with increased screening. Efforts to address HBV screening among Vietnamese Americans should include mass media messaging. abstract_id: PUBMED:31913562 Universal screening versus risk-based protocols for antibiotic prophylaxis during childbirth to prevent early-onset group B streptococcal disease: a systematic review and meta-analysis. Background: Early-onset group B streptococcal (EOGBS) disease (including sepsis, meningitis, and pneumonia) causes significant morbidity and mortality in newborn infants worldwide. Antibiotic prophylaxis can prevent vertical streptococcal transmission, yet no uniform criteria exist to identify eligible women for prophylaxis. Some guidelines recommend universal GBS screening to pregnant women in their third trimester (screening-based protocol), whereas others employ risk-based protocols. Objectives: To compare the effectiveness of screening-based versus risk-based protocols in preventing EOGBS disease. Search Strategy: Key words for the database searches included GBS, Streptococcus agalactiae, pregnancy, screening, culture-based, risk-based. Selection Criteria: Studies were included if they investigated EOGBS disease incidence in newborn infants and compared screening or risk-based protocols with each other or with controls. Data Collection And Analysis: Risk ratios (RR) and 95% confidence intervals (CI) were determined using Mantel-Haenszel analyses with random effects. Main Results: Seventeen eligible studies were included. In this meta-analysis, screening was associated with a reduced risk for EOGBS disease compared either with risk-based protocols (ten studies, RR 0.43, 95% CI 0.32-0.56) or with no policy (four studies, RR 0.31, 95% CI 0.11-0.84). Meta-analysis could not demonstrate a significant effect of risk-based protocols versus no policy (seven studies, RR 0.86, 95% CI 0.61-1.20). In studies reporting on the use of antibiotics, screening was not associated with higher antibiotic administration rates (31 versus 29%). Conclusions: Screening-based protocols were associated with lower incidences of EOGBS disease compared with risk-based protocols, while not clearly overexposing women to antibiotics. This information is of relevance for future policymaking. Tweetable Abstract: Meta-analysis: general screening is associated with lower rates of early-onset group B strep. neonatal sepsis compared with risk-based protocols. abstract_id: PUBMED:7677673 Prevention of neonatal group B streptococcal sepsis: is routine antenatal screening appropriate. 
Four strategies for prevention of early-onset neonatal group B streptococcal (GBS) sepsis were considered: A: routine antenatal screening for GBS vaginal carriage at 26-28 weeks' gestation and intrapartum antibiotic prophylaxis for all carriers; B: screening as above and prophylaxis only for carriers with risk factors for sepsis; C: prophylaxis for all women with risk factors; D: as for C, plus screening at 37 weeks' gestation and prophylaxis for carriers. The outcomes considered for each option were: the proportion of women given prophylaxis; the risk of anaphylaxis; cases of neonatal GBS sepsis and deaths prevented; and the costs of screening, prophylaxis and acute care of remaining cases. Published local and overseas studies of neonatal GBS sepsis, the effectiveness of antenatal screening and prophylaxis, and estimated costs were evaluated. Any of the proposed strategies can prevent a significant proportion of cases of neonatal GBS sepsis, and a strategy for prevention of neonatal group B streptococcal sepsis should be part of routine obstetric practice. Strategy C is simple, effective, inexpensive and avoids unnecessary antibiotic use; it is recommended particularly when antenatal care is provided mainly in community or private practice. Strategy A (using vaginal and rectal swabs for screening) could prevent more cases, but at greater cost, which could be justified only if protocols can be properly implemented and monitored. abstract_id: PUBMED:22760404 Hepatitis B reactivation following immunosuppressive therapy and screening adherence: a neglected problem? Background: Current guidelines of the Centers for Disease Control and Prevention (CDC) recommend routine screening for hepatitis B before cytotoxic or immunosuppressive therapies are initiated. The national German guideline "Prophylaxis, diagnosis and therapy of hepatitis B virus infection" is in line with the CDC recommendations and underscores general HBV screening before immunosuppression is induced. However, screening adherence and acceptance of these guidelines vary across oncological specialities. To assess HBV screening adherence, a retrospective study was performed. Patients And Methods: Data from 140 patients were analyzed retrospectively; 37 case records did not meet the inclusion criteria. Patients diagnosed with breast cancer (n = 43) and Hodgkin's disease (n = 14) requiring chemotherapy were included, as well as patients receiving allogenic stem cell transplantation (SCTx) therapy (n = 22) or transarterial chemoembolization (TACE) therapy of the liver (n = 24). All included case records were reviewed regarding HBV and HCV serology. Results: In the TACE group, three patients were screened for HBsAg. Four patients with breast cancer and five patients in the Hodgkin disease group were screened for HBsAg. In contrast, screening adherence was 100% in the group of patients receiving allogenic stem cell transplantation therapy (n = 22). Conclusion: Apart from patients with allogenic stem cell transplantation, only some patients receiving immunosuppressive therapies had been screened for HBV infection. Our data indicate that standardized checklists may improve HBV screening prior to immunosuppressive therapies. Such clinical structures have led to almost optimal screening adherence in the high-risk group of allogenic SCTx patients. Answer: The evaluation of screening protocols for Group B Streptococcus (GBS) in pregnant women includes assessing whether transport media are appropriate for GBS detection.
A study evaluated GBS detection in an in vitro setting using a low, controlled inoculum, comparing swabs directly inoculated into a selective medium with swabs inoculated after a period in a commercial Amies transport medium with charcoal (Venturi Transystem Copan, Italy). It found that delayed inoculation into selective enrichment broth following a period in transport medium, even with a low inoculum, gave a similar and acceptable GBS detection rate to direct inoculation. This suggests that Amies transport medium with charcoal is an appropriate transport medium to use when it is not practical for clinical specimens to be directly inoculated into selective enrichment broth, as endorsed in the Centers for Disease Control (CDC) Guidelines, 2002 (PUBMED:15108865). Therefore, based on the available evidence from this study, transport media, specifically Amies transport medium with charcoal, are appropriate for use in GBS screening protocols when direct inoculation into selective enrichment broth is not feasible. This finding supports the use of transport media as a practical component of GBS screening protocols in clinical settings.
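For context on the detection rate cited above (92.2%, i.e. 95 of 103 isolates), the sketch below computes a Wilson score 95% confidence interval for that proportion. The interval is our own illustrative calculation; the source abstract does not report one.

```python
# Wilson score 95% CI for the reported GBS detection proportion (95/103).
# The interval is illustrative; the source abstract does not report one.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

low, high = wilson_ci(95, 103)
print(f"detection rate = {95/103:.1%}, 95% CI ({low:.1%}, {high:.1%})")
```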
Instruction: Can osteoprotegerin be used to identify the presence and severity of coronary artery disease in different clinical settings? Abstracts: abstract_id: PUBMED:25104079 Can osteoprotegerin be used to identify the presence and severity of coronary artery disease in different clinical settings? Purpose: The biomarker osteoprotegerin (OPG) is associated with coronary artery disease (CAD). The main purpose of this study was to evaluate the diagnostic value of OPG in healthy subjects and in patients with suspected angina pectoris (AP). Methods: A total of 1805 persons were enrolled: 1152 healthy subjects and 493 patients with suspected AP. For comparison, 160 patients with acute myocardial infarction (MI) were included. To uncover subclinical coronary atherosclerosis, a non-contrast cardiac CT scan was performed in the healthy subjects, while in patients with suspected AP contrast coronary angiography was used to detect significant stenosis. OPG concentrations were analyzed and compared between groups. ROC analyses were performed to estimate OPG cut-off values. Results: OPG concentrations increased according to disease severity, with the highest levels found in patients with acute MI. No significant difference (p = 0.97) in OPG concentrations was observed between subgroups of healthy subjects according to severity of coronary calcifications. A significant difference (p < 0.0001) in OPG concentrations was found between subgroups of patients with suspected stable AP according to severity of CAD. ROC analysis showed an AUC of 0.62 (95% CI: 0.57-0.67). The optimal cut-off value of OPG (<2.29 ng/mL) had a sensitivity of 56.2% (95% CI: 49.2-63.0%) and a specificity of 62.9% (95% CI: 57.3-68.2%). Conclusion: OPG cannot be used to differentiate between healthy subjects with low versus high levels of coronary calcifications. In patients with suspected AP, a single OPG measurement is of limited use in the diagnosis of CAD. abstract_id: PUBMED:19014006 Increased plasma osteoprotegerin levels are associated with the presence and severity of acute coronary syndrome. Objective: The objective of this study was to explore the relationship between increased plasma osteoprotegerin (OPG) levels and acute coronary syndrome (ACS). Methods: Plasma OPG levels from 85 subjects undergoing coronary artery angiography in three different groups, including ACS (n=45), stable angina pectoris (SAP) (n=20) and normal coronary artery (NCA) (n=20), were detected by ELISA. Twenty-two ascending aorta specimens were surgically taken from 8 ACS, 7 SAP and 7 NCA patients, and OPG mRNA expression in the specimens was detected by RT-PCR. In addition, 10 coronary artery sections each were selected from autopsy archives for the presence of vulnerable atherosclerotic plaques (VP), stable plaques (SP) or no plaques (NP), and OPG protein expression in the sections was detected by immunohistochemistry. Results: Plasma OPG concentrations in the ACS group were significantly higher than those in the SAP or NCA group. The levels of plasma OPG in the 1-, 2- and 3-vessel disease subgroups of ACS were increasingly higher (P < 0.05 or 0.01). Multiple logistic regression analyses revealed a significant independent relation between plasma OPG concentration and the presence of ACS (P = 0.032, odds ratio = 1.006). Ascending aorta specimens from the ACS group had greater OPG mRNA expression than those from the NCA or SAP group (P < 0.01).
Sections with VP had markedly higher OPG expression than sections with SP or NP (P < 0.05 and P < 0.01, respectively). Conclusions: Increased plasma osteoprotegerin levels are associated with the presence and severity of acute coronary syndrome. abstract_id: PUBMED:12208791 Serum osteoprotegerin levels are associated with the presence and severity of coronary artery disease. Background: Osteoprotegerin (OPG) is a secretory glycoprotein that belongs to the tumor necrosis factor receptor family. OPG-deficient mice develop severe osteoporosis and medial arterial calcification of the aorta and renal arteries. OPG immunoreactivity has been demonstrated in normal blood vessels and in early atherosclerotic lesions. A recent clinical study suggests that there is a significant correlation between elevated serum OPG levels and cardiovascular mortality. We examined whether serum OPG levels are associated with the progression of coronary artery disease (CAD). Methods And Results: Serum OPG levels were examined in 201 patients who underwent coronary angiography because of stable chest pain. The number of diseased vessels was used to represent the severity of CAD. Serum OPG levels were measured by ELISA and were significantly greater in patients with significant stenosis of the coronary arteries than in those without stenosis. As the severity of CAD increased, there was a significant increase in serum OPG levels. Serum OPG levels were 0.94+/-0.34, 1.04+/-0.38, 1.19+/-0.38, and 1.44+/-0.54 ng/mL (medians 0.91, 0.99, 1.09, and 1.37) for the subjects with normal coronary arteries or luminal irregularities, 1-vessel disease, 2-vessel disease, and 3-vessel disease, respectively. Multivariate logistic regression analysis revealed that serum OPG levels were significantly associated with the presence of CAD [odds ratio, 5.2; 95% confidence interval, 1.7 to 16.0]. Conclusions: Our data show that serum OPG levels are associated with the presence and severity of CAD, suggesting that OPG may be involved in the progression of CAD. abstract_id: PUBMED:26037625 Cardiovascular risk in chronic kidney disease patients: intima-media thickness predicts the incidence and severity of histologically assessed medial calcification in radial arteries. Background: The objective of the study was to determine the relationship between common carotid artery intima-media thickness (CCA-IMT) and histologically assessed calcification of the radial artery in relation to clinical features and laboratory markers of bone and mineral metabolism, inflammation, and oxidative stress in patients with stage 5 chronic kidney disease (CKD). Methods: The study comprised 59 patients (36 hemodialyzed, 23 predialysis). CCA-IMT was measured by ultrasonography; the biochemical parameters examined were assessed using routine laboratory methods, ELISA micro-plate immunoassays and spectrophotometry. Fragments of the radial artery obtained during creation of hemodialysis access were cryosectioned and stained for calcifications using the von Kossa method and alizarin red. Results: Glucose, osteoprotegerin, pentraxin 3 and Framingham risk score significantly correlated with CCA-IMT. In multiple regression analysis, OPG positively predicted CCA-IMT. Radial artery calcifications were found in 34 patients, who showed higher CCA-IMT (0.98 ± 0.13 vs 0.86 ± 0.14 mm; P = 0.006). Higher CCA-IMT values were also associated with more advanced calcifications.
CCA-IMT and the presence of plaques in the common carotid artery were positive predictors of radial artery calcifications, independent of dialysis status, Framingham risk score, CRP and Ca x Pi [OR for calcifications 2.19 (1.08-4.45) per 0.1 mm increase in CCA-IMT]. The presence of radial artery calcifications was a significant predictor of mortality, independent of dialysis status and Framingham risk score [HR 3.16 (1.03-9.64)]. Conclusions: In CKD patients, CCA-IMT examination can be used as a surrogate measure to assess the incidence and severity of arterial medial calcification, which is associated with poor clinical outcome in these patients. abstract_id: PUBMED:22640692 Serum osteoprotegerin and osteopontin levels are associated with arterial stiffness and the presence and severity of coronary artery disease. Background: Osteopontin (OPN) and osteoprotegerin (OPG) have recently emerged as key factors in both vascular remodeling and the development of atherosclerosis. Arterial stiffness has independent predictive value for cardiovascular events. We evaluated the relationship between serum OPG and OPN levels and vascular function in coronary artery disease (CAD) patients. Methods: The study population consisted of 409 subjects (280 with CAD and 129 without CAD). Carotid-femoral pulse wave velocity (PWV) was measured as an index of aortic stiffness. OPG and OPN levels were measured by ELISA as markers of vascular remodeling and calcification. The Gensini score was used to evaluate the extent of CAD. Results: CAD patients, compared to those without CAD, had higher OPG (3.91 ± 1.87 pmol/l vs. 2.88 ± 1.32 pmol/l, p<0.001) and logOPN levels (1.81 ± 0.18 ng/ml vs. 1.71 ± 0.24 ng/ml, p<0.001) and higher PWV (8.94 ± 2.21 m/s vs. 8.28 ± 1.91 m/s, p=0.006). Furthermore, PWV was associated with serum OPG levels (r=0.19, p<0.001) and with serum logOPN levels (r=0.10, p=0.049). Multivariate linear regression analysis revealed that increased OPG (p=0.013) and logOPN (p=0.006) levels are associated with 3-vessel CAD and with the Gensini score (p=0.04 for OPG and p=0.09 for OPN), independently of other known cardiovascular risk factors. Conclusion: The present study revealed that serum OPG and OPN levels are positively associated with arterial stiffness and with the extent of CAD. These preliminary results suggest that OPG and OPN levels are significantly correlated with vascular function, contributing to the pathogenesis of atherosclerosis in CAD. Further studies are needed to explore the mechanisms of action of OPG and OPN in CAD. abstract_id: PUBMED:22936311 Giant cell tumor occurring in familial Paget's disease of bone: report of clinical characteristics and linkage analysis of a large pedigree. Neoplastic degeneration represents a rare but serious complication of Paget's disease of bone (PDB). Although osteosarcomas have been described in up to 1% of PDB cases, giant cell tumors are less frequent and mainly occur in patients with polyostotic disease. We recently characterized a large pedigree with 14 affected members, of whom four developed giant cell tumors at pagetic sites. The high number of affected subjects across multiple generations allowed us to better characterize the clinical phenotype and look for possible susceptibility loci. Of interest, all the affected members had polyostotic PDB, but subjects developing giant cell tumors showed increased disease severity, with a reduced clinical response to bisphosphonate treatment and an increased prevalence of bone pain, deformities, and fractures.
Together with an increased occurrence of common pagetic complications, affected patients in this pedigree also showed a fivefold higher prevalence of coronary artery disease with respect to either the unaffected family members or a comparative cohort of 150 unrelated PDB cases from the same geographical area. This association was further enhanced in the four cases with PDB and giant cell tumors, all of whom developed coronary artery disease before 60 years of age. Despite the early onset and the severe phenotype, PDB patients from this pedigree were negative for the presence of SQSTM1 or TNFRSF11A mutations, previously associated with enhanced disease severity. Genome-wide linkage analysis identified six possible candidate regions on chromosomes 1, 5, 6, 8, 10, and 20. Because the chromosome 8 and 10 loci were next to the TNFRSF11B and OPTN genes, we extended the genetic screening to these two genes, but we failed to identify any causative mutation at either the genomic or the transcriptional level, suggesting that a different genetic defect is associated with PDB and potentially giant cell tumor of bone in this pedigree. abstract_id: PUBMED:15569000 Relationship of serum osteoprotegerin levels with coronary artery disease severity, left ventricular hypertrophy and C-reactive protein. OPG (osteoprotegerin) is an inhibitor of osteoclastogenesis, and recent work suggests it has a role in atherosclerosis. Therefore, we measured serum OPG levels in patients with coronary artery disease, compared the serum OPG levels among the different groups according to the number of stenotic vessels, and determined whether there was any correlation with aortic calcification, LV (left ventricular) mass index and serum CRP (C-reactive protein) levels. Subjects (n=100; mean age, 57 years) who underwent coronary angiograms were enrolled. Blood pressure, body mass index, fasting blood glucose, lipid profiles and CRP levels were measured, and the LV mass indices were calculated from echocardiograms. Serum OPG levels were measured by ELISA. The presence of calcification in the aortic notch was checked by a chest X-ray. The subjects were divided into four groups according to the number of stenotic vessels. The mean serum OPG levels increased significantly as the number of stenotic vessels increased, and the mean serum OPG levels were higher in the group with three-vessel disease compared with the groups with no- or one-vessel disease. The mean serum CRP level was significantly higher in the group with three-vessel disease compared with the groups with no-, one- and two-vessel disease. Age and LV mass index showed significant positive correlations with serum OPG levels, although significance was lost after adjustment for age. Serum CRP levels were positively correlated with serum OPG levels even after adjustment for age. There were no differences in serum OPG levels according to the presence of fasting hyperglycaemia or aortic calcification. In conclusion, serum OPG level was related to the severity of stenotic coronary arteries and to serum CRP levels. LV mass indices showed no significant correlation with OPG levels. The precise mechanism for the role of OPG in atherosclerosis needs to be investigated further. abstract_id: PUBMED:26753467 Osteoprotegerin and Vascular Calcification: Clinical and Prognostic Relevance. Osteoprotegerin (OPG) is a key regulator of bone metabolism that also has effects on the vascular system.
Studies suggest that osteoprotegerin is a critical inhibitor of arterial calcification and is released by endothelial cells as a protective mechanism for their survival in certain pathological conditions, such as diabetes mellitus, chronic kidney disease, and other metabolic disorders. This has been shown in in vitro studies and in animal models. The discovery that OPG-deficient mice (OPG -/- mice) develop severe osteoporosis and arterial calcification has led to the conclusion that osteoprotegerin might be a molecule linking the vascular and skeletal systems. Paradoxically, however, clinical studies have recently shown that serum OPG levels are increased in coronary artery disease and correlate with its severity, with ischemic cardiac decompensation, and with future cardiovascular events. Therefore, it is possible that osteoprotegerin could have a new function as a potential biomarker for the early identification and monitoring of patients with cardiovascular disease. Moreover, osteoprotegerin is associated with well-known atherosclerosis risk factors: its relationship with age, smoking and diabetes mellitus is firmly established. There is also evidence linking hyperlipoproteinemia with increased serum levels of osteoprotegerin. Research has also been directed at the genetic level, linking certain single-nucleotide polymorphisms of osteoprotegerin with the appearance of vascular calcification. This review emphasises the multifactorial role of OPG, presenting numerous clinical and experimental studies regarding its role in vascular pathology, suggesting it as a novel biomarker in cardiovascular disease, and summarizing the latest conclusions about this interesting topic, which needs to be further explored. abstract_id: PUBMED:32077762 Association of serum FGF-23, klotho, fetuin-A, osteopontin, osteoprotegerin and hs-CRP levels with coronary artery disease. Environmental, genetic, oxidative and biochemical factors play an important role in the atherosclerotic process. We investigated the association of serum fibroblast growth factor-23 (FGF-23), klotho, fetuin-A, osteoprotegerin (OPG), osteopontin (OPN) and high-sensitivity CRP (hs-CRP) markers with coronary artery disease and whether any one was superior to the others. A study group of 52 patients with coronary artery disease (CAD) and a control group of 30 patients with angiographically normal epicardial coronary arteries were included in the study. Serum FGF-23, klotho, fetuin-A, OPN, OPG and hs-CRP marker levels were studied. Patients with CAD were classified into two groups as low (SYNTAX ≤22, n = 29) and moderate-high (SYNTAX ≥23, n = 23) according to the anatomic SYNTAX score. FGF-23 (p = .033), klotho (p < .001), fetuin-A (p = .005) and OPG (p = .001) serum marker levels were significantly lower in CAD patients than in the control group. Serum levels of FGF-23 (p = .012), klotho (p = .001), fetuin-A (p = .015) and OPG (p = .002) were significantly different between the SYNTAX tertiles and the control group. Klotho (p = .025, odds ratio (OR) = 0.542, 95% confidence interval (CI): 0.317-0.926) and hypertension (p = .004, OR = 34.598, 95% CI: 1.054-1135.657) were the independent predictors of CAD presence. A serum klotho level of 91.48 pmol/L predicted the presence of CAD with 60% sensitivity and 96.55% specificity (p < .001, area under the curve = 0.864, 95% CI = 0.768-0.931). We found that the serum klotho level is an independent predictor of the presence, extent and severity of CAD.
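The klotho result above (cutoff 91.48 pmol/L, 60% sensitivity, 96.55% specificity, AUC 0.864) is an example of choosing a diagnostic threshold from continuous marker values. A minimal sketch of one standard approach, scanning candidate cutoffs and maximizing the Youden index (sensitivity + specificity - 1), is shown below; the marker values are invented, and because klotho runs lower in CAD patients in that study, values at or below the cutoff are treated as test-positive.

```python
# Choosing a diagnostic cutoff by maximizing the Youden index.
# Invented klotho values (pmol/L); lower values indicate CAD in the source study,
# so a value <= cutoff is treated as a positive (CAD-predicting) test.
cad    = [45, 60, 72, 80, 88, 90, 95, 110]        # patients with CAD (hypothetical)
no_cad = [85, 96, 105, 120, 130, 140, 150, 160]   # controls (hypothetical)

best = None
for cutoff in sorted(set(cad + no_cad)):
    sens = sum(v <= cutoff for v in cad) / len(cad)
    spec = sum(v > cutoff for v in no_cad) / len(no_cad)
    youden = sens + spec - 1
    if best is None or youden > best[0]:
        best = (youden, cutoff, sens, spec)

youden, cutoff, sens, spec = best
print(f"cutoff <= {cutoff} pmol/L: sensitivity {sens:.0%}, "
      f"specificity {spec:.0%}, Youden J = {youden:.2f}")
```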
abstract_id: PUBMED:23788300 The value of serum osteoprotegerin levels in patients with angina-like chest pain undergoing diagnostic coronary angiography. Background: Osteoprotegerin (OPG) is a member of the tumor necrosis factor superfamily. Recent evidence supports a relationship between serum OPG level and atherosclerosis. The aim of this study was to evaluate the possible association of OPG with the presence of coronary artery disease (CAD), its severity and prognosis in patients with chest pain and suspected coronary stenosis. Methods: In this cross-sectional analytic study, 180 candidates for elective coronary angiography were recruited. The serum level of OPG was measured by ELISA in all patients, and its relation with the presence and severity of CAD, based on a coronary atherosclerosis score (CAS), was assessed. Patients were followed for a mean period of about 24 ± 3.2 months, and the relationship between OPG levels and future cardiac events was evaluated. Results: The mean serum level of OPG was 1637 ± 226 pg/mL in those with CAD and 1295 ± 185 pg/mL (nonparametric p = 0.001) in those without it. There was a significant direct correlation between the level of serum OPG and CAS (rho = 0.225, p = 0.002). The optimal cut-off point for predicting significant coronary artery obstruction was a serum level of ≥ 1412 pg/mL, with a sensitivity and specificity of 60% and 57.8%, respectively. Major adverse cardiac events (MACE), including cardiovascular death, admission with acute coronary syndrome, or heart failure, were significantly more frequent in those with higher OPG levels (22 [34.3%] vs. 15 [16%], p = 0.012). Conclusions: There was a direct and significant correlation between the serum level of OPG and CAS. MACE occurred more commonly in those with higher baseline OPG levels. Answer: Osteoprotegerin (OPG) has been studied as a potential biomarker for the presence and severity of coronary artery disease (CAD) in various clinical settings. Several studies have found that increased levels of OPG are associated with the presence and severity of CAD. For instance, one study found that serum OPG levels were significantly greater in patients with significant stenosis of the coronary arteries than in those without stenosis, and as the severity of CAD increased, there was a significant increase in serum OPG levels (PUBMED:12208791). Another study reported that plasma OPG concentrations in the acute coronary syndrome (ACS) group were significantly higher than those in the stable angina pectoris (SAP) or normal coronary artery (NCA) group, and the levels of plasma OPG in the 1-, 2- and 3-vessel disease subgroups of ACS were increasingly higher (PUBMED:19014006). Furthermore, serum OPG levels have been found to be positively associated with arterial stiffness and with the extent of CAD, suggesting that OPG may contribute to the pathogenesis of atherosclerosis in CAD (PUBMED:22640692). In patients with chest pain and suspected coronary stenosis, a significant direct correlation was observed between the level of serum OPG and coronary atherosclerosis score (CAS), and major adverse cardiac events (MACE) were more common in those with higher baseline OPG levels (PUBMED:23788300). However, the diagnostic value of OPG is not consistent across all studies.
One study concluded that OPG cannot be used to differentiate between healthy subjects with low versus high levels of coronary calcifications, and in patients with suspected angina pectoris, a single OPG measurement is of limited use in the diagnosis of CAD (PUBMED:25104079). Additionally, while OPG has been associated with the presence and severity of CAD, it is also influenced by other factors such as age, smoking, and diabetes mellitus, and its role in vascular pathology is multifactorial (PUBMED:26753467). In summary, while there is evidence to suggest that OPG is associated with the presence and severity of CAD, its use as a diagnostic biomarker may be limited and should be considered in conjunction with other clinical factors and biomarkers. Further research is needed to fully understand the role of OPG in CAD and its potential as a clinical tool.
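One way to see why a marker with the discrimination reported for OPG (AUC 0.62, sensitivity 56.2%, specificity 62.9% in PUBMED:25104079) is of limited stand-alone diagnostic use is to translate those figures into post-test probabilities at different pre-test probabilities of disease. The sketch below does this with Bayes' rule; the prevalence values are illustrative assumptions, not figures from the studies above.

```python
# Post-test probabilities (PPV/NPV) from sensitivity, specificity and prevalence,
# using the OPG figures reported in PUBMED:25104079 (sens 56.2%, spec 62.9%).
sens, spec = 0.562, 0.629

def predictive_values(prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
    return ppv, npv

for prev in (0.10, 0.30, 0.50):   # illustrative pre-test probabilities of CAD
    ppv, npv = predictive_values(prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

Even at 50% prevalence, a positive test raises the probability of CAD only modestly, which is consistent with the conclusion that a single OPG measurement has limited diagnostic value.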
Instruction: Blunt splenic trauma in adults: can CT findings be used to determine the need for surgery? Abstracts: abstract_id: PUBMED:8310923 Blunt splenic trauma in adults: can CT findings be used to determine the need for surgery? Objective: The role of CT grading of blunt splenic injuries is still controversial. We studied the CT scans of adult patients with proven blunt splenic injuries to determine whether the findings accurately reflect the extent of the injury. We were specifically interested in establishing whether CT findings can be used to determine if patients require surgery or can be managed conservatively. Materials And Methods: The CT scans of 45 patients with blunt splenic injuries were analyzed retrospectively, and the CT findings were correlated with the need for surgery. We used (1) a CT scale (I-V) for splenic parenchymal injuries that also allowed a comparison with the surgical findings in patients who underwent laparotomy, and (2) a CT-based score (1-6) that referred to both the extent of parenchymal injuries and the degree of hemoperitoneum. Early laparotomy was done in nine patients. Conservative treatment was attempted in 36 patients and was successful in 31; five patients needed delayed laparotomy after attempted conservative treatment. Results: According to the CT scale (I-V), 25 patients had injuries of grade I or II; 20 patients were successfully treated conservatively, whereas five patients needed delayed surgery. Nineteen patients had injuries of grade III, IV, or V; eight patients underwent early laparotomy, and 11 patients were successfully treated conservatively. CT findings were false-negative in one patient who underwent early surgery for diaphragmatic rupture. A comparison of the CT findings with the intraoperative findings according to the CT scale (I-V) revealed identical parenchymal injury grades in four cases, whereas the injuries were underestimated on CT scans in four patients and overestimated on CT scans in six patients. The CT-based score (1-6) was applied to 41 patients; four patients who had peritoneal lavage before CT were excluded. Twelve patients had scores below 2.5; 10 patients were successfully treated conservatively, and two patients needed delayed surgery. Twenty-nine patients had scores of 2.5 or higher; six patients underwent early laparotomy, 20 patients were successfully treated conservatively, and three patients needed delayed surgery. Patients who required delayed surgery had a mean score of 3.0 (SD +/- 1.0), similar to that of patients who did not require surgery (3.1 +/- 1.5; p = .45). Conclusion: Our results show that CT findings cannot be used to determine reliably which patients require surgery and which patients can be treated conservatively. Even patients with splenic parenchymal injuries of CT grade III, IV, or V and with CT-based scores of 2.5 or higher can be successfully treated conservatively if the clinical situation is appropriate, whereas delayed splenic rupture can still develop in patients with low CT grades or scores. The choice between operative and nonoperative management of splenic trauma should be based mainly on clinical findings rather than CT findings. abstract_id: PUBMED:29692826 Assessment of blunt splenic trauma: Which imaging scoring system is superior? Background: The spleen is the most commonly injured organ in blunt abdominal trauma. Operative versus nonoperative management of splenic injury remains a dilemma.
The American Association for the Surgery of Trauma (AAST) scale is the most commonly used grading system for the management of blunt splenic injuries. The new recommended grading system also assesses other aspects of splenic injury, such as contrast extravasation, pseudoaneurysm, arteriovenous fistula, and severity of hemoperitoneum. The aim of this study was to compare the predictive cutoffs of the AAST grading system and the new recommended system and to determine which is superior. Materials And Methods: This is a cross-sectional study of patients with splenic injury caused by blunt abdominal trauma referred to Isfahan University of Medical Sciences-affiliated hospitals, Iran, in 2013-2016. All patients underwent abdominopelvic computed tomography scanning with intravenous (IV) contrast. All images were reported by a single expert radiologist, and splenic injury grading was reported based on AAST and the new recommended system. All patients were then followed to determine whether they required surgical or nonsurgical management. Results: In this study of 68 patients, a cutoff of grade 2 in the AAST system had 90.3% (95% confidence interval [CI]: 0.73-0.97) specificity, 51.4% (95% CI: 0.34-0.67) sensitivity, 86.4% (95% CI: 0.64-0.95) positive predictive value (PPV), and 60.9% (95% CI: 0.45-0.74) negative predictive value (NPV) for prediction of surgical management requirement, while the new system had 90.3% (95% CI: 0.73-0.97) specificity, 45.9% (95% CI: 0.29-0.63) sensitivity, 85% (95% CI: 0.61-0.96) PPV, and 58.3% (95% CI: 0.43-0.72) NPV (P = 0.816). Conclusion: In contrast to previous studies, the new splenic injury grading method was not superior to AAST. Further studies with larger populations are recommended. abstract_id: PUBMED:36802823 Surveillance Imaging Associated With Delayed Splenectomy in High-Grade Blunt Splenic Trauma. This retrospective, single-site study at a level I trauma center (2016-2021) sought to determine whether repeat CT had an impact on clinical decision making after splenic angioembolization following blunt splenic trauma (grades II-V). The primary outcome was need for intervention after subsequent imaging (defined as angioembolization and/or splenectomy) by high- or low-grade injury. Of the 400 individuals examined, 78 (19.5%) underwent intervention after repeat CT, of which 17% were in the low-grade group (grades II and III) and 22% were in the high-grade group (grades IV and V). Individuals in the high-grade group were 3.6 times more likely to undergo delayed splenectomy than those in the low-grade group (P = .006). Delayed intervention after surveillance imaging in blunt splenic injury is driven mostly by the identification of new vascular lesions and leads to greater rates of splenectomy in high-grade injuries. Surveillance imaging should be considered for all AAST injury grades II or higher. abstract_id: PUBMED:37187590 Splenic Artery Embolization in Conservative Management of Blunt Splenic Injury Graded by 2018 AAST-OIS: Results from a Hospital in Vietnam. Purpose: This study was conducted to evaluate the results of conservative management of blunt splenic trauma graded according to the 2018 American Association for the Surgery of Trauma-Organ Injury Scale (AAST-OIS), using embolization. Methods: This observational study included 50 patients (42 men and 8 women) with splenic injury who underwent multidetector computed tomography (MDCT) and embolization.
Results: According to the 2018 AAST-OIS, 27 cases had higher grades than they did according to the 1994 AAST-OIS. The grades of two cases of grade II increased to grade IV; those of 15 cases of grade III increased to grade IV; and four cases of grade IV increased to grade V. As a result, all patients underwent successful splenic embolization and were stable at discharge. No patients required re-embolization or conversion to splenectomy. The mean hospital stay was 11.8±7 days (range, 6-44 days), with no difference in length of hospital stay among grades of splenic injury (p >0.05). Conclusion: Compared with the AAST-OIS 1994, the AAST-OIS 2018 classification is useful in making embolization decisions, regardless of the degree of blunt splenic injury with vascular lacerations visible on MDCT. abstract_id: PUBMED:31489105 Outcomes Following Blunt Traumatic Splenic Injury Treated with Conservative or Operative Management. Introduction: Laparotomy, embolization, and observation are described for blunt splenic injury management. This study evaluated outcomes of blunt splenic injury management based on baseline factors, splenic injury severity, and associated injuries. Methods: A nine-year retrospective review was conducted of adult patients with blunt splenic injury. Collected data included demographics, injury characteristics, treatment modality, complications, and outcomes (mechanical ventilation, days on mechanical ventilation, intensive care unit [ICU] admission and length of stay, hospital length of stay, and in-hospital mortality). Categorical and continuous variables were analyzed using χ2 analysis and one-way analysis of variance for normally distributed variables and a non-parametric test of medians for variables that did not meet the assumption of normality, respectively. Results: Splenic injury grade was similar between operative and embolization groups, but severe hemoperitoneum was more common in the operative group. Complications and mortality were highest in the operative group (50.7% and 26.3%, respectively) and lowest in the embolization group (5.3% and 2.6%, respectively). Operative patients required more advanced interventions (ICU admission, mechanical ventilation). There were no differences between those treated with proximal versus distal embolization. Observation carried a failure rate of 11.2%, with no failures of embolization. Conclusions: Embolization patients had the lowest rates of complications and mortality, with comparable splenic injury grades to those treated operatively. Further prospective research is warranted to identify patients that may benefit from early embolization and avoidance of major abdominal surgery. abstract_id: PUBMED:33894453 Trends in Blunt Splenic Injury Management: The Rise of Splenic Artery Embolization. Introduction: Splenic injury is common in blunt trauma. We sought to evaluate the injury characteristics and outcomes of BSI admitted over a 10-y period to an academic trauma center. Methods: A retrospective review of adult blunt splenic injury patients admitted between January 2009 and September 2018. Results: The 423 patients meeting inclusion criteria were divided by management: Observational (OBS, n = 261), splenic surgery (n = 114 including 4 splenorrhaphy patients), SAE (n = 43), and multiple treatment modalities (3 had SAE followed by surgery and 2 OBS patients underwent splenic surgery at readmission). The most common mechanism of injury was motor vehicle collision (47.8%). 
The median ISS (OBS 17, SAE 22, Surgery 34) and spleen AIS (OBS 2, SAE 3, Surgery 4) were significantly different. Complication rates (OBS 21.8%, SAE 9.3%, Surgery 45.6%) were significantly different, but mortality (OBS 7.3%, SAE 2.3%, Surgery 13.2%), discharge to home, and readmission rates were not. Additional abdominal injuries were identified in 26.3% of the surgery group and 2.7% of the OBS group. The SAE rate increased from 3.0% to 28%; median spleen AIS remained 2-3. Thirty-five patients expired; 28 had severe head, chest, and/or extremity injuries (AIS ≥4). Conclusion: SAE rates increased over time. Splenorrhaphy rates were low. SAE was associated with relatively low rates of mortality and complications in appropriately selected patients. abstract_id: PUBMED:38097784 Feasibility of non-operative management for patients sustained blunt splenic traumas with concomitant aortic injuries. Purpose: This study aimed to elucidate the treatment approach for blunt splenic injuries concurrently involving the aorta. We hypothesized that non-operative management failure rates would be higher in such cases, necessitating increased hemorrhage control surgeries. Methods: Data from the Trauma Quality Improvement Program spanning 2017 to 2019 were utilized. All patients with blunt splenic trauma were considered for inclusion. We conducted comparisons between blunt splenic trauma patients with and without thoracic or abdominal aortic injuries to identify any potential disparities in treatment. Results: Among the 32,051 patients with blunt splenic injuries during the study period, 752 (2.3%) sustained concurrent aortic injuries. Following 2:1 propensity score matching, it was determined that the presence of aortic injuries did not significantly affect the utilization of splenic transarterial angioembolization (TAE) (7.2% vs. 8.7%, p = 0.243) or the necessity for splenectomy or splenorrhaphy (15.3% vs. 15.7%, p = 0.853). Moreover, aortic injuries were not a significant factor contributing to TAE failure, regardless of the location or severity of the injury. Patients with simultaneous splenic and aortic injuries required more red blood cell transfusion within the first 4 hours (0 ml [0, 900] vs. 0 ml [0, 650], p = 0.001) and exhibited a higher mortality rate (10.6% vs. 7.9%, p = 0.038). Conclusion: This study demonstrated that patients with concurrent aortic and splenic injuries presented with more severe conditions, higher mortality rates, and extended hospital stays. The presence of aortic injuries did not substantially influence the utilization of TAE or the necessity for splenectomy or splenorrhaphy. Patients of this type can be managed in accordance with current treatment guidelines. Nonetheless, given their less favorable prognosis, they necessitate prompt and proactive intervention. abstract_id: PUBMED:26814761 Nonoperative Management of Blunt Splenic Trauma: Also Feasible and Safe in Centers with Low Trauma Incidence and in the Presence of Established Risk Factors. Background: Treatment of blunt splenic trauma has undergone dramatic changes over the last few decades. Nonoperative management (NOM) is now the preferred treatment, when possible. The outcome of NOM has been evaluated. This study evaluates the results following the management of blunt splenic injury in adults in a Swedish university hospital with a low blunt abdominal trauma incidence.
Method: Fifty patients with blunt splenic trauma were treated at the Department of Surgery, Lund University Hospital from January 1994 to December 2003. One patient was excluded due to a diagnostic delay of > 24 h. Charts were reviewed retrospectively to examine demographics, injury severity score (ISS), splenic injury grade, diagnostics, treatment and outcome measures. Results: Thirty-nine patients (80%) were initially treated nonoperatively (NOM), and ten (20%) patients underwent immediate surgery (operative management, OM). Only one (3%) patient failed NOM and required surgery nine days after admission (failure of NOM, FNOM). The patients in the OM group had higher ISS (p < 0.001), higher grade of splenic injury (p < 0.001), and were hemodynamically unstable to a greater extent (p < 0.001). This was accompanied by increased transfusion requirements (p < 0.001), a longer ICU stay (p < 0.001) and higher costs (p = 0.001). Twenty-seven patients were successfully treated without surgery. No serious complication was found on routine radiological follow-up. Conclusion: Most patients in this study were managed conservatively with a low failure rate of NOM. NOM of blunt splenic trauma could thus be performed in a seemingly safe and effective manner, even in the presence of established risk factors. Routine follow-up with CT scan did not appear to add clinically relevant information affecting patient management. abstract_id: PUBMED:23000055 Reliability of injury grading systems for patients with blunt splenic trauma. Objectives: The most widely used grading system for blunt splenic injury is the American Association for the Surgery of Trauma (AAST) organ injury scale. In 2007 a new grading system was developed. This 'Baltimore CT grading system' is superior to the AAST classification system in predicting the need for angiography and embolization or surgery. The objective of this study was to assess inter- and intraobserver reliability between radiologists in classifying splenic injury according to both grading systems. Methods: CT scans of 83 patients with blunt splenic injury admitted between 1998 and 2008 to an academic Level 1 trauma centre were retrospectively reviewed. Inter- and intrarater reliability were expressed as Cohen's or weighted kappa values. Results: Overall weighted interobserver kappa coefficients for the AAST and 'Baltimore CT grading system' were substantial (kappa=0.80) and almost perfect (kappa=0.85), respectively. Average weighted intraobserver kappa values were in the 'almost perfect' range (AAST: kappa=0.91, 'Baltimore CT grading system': kappa=0.81). Conclusion: The present study shows that overall the inter- and intraobserver reliability for grading splenic injury according to the AAST grading system and the 'Baltimore CT grading system' is equally high. Because of the integration of vascular injury, the 'Baltimore CT grading system' supports clinical decision making. We therefore recommend use of this system in the classification of splenic injury. abstract_id: PUBMED:15187739 Use of splenic artery embolization as an adjunct to nonsurgical management of blunt splenic injury. Background: Splenic artery embolization (SAE) has been used as an adjunct to the nonsurgical treatment of blunt splenic injuries since 1981. It is imperative to define the role of SAE in the management of splenic trauma and to establish a guideline for its use. Methods: In this study, 39 consecutive patients with blunt splenic ruptures were evaluated.
All the patients were treated according to the authors' protocol, which included SAE as an adjunct. Angiographic study was performed for patients with any of the following presentations: recurrent hypotension despite fluid resuscitation, significant hemoperitoneum or extravasation of contrast media on computed tomography, grade 4 or 5 splenic injury, or progressive need for blood transfusion. Laparotomy was reserved for patients with unstable hemodynamics or failure of SAE. Results: Four patients were excluded from the study, and 6 of the 35 remaining patients (male-to-female ratio, 22:13) received SAE. One of the six SAE patients underwent operation because of persistent hemorrhage after SAE. Nonoperative treatment was successful for 31 patients. Splenic artery embolization increased the success rate for nonsurgical management from 74% (26 of 35 patients) to 89% (31 of 35 patients). Conclusions: Judicious use of SAE for patients with blunt splenic injury avoids unnecessary surgery and expands the number of patients who can retain their spleen. Answer: The use of CT findings to determine the need for surgery in cases of blunt splenic trauma in adults is not straightforward. According to a study (PUBMED:8310923), CT findings alone cannot reliably indicate which patients require surgery and which can be managed conservatively. The study found that even patients with higher-grade splenic injuries on CT could be successfully treated without surgery if the clinical situation allows, while some patients with lower CT grades or scores might still develop delayed splenic rupture. The decision between operative and nonoperative management should be primarily based on clinical findings rather than CT findings alone. Another study (PUBMED:29692826) compared the American Association for the Surgery of Trauma (AAST) grading system with a new recommended grading system that includes additional aspects of splenic injury. The study concluded that the new grading system was not superior to the AAST system in predicting the need for surgical management. A retrospective study (PUBMED:36802823) found that surveillance imaging after splenic angioembolization in high-grade blunt splenic trauma could lead to a higher rate of delayed splenectomy, especially in high-grade injuries. This suggests that repeat CT imaging may influence clinical decision-making in certain cases. Research from Vietnam (PUBMED:37187590) showed that the 2018 AAST-Organ Injury Scale (OIS) classification was useful in making embolization decisions for blunt splenic injury, regardless of the injury's grade, and all patients in the study underwent successful splenic embolization without the need for re-embolization or conversion to splenectomy. A study evaluating outcomes of blunt splenic injury management (PUBMED:31489105) found that patients treated with embolization had the lowest rates of complications and mortality compared to those treated operatively, suggesting that embolization could be a preferable option in certain cases. The rise of splenic artery embolization (SAE) as a treatment modality (PUBMED:33894453) has been associated with relatively low rates of mortality and complications in selected patients, indicating its increasing role in the management of blunt splenic injury. In summary, while CT findings provide valuable information about the extent of splenic injury, they should not be used in isolation to determine the need for surgery.
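As a side note on the test metrics quoted above (PUBMED:29692826): sensitivity, specificity, PPV, and NPV all derive from the same 2x2 table of cutoff-positive/negative grading versus actual need for surgery. The following is a minimal sketch of that calculation in Python; the counts are hypothetical placeholders chosen only to illustrate the arithmetic, not the study's actual data.

def diagnostic_metrics(tp, fp, fn, tn):
    # tp: cutoff-positive patients who truly required surgery
    # fp: cutoff-positive patients who were managed nonoperatively
    # fn: cutoff-negative patients who nonetheless required surgery
    # tn: cutoff-negative patients who were managed nonoperatively
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # precision among cutoff-positive patients
        "npv": tn / (tn + fn),          # reassurance among cutoff-negative patients
    }

# Hypothetical counts, for illustration only.
print(diagnostic_metrics(tp=18, fp=3, fn=17, tn=28))

A cutoff with high specificity but roughly 50% sensitivity, as reported for both grading systems in that study, flags surgical candidates conservatively: it produces few false alarms, but about half of the patients who ultimately need surgery fall below the cutoff.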
Instruction: Does preconditioning protect the coronary vasculature from subsequent ischemia/reperfusion injury? Abstracts: abstract_id: PUBMED:8339428 Does preconditioning protect the coronary vasculature from subsequent ischemia/reperfusion injury? Background: "Preconditioning" with brief episodes of coronary artery occlusion reduces infarct size caused by subsequent sustained ischemia. However, the effects of preconditioning on the coronary vasculature are poorly understood. We sought to determine whether preconditioning would attenuate "low reflow" (i.e., the deterioration in resting myocardial perfusion) and blunt the loss in coronary vasodilator reserve after sustained occlusion/reperfusion in the anesthetized open-chest canine model. Methods And Results: Thirty-two dogs underwent 1 hour of sustained left anterior descending (LAD) coronary artery occlusion and 4 hours of reperfusion. Each dog was randomly assigned to the preconditioned group (four episodes of 5 minutes of LAD occlusion plus 5 minutes of reperfusion before sustained ischemia) or control group (no intervention). Submaximal vasodilator reserve was determined by measuring the increase in coronary blood flow (CBF) in response to 0.01 mg acetylcholine (an endothelium-dependent dilator) and 0.05 mg nitroglycerin (an endothelium-independent dilator); low reflow was assessed by measurement of regional myocardial blood flow at 30 minutes and 4 hours after reflow; and infarct size was delineated by triphenyltetrazolium staining. In protocol 1 (n = 14), vasodilator reserve was measured at baseline and at 30 minutes and 4 hours after reflow. There was no change in the response to acetylcholine and nitroglycerin at 30 minutes after reperfusion compared with baseline. However, all dogs exhibited a loss in vasodilator reserve during the subsequent 3.5 hours of reflow, with no difference between control and preconditioned groups. That is, in control dogs, acetylcholine increased CBF from a baseline value of 10.1 +/- 1.3 mL/min to 18.0 +/- 2.6, 18.2 +/- 2.1, and 15.4 +/- 1.7 mL/min before occlusion, 30 minutes after reflow, and 4 hours after reperfusion, respectively (P < .05 for 30 minutes vs 4 hours after reperfusion). Similarly, in the preconditioned group, acetylcholine increased CBF from a baseline value of 12.0 +/- 2.9 mL/min to 19.6 +/- 3.8, 23.6 +/- 5.3, and 15.6 +/- 3.5 mL/min, respectively (P < .01 for 30 minutes vs 4 hours after reperfusion; P = NS between groups). In addition, all dogs exhibited low reflow, with no difference between control and preconditioned groups: subendocardial blood flow deteriorated between 30 minutes and 4 hours after reflow, from 0.91 +/- 0.20 to 0.40 +/- 0.03 mL.min-1 x g-1 in control animals (P = .05 for 30 minutes vs 4 hours after reperfusion) and from 1.03 +/- 0.25 to 0.35 +/- 0.02 mL.min-1 x g-1 in the preconditioned group (P < .05 for 30 minutes vs 4 hours after reperfusion). However, all dogs in protocol 1 had small infarcts (3 +/- 1% and 2 +/- 1% of the risk region in control and preconditioned groups; P = NS), suggesting that control dogs may have been "preconditioned" by the vasodilators. An additional 18 dogs were entered into protocol 2, which was identical to protocol 1 except that acetylcholine and nitroglycerin were given only after reperfusion. In this case, we observed the expected reduction in infarct size in preconditioned dogs vs control dogs (2 +/- 1% vs 11 +/- 3% of the risk region; P < .01).
However, the loss in vasodilator reserve was similar to that observed in protocol 1, with no difference between groups. Subendocardial blood flow at 30 minutes after reperfusion was higher in control animals than in preconditioned dogs (1.84 +/- 0.50 vs 0.74 +/- 0.08 mL.min-1 x g-1; P < .05), but subendocardial flow then deteriorated during the subsequent 3.5 hours to a similar value in both groups (0.55 +/- 0.11 and 0.50 +/- 0.06 mL.min-1 x g-1 in control and preconditioned dogs; P < .05 vs 30 minutes after reperfusion for both groups). Conclusions: The protective effects of preconditioning do not extend to the coronary vasculature in this canine model: Preconditioning neither prevented the deterioration in resting myocardial perfusion nor blunted the loss in submaximal vasodilator reserve observed after sustained occlusion/reperfusion. abstract_id: PUBMED:20428451 Reperfusion-induced coronary endothelial injury: A new target for ischemic preconditioning. Although cardiac ischemia-reperfusion is well known as a disease of the myocytes, it is now clear that the consequences of this disease also extend to the vascular wall and especially to the endothelium. A rat model of ischemia-reperfusion in vivo was used to detect severe endothelial dysfunction characterized by a decreased nitric oxide (NO)-dependent relaxation to acetylcholine in isolated coronary arteries. Given the essential role of the endothelium and NO in the regulation of vascular tone, protection of the coronary endothelial cells is an important therapeutic target. For this purpose, a focus on the concept of endogenous protection against ischemia, i.e., preconditioning, showed that endothelial dysfunction could be reversed by both the early and the delayed phase of preconditioning. With regard to the mechanisms of the coronaroprotective effects of preconditioning, it was shown that both free radicals and NO seem to have an important triggering role, leading to a delayed increase in NO production and decreased adhesion of neutrophils to endothelial cells. Identification of the precise triggers and mediators of this protection will allow the development of new therapeutic agents targeting both the myocardium and the coronary vasculature. abstract_id: PUBMED:15249189 Coronary microembolization does not induce acute preconditioning against infarction in pigs-the role of adenosine. Objective: After coronary microembolization (ME), adenosine is released from ischemic areas of the microembolized myocardium. This adenosine dilates vessels in adjacent nonembolized myocardium and increases coronary blood flow. For ischemic preconditioning (IP) to protect the myocardium against infarction, an increase in the interstitial adenosine concentration (iADO) prior to the subsequent ischemia/reperfusion is necessary. We hypothesized that the adenosine release after ME is sufficient to increase iADO and protect the myocardium against infarction from subsequent ischemia/reperfusion. We have therefore compared myocardial protection by either coronary microembolization or ischemic preconditioning prior to ischemia/reperfusion. Methods: In anesthetized pigs, the left anterior descending coronary artery (LAD) was cannulated and perfused from an extracorporeal circuit. In 11 pigs, sustained ischemia was induced by 85% inflow reduction for 90 min (controls). Two other groups of pigs were subjected either to IP (n = 8; 10-min ischemia/15-min reperfusion) or coronary ME (n = 9; intracoronary microspheres; 42 µm diameter; 3000 x ml(-1) x min inflow) prior to sustained ischemia.
Coronary venous adenosine concentration (vADO) and iADO (microdialysis) were measured. Infarct size was determined after 2-h reperfusion by triphenyl tetrazolium chloride staining. Results: In pigs subjected to IP, infarct size was reduced to 2.6 +/- 1.1% (mean +/- S.E.M.) vs. 17.0 +/- 3.2% in controls. iADO was increased from 2.4 +/- 1.3 to 13.1 +/- 5.8 µmol/l during the reperfusion following IP. In pigs subjected to ME, at 10 min after ME, coronary blood flow (38.6 +/- 3.6 to 53.6 +/- 4.3 ml/min) and vADO (0.25 +/- 0.04 to 0.48 +/- 0.07 µmol/l) were increased. However, iADO (2.0 +/- 0.5 at baseline vs. 2.3 +/- 0.6 µmol/l at 10 min after ME) did not increase. Infarct size induced by sustained ischemia following ME (22.5 +/- 5.2%) was above that of controls for any given subendocardial blood flow. Conclusion: ME released adenosine into the vasculature and increased coronary blood flow. The failure of iADO to increase with ME possibly explains the lack of protection against infarction after ME. abstract_id: PUBMED:12898803 Coronary endothelial dysfunction after ischemia and reperfusion and its prevention by ischemic preconditioning. In the coronary circulation, when reperfusion follows ischemia, endothelial dysfunction occurs. This is characterized by a reduced endothelial release of nitric oxide and by an increased release of reactive oxygen species and endothelin. The reduced availability of nitric oxide leads to the adhesion of neutrophils to the vascular endothelium, platelet aggregation and, with the contribution of endothelin, vasoconstriction, which are responsible for the "no-reflow" phenomenon. Neutrophil adhesion is followed by the release of the superoxide anion from neutrophils and endothelial cells. Preconditioning limits the endothelial damage by ischemia-reperfusion. A relevant role is attributed to the increased endothelial release of nitric oxide, while that of adenosine is controversial. Another effect of preconditioning on the coronary vasculature is the acceleration of vasodilation in reactive hyperemia after a brief coronary occlusion. The acceleration is prevented if myocardial protection is achieved by means of the activation of the mitochondrial adenosine triphosphate sensitive potassium channels by diazoxide and persists when ischemic preconditioning is induced after blockade of the same channels by 5-hydroxydecanoate. abstract_id: PUBMED:15190469 How to use the paradigm of ischemic preconditioning to protect the heart? Ischemic preconditioning affords the most powerful protection to a heart subjected to prolonged ischemia-reperfusion. During the past decade, a large body of work has clarified the features of this protective effect as well as its molecular mechanisms. Ischemic preconditioning reduces infarct size and improves functional recovery; its effects on arrhythmias remain debated. Triggering of the protection involves cell surface receptors that activate pro-survival pathways including protein kinase C, PI3-kinase, possibly Akt and ERK1/2, whose downstream targets remain to be determined. Much attention has been recently focused on the role of mitochondrial K(+)ATP channels and the permeability transition pore, which seem to play a major role in the progression toward irreversible cellular injury. Based on these experimental studies, attempts have been made to transfer preconditioning from bench to bedside.
Human experimental models of ischemic preconditioning have been set up, including cardiac surgery, coronary angioplasty or treadmill exercise, to perform pathophysiological studies. Yet, protecting the heart of CAD (coronary artery disease) patients requires a pharmacological approach. The IONA trial has been an example of the clinical utility of preconditioning. It helped to demonstrate that chronic administration of nicorandil, a K(+)ATP opener that mimics ischemic preconditioning in experimental preparations, improves the cardiovascular prognosis in CAD patients. Recent experimental studies appear further encouraging. It appears that "postconditioning" the heart (i.e. performing brief episodes of ischemia-reperfusion at the time of reperfusion) is as protective as preconditioning. In other words, a therapeutic intervention performed as late as at the time of reflow can still significantly limit infarct size. Further work is needed to determine whether this may be transferred to the clinical practice. abstract_id: PUBMED:10899048 Cardiac interstitial bradykinin release during ischemia is enhanced by ischemic preconditioning. Ischemic preconditioning is known to protect the myocardium from ischemia-reperfusion injury. We examined the transmural release of bradykinin during myocardial ischemia and the influence of ischemic preconditioning on bradykinin release during subsequent myocardial ischemia. Myocardial ischemia was induced by occlusion of the left anterior descending coronary artery in anesthetized cats. Cardiac microdialysis was performed by implantation and perfusion of dialysis probes in the epicardium and endocardium. In eight animals, bradykinin release was greater in the endocardium than in the epicardium (14.4 +/- 2.8 vs. 7.3 +/- 1.7 ng/ml, P < 0.05) during 30 min of ischemia. In seven animals subjected to preconditioning, myocardial bradykinin release was potentiated significantly from 2.4 +/- 0.6 ng/ml during the control period to 23.1 +/- 2.5 ng/ml during 30 min of myocardial ischemia compared with the non-preconditioning group (from 2.7 +/- 0.6 to 13.4 +/- 1.9 ng/ml, P < 0.05, n = 6). Thus this study provides further evidence that transmural gradients of bradykinin are produced during ischemia. The results also suggest that ischemic preconditioning enhances bradykinin release in the myocardial interstitial fluid during subsequent ischemia, which is likely one of the mechanisms of cardioprotection of ischemic preconditioning. abstract_id: PUBMED:24396361 Adenosine Preconditioning versus Ischemic Preconditioning in Patients undergoing Off-Pump Coronary Artery Bypass (OPCAB). Background: During off-pump coronary artery bypass (OPCAB), the heart is subjected to ischemic and reperfusion injury. Preconditioning is a mechanism that permits the heart to tolerate myocardial ischemia. The aim of this study was to compare the effects of Adenosine preconditioning with ischemic preconditioning on the global ejection fraction (EF) in patients undergoing OPCAB. Methods: In this single-blind, randomized controlled trial, sixty patients undergoing OPCAB were allocated into three equally-numbered groups through simple randomization: Adenosine group, ischemic group, and control group. The patients in the Adenosine group received an infusion of Adenosine. 
In the ischemic group, ischemic preconditioning was induced by the temporary occlusion of the left anterior descending coronary artery twice for a 2-minute period, followed by 3-minute reperfusion before bypass grafting of the first coronary vessel. The control group received an intravenous infusion of 0.9% saline. Blood samples at different times were sent for the measurement of creatine kinase isoenzyme MB (CK-MB) and cardiac troponin I (cTnI). We also recorded electrocardiographic indices and clinical parameters, including postoperative use of inotropic drugs and preoperative and postoperative EF. Results: History of myocardial infarction, hyperlipidemia, diabetes mellitus, kidney disease, preoperative arrhythmias, and utilization of postoperative inotropes were similar across the three groups. The incidence of postoperative arrhythmias did not differ significantly between the three groups. Also, there were no significant differences in preoperative and postoperative EF or in the serum levels of enzymes (cTnI and CK-MB) between the groups. Conclusion: Based on the findings of this study, there was no significant difference in postoperative EF between the groups. Although the incidence of arrhythmias was higher in the ischemic preconditioning group than in the other groups, the difference between the groups did not reach statistical significance. abstract_id: PUBMED:33280529 Remote ischemic preconditioning effects on inflammatory markers and myocardial protection in coronary artery bypass graft surgery. Background: Induction of short episodes of ischemia in remote organs, namely the upper or lower limbs, known as remote ischemic preconditioning (RIPC), has been suggested as a preconditioning approach to ameliorate ischemia/reperfusion injury (IRI). RIPC has been demonstrated to effectively protect various vital organs, including the heart, against subsequent ischemic events in preclinical studies. However, human studies are required to confirm its clinical applicability. The present study was performed to evaluate the effect of RIPC on myocardial protection and inflammatory response markers in patients undergoing coronary artery bypass graft surgery. Methods: In this randomized clinical trial, 43 coronary artery bypass graft (CABG) patients from Imam Hossein educational hospital were allocated into two groups, RIPC (21 patients) and control (22 patients). Serum levels of interleukin (IL)-4, IL-8, and IL-10, interferon (IFN)-γ, and cardiac troponin I (cTnI) were measured (1) after induction of anesthesia (before skin incision), (2) after separation from CPB, and (3) 24 hours after ICU arrival. Results: More packed red cell transfusions were observed in the control group in the ICU. The serum level of IL-10 at 24 hours after ICU admission was significantly higher in the RIPC group. Significantly lower levels of IL-8 at the post-CPB time point were observed in the RIPC group in comparison with control. Conclusion: RIPC modulates circulating inflammatory cytokines (decreasing IL-8 and increasing IL-10), which could translate into protection against IRI. However, further studies with larger sample sizes, with careful consideration of parameters such as the use of propofol as an anesthetic, should be conducted to consolidate the findings from the current study. abstract_id: PUBMED:16516180 Preconditioning, postconditioning and their application to clinical cardiology.
Ischemic preconditioning is a well-established phenomenon first described in experimental preparations in which brief episodes of ischemia/reperfusion applied prior to a longer coronary artery occlusion reduce myocardial infarct size. There are ample correlates of ischemic preconditioning in the clinical realm. Preconditioning mimetic agents that stimulate the biochemical pathways of ischemic preconditioning and protect the heart without inducing ischemia have been examined in numerous experimental studies. However, despite the effectiveness of ischemic preconditioning and preconditioning mimetics for protecting ischemic myocardium, there are no preconditioning-based therapies that are routinely used in clinical medicine at the current time. Part of the problem is the need to administer therapy prior to the known ischemic event. Other issues are that percutaneous coronary intervention technology has advanced so far (with the development of stents and drug-eluting stents) that ischemic preconditioning or preconditioning mimetics have not been needed in most interventional cases. Recent clinical trials such as AMISTAD I and II (Acute Myocardial Infarction STudy of ADenosine) suggest that some preconditioning mimetics may reduce myocardial infarct size when given along with reperfusion or, as in the IONA trial, have benefit on clinical events when administered chronically in patients with known coronary artery disease. It is possible that some of the benefit described for adenosine in the AMISTAD 1 and 2 trials represents a manifestation of the recently described postconditioning phenomenon. It is probable that postconditioning--in which reperfusion is interrupted with brief coronary occlusions and reperfusion sequences--is more likely than preconditioning to be feasible as a clinical application to patients undergoing percutaneous coronary intervention for acute myocardial infarction. abstract_id: PUBMED:7680290 Regional ischemic 'preconditioning' protects remote virgin myocardium from subsequent sustained coronary occlusion. Background: One or more brief episodes of coronary artery occlusion protect or "precondition" the myocardium perfused by that artery from a subsequent episode of sustained ischemia. We sought to determine whether ischemic preconditioning protects only those myocytes subjected to brief coronary occlusion or whether brief occlusions in one vascular bed also limit infarct size and/or attenuate contractile dysfunction in remote virgin myocardium subjected to subsequent sustained coronary occlusion. Methods And Results: In the preliminary limb of the study, six anesthetized dogs underwent four episodes of 5-minute circumflex branch occlusion plus 5-minute reperfusion, followed by 1 hour of sustained left anterior descending coronary artery occlusion and 4.5 hours of reflow. Subendocardial blood flow during left anterior descending coronary artery occlusion (measured by injection of radiolabeled microspheres) was 0.07 +/- 0.03 mL.min-1 x g tissue-1, similar to the value of 0.07 +/- 0.02 mL.min-1 x g-1 observed in a group of eight concurrent control dogs. However, infarct size (assessed by triphenyltetrazolium staining) in the circumflex preconditioned group averaged 4 +/- 1% of the myocardium at risk, significantly less (p < 0.05) than the value of 13 +/- 4% observed in the concurrent controls. 
An additional 18 dogs were then randomized to undergo either four episodes of circumflex branch occlusion (n = 8) or no intervention (n = 10) before 1 hour of left anterior descending coronary artery occlusion and 4.5 hours of reflow. Subendocardial blood flow averaged 0.08 +/- 0.02 versus 0.08 +/- 0.03 mL.min-1 x g-1 in the control versus circumflex preconditioned groups, yet infarct size was significantly smaller in circumflex preconditioned dogs than in the controls (6 +/- 2% versus 16 +/- 5% of the risk region; p < 0.05). At 4.5 hours following reperfusion, segment shortening in the left anterior descending coronary artery bed (assessed by sonomicrometry) averaged -21 +/- 19% of baseline in control animals versus 13 +/- 12% of baseline in the preconditioned group (p = NS). Circumflex preconditioning did not, however, have an independent beneficial effect on contractile function: Regression analysis revealed that the trend toward improved function in circumflex preconditioned dogs reflected the smaller infarct sizes in this group. Conclusions: Brief episodes of ischemia in one vascular bed protect remote, virgin myocardium from subsequent sustained coronary artery occlusion in this canine model. These data imply that preconditioning may be mediated by factor(s) activated, produced, or transported throughout the heart during brief ischemia/reperfusion. Answer: Preconditioning, a process involving brief episodes of ischemia followed by reperfusion, has been studied to determine its protective effects against subsequent sustained ischemia-reperfusion injury. The research on this topic has yielded mixed results, with some studies suggesting benefit and others indicating limited or no protection for the coronary vasculature. In one study using a canine model, preconditioning neither prevented the deterioration in resting myocardial perfusion nor blunted the loss in submaximal vasodilator reserve observed after sustained occlusion/reperfusion. The study found no significant difference in the loss of vasodilator reserve or in the occurrence of low reflow between preconditioned and control groups (PUBMED:8339428).
For example, the IONA trial demonstrated that chronic administration of nicorandil, a K(+)ATP opener that mimics ischemic preconditioning, improved cardiovascular prognosis in coronary artery disease patients (PUBMED:15190469). In summary, while some studies suggest that preconditioning can protect the coronary endothelium and reduce endothelial dysfunction after ischemia-reperfusion, other studies have found no significant protective effects on the coronary vasculature.
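Many of the animal experiments cited in this answer (e.g., PUBMED:8339428, PUBMED:7680290) reduce to the same quantitative comparison: infarct size, expressed as a percentage of the area at risk, contrasted between preconditioned and control groups. A minimal sketch of such a comparison is shown below; the values are invented placeholders of the same order of magnitude as the reported group means, not data from any of the cited studies.

import numpy as np
from scipy import stats

# Infarct size as % of the area at risk; hypothetical values.
control = np.array([14.0, 9.5, 16.2, 11.8, 8.9, 13.4])
preconditioned = np.array([2.1, 3.5, 1.8, 2.9, 2.4, 3.1])

# Welch's t-test (unequal variances) comparing the group means.
t_stat, p_value = stats.ttest_ind(control, preconditioned, equal_var=False)
print(f"control mean = {control.mean():.1f}%, "
      f"preconditioned mean = {preconditioned.mean():.1f}%, p = {p_value:.4f}")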
Instruction: Is percentage of predicted maximal exercise oxygen consumption a better predictor of survival than peak exercise oxygen consumption for patients with severe heart failure? Abstracts: abstract_id: PUBMED:8800737 Is percentage of predicted maximal exercise oxygen consumption a better predictor of survival than peak exercise oxygen consumption for patients with severe heart failure? Background: Peak exercise oxygen consumption provides valuable short-term prognostic information in patients with heart failure. However, peak exercise oxygen consumption is determined not only by the cardiac output response to exercise but also by age, gender, and muscle mass. We investigated whether percentage of predicted maximal exercise oxygen consumption rather than an absolute value may be a better predictor of survival. Methods: Peak exercise oxygen consumption was measured and percentage of predicted maximal exercise oxygen consumption was derived from two standard formulas (Wasserman and Astrand) in 272 ambulatory patients referred for transplant evaluation. The predictive ability of these variables was determined by comparison of Kaplan-Meier curves, univariable proportional-hazards models, and receiver operating characteristic curves. Results: Neither method of determining percentage of predicted maximal exercise oxygen consumption significantly improved the prediction of survival over peak exercise oxygen consumption alone. Overall model discrimination, as assessed by area under the receiver operating characteristic curve, was not significantly improved with percentage of predicted maximal exercise oxygen consumption (Wasserman) rather than weight-normalized peak exercise oxygen consumption (0.71 +/- 0.04 versus 0.66 +/- 0.04; Z = 1.60, p = 0.11). All of the difference between percentage of predicted maximal exercise oxygen consumption-Wasserman and peak exercise oxygen consumption resulted from differences in women (areas under receiver operating characteristic curve = 0.68 +/- 0.09 and 0.74 +/- 0.09; p = 0.14); results for men were the same (both areas = 0.68 +/- 0.04). Conclusions: Normalization of peak exercise oxygen consumption for predicted values adds only minimal prognostic information. A peak exercise oxygen consumption < 14 ml/kg/min remains a reasonable guideline by which to time heart transplantation. abstract_id: PUBMED:10027440 Maximal exercise testing for the selection of heart transplantation candidates: limitation of peak oxygen consumption. Background: Peak exercise oxygen consumption (peak VO2), which is considered an indicator of prognosis in advanced heart failure, is currently being used as a major criterion in many centers for the selection of candidates for heart transplantation. Available studies suggest that patients with peak VO2 < 14 mL/min/kg have improved survival and significant functional benefit with transplantation. Since patients may terminate symptom-limited exercise tests for a variety of reasons, peak VO2 does not necessarily reflect maximal VO2, leading to the possibility of inappropriate selection for transplantation. Therefore, we investigated the proportion of transplant candidates referred for exercise testing whose studies met criteria for maximal effort. Methods: Fifty-five patients with heart failure, aged 51 +/- 9 years (mean +/- SD), underwent maximum symptom-limited exercise tests on a cycle ergometer utilizing a Jones stage 1 incremental protocol.
Tests were considered maximal if subjects achieved peak heart rate (HR) > 85% predicted ("cardiocirculatory limitation") or peak minute ventilation (VE) > 85% predicted ("ventilatory limitation"), and achieved an anaerobic threshold (AT) by noninvasive measures. Results: Seven tests were terminated because of chest pain, ST-segment abnormalities, or ventricular arrhythmias. Of the remaining 48 studies, the reasons for stopping exercise were leg fatigue in 52%, dyspnea in 16%, and both symptoms in 23%. Sixteen of the 48 patients (33%) had peak VO2 < 14 mL/min/kg. In 8 of these 16 patients, both peak HR and VE were < 85% predicted. Of these eight without apparent HR or ventilatory limitation, none had oxygen desaturation below 90% or a fall in BP, two were in atrial fibrillation, and only three had evidence that an AT was achieved. Conclusions: Among the patients with peak VO2 < 14 mL/min/kg, there were no objective signs of a cardiocirculatory or a respiratory limitation to exercise in half of them, and 31% did not achieve an AT either, thus meeting no criteria supporting maximal exercise. Exercise tests without objective evidence of cardiocirculatory or ventilatory limitation may not represent maximal performance. Consequently, peak VO2 may misclassify an appreciable proportion of candidates if the test results are submaximal. Clinical Implications: Clinical exercise studies indicating low peak VO2 must be interpreted in the context of whether a defined objective exercise limitation is evident to avoid biasing the selection of heart transplant candidates. abstract_id: PUBMED:9688429 Percent achieved of predicted peak exercise oxygen uptake and kinetics of recovery of oxygen uptake after exercise for risk stratification in chronic heart failure. To investigate whether percent achieved of predicted peak exercise oxygen uptake (%VO2max) and recovery of oxygen consumption after exercise may provide prognostic information in chronic heart failure (CHF), we prospectively studied 196 patients with mild to moderate CHF. The following variables were examined: age, etiology of CHF, functional class, ejection fraction (EF), peak exercise oxygen uptake normalized for body weight (VO2max), %VO2max, time to reach 50% of the peak oxygen uptake after exercise (T1/2VO2max), presence of nonsustained ventricular tachycardia (NSVT) and inability to take ACE-inhibitors. VO2max was the most powerful predictor of cardiac death (P < 0.0001). Other independent predictors of death were EF, T1/2VO2max, NSVT and inability to take ACE-inhibitors. The discriminatory accuracy of VO2max for cardiac death was not significantly greater than that of %VO2max. In conclusion, the determination of %VO2max does not enhance risk stratification in CHF, whereas the kinetics of oxygen consumption after exercise can provide prognostic information. abstract_id: PUBMED:10841967 Role of the etiology of cardiomyopathies on exercise capacity and oxygen consumption in patients with severe congestive heart failure. Peak oxygen consumption is of great importance for the decision regarding heart transplantation in congestive heart failure. Moreover, the level of exercise capacity seems to depend on the etiology of congestive heart failure.
This study compared 14 heart failure patients with idiopathic dilated cardiomyopathy (group 1) to 14 heart failure patients with cardiomyopathy due to ischemic heart disease (group 2), matched for sex (13 male, one female in each group), age +/-10 years, left ventricular ejection fraction +/-5% and pulmonary artery mean pressure +/-5 mm Hg, to assess exercise capacity and oxygen consumption independently of age, sex and the level of left ventricular dysfunction. Right ventricular function was also assessed. No difference existed in terms of right ventricular parameters. Maximal exercise parameters were significantly higher in group 1 than in group 2. Peak oxygen consumption was significantly higher in group 1 than in group 2. In the whole population, a significant correlation was found between peak oxygen consumption and right ventricular ejection fraction (r = 0.44, P < 0.02) but not between peak oxygen consumption and left ventricular ejection fraction. For similar levels of left ventricular dysfunction, exercise capacity and oxygen consumption appear to be better in idiopathic dilated cardiomyopathy than in ischemic cardiomyopathy, thereby suggesting that functional tolerance of left ventricular dysfunction might depend on the etiology of severe congestive heart failure. abstract_id: PUBMED:33070792 Maximal versus sub-maximal effort during cardiopulmonary exercise testing in adults with congenital heart disease: outcome analysis of short-term cardiac-related events. Peak respiratory exchange ratio is an objective marker of patient effort during cardiopulmonary exercise testing. We evaluated exercise variables in 175 adult congenital heart disease patients and the impact of respiratory exchange ratio on the prognostic value of exercise variables for short-term cardiac-related events. Of 175 patients, 110 completed the exercise test with a peak respiratory exchange ratio of ≥1.10 and the remaining 65 had a peak respiratory exchange ratio of <1.10. Peak oxygen consumption, the percentage of oxygen consumption at the ventilatory threshold, peak heart rate, percentage predicted peak heart rate, double product, oxygen uptake efficiency slope, and the number of patients with exercise oscillatory ventilation were reduced significantly in patients with a respiratory exchange ratio of <1.10 compared to those with a respiratory exchange ratio of ≥1.10. After a median follow-up of 21 months, total cardiac-related events occurred in 37 (21%) patients. Multivariate Cox proportional hazard analysis showed that the percentage predicted peak oxygen consumption and oxygen uptake efficiency slope were independent predictors of cardiac-related events only in patients with a peak respiratory exchange ratio of ≥1.10. Sub-maximal exercise performance can be preserved in adult congenital heart disease patients. The percentage predicted oxygen consumption and the oxygen uptake efficiency slope are two independent predictors for short-term cardiac-related events in adult congenital heart disease patients. abstract_id: PUBMED:3342162 Extrapolated maximal oxygen consumption: a new method for the objective analysis of respiratory gas exchange during exercise. Respiratory gas exchange was measured during maximal treadmill exercise testing in six healthy volunteers and 20 patients with chronic heart failure. A curve of equation y = ax - bx^2 was used to model the relation between the rate of oxygen consumption (y axis) and the rate of carbon dioxide production (x axis).
The constants "a" and "b" were used to calculate the maximal value of the expression ax - bx^2. This value was termed the "extrapolated maximal oxygen consumption". For all subjects a close fit between experimental data and mathematical model was obtained, and the values of the measured maximal rate of oxygen consumption and "extrapolated maximal oxygen consumption" were similar. Respiratory gas exchange was reanalysed using only those values obtained during the first 90%, 75%, and 66% of exercise. In contrast with the value for the measured rate of oxygen consumption, the value of "extrapolated maximal oxygen consumption" was effectively independent of exercise duration. Extrapolated maximal oxygen consumption provides an objective measure of cardiorespiratory functional reserve that, within limits, is independent of exercise duration. Extrapolated maximal oxygen consumption is complementary to the direct measurement of the maximal rate of oxygen consumption and increases the amount of information derived from a single exercise test. abstract_id: PUBMED:1999029 Value of peak exercise oxygen consumption for optimal timing of cardiac transplantation in ambulatory patients with heart failure. Background: Optimal timing of cardiac transplantation in ambulatory patients with severe left ventricular dysfunction is often difficult. To determine whether measurement of peak oxygen consumption (VO2) during maximal exercise testing can be used to identify patients in whom transplantation can be safely deferred, we prospectively performed exercise testing on all ambulatory patients referred for transplant between October 1986 and December 1989. Methods And Results: Patients were assigned to one of three groups on the basis of exercise data: Group 1 (n = 35) comprised patients accepted for transplant (VO2 less than or equal to 14 ml/kg/min); group 2 (n = 52) comprised patients considered too well for transplant (VO2 greater than 14 ml/kg/min); and group 3 (n = 27) comprised patients with low VO2 rejected for transplant due to noncardiac problems. All three groups were comparable in New York Heart Association functional class, ejection fraction, and cardiac index (p = NS). Pulmonary capillary wedge pressure was significantly lower in group 2 than in either group 1 or 3 (p less than 0.05), although there was wide overlap. Patients with preserved exercise capacity (group 2) had cumulative 1- and 2-year survival rates of 94% and 84%, which are equal to survival levels after transplantation. In contrast, patients rejected for transplant (group 3) had survival rates of only 47% at 1 year and 32% at 2 years, whereas patients awaiting transplantation (group 1) had a survival rate of 70% at 1 year (both p less than 0.005 versus patients with VO2 greater than 14 ml/kg/min). All deaths in group 2 were sudden. By univariate and multivariate analyses, peak VO2 was the best predictor of survival, with only pulmonary capillary wedge pressure providing additional prognostic information. Conclusions: These data suggest that cardiac transplantation can be safely deferred in ambulatory patients with severe left ventricular dysfunction and peak exercise VO2 of more than 14 ml/min/kg. abstract_id: PUBMED:8070828 Can common exercise indices determine peak exercise oxygen consumption and anaerobic threshold during stress testing in patients with chronic congestive heart failure?
Maximal oxygen consumption (VO2 max) is one of the most important predictors of prognosis in chronic heart failure and is now used to define degree of heart failure. While most centres can routinely do treadmill exercise testing (TMT), VO2 max measurements are not widely available. We, therefore, analysed the ability to predict VO2 max from common TMT variables: peak exercise heart rate, exercise time, and METS achieved in 26 patients with chronic congestive heart failure (NYHA II-III, ejection fraction 43 +/- 2%) in whom exercise VO2 studies were simultaneously done by breath-by-breath expiratory gas analysis using a metabolic cart. METS achieved during exercise and exercise time correlated reasonably well although not perfectly (r = 0.78 and 0.73 respectively, tail critical value +/-0.41). Resting ejection fraction did not correlate at all (r = 0.0004). The regression equation VO2 max = (2.7)(METS) + 5.8 defined VO2 max with an SE of 0.47. Although in univariate analysis exercise time, METS achieved and peak heart rate predicted VO2 max, only METS achieved was predictive in stepwise regression. None of the parameters predicted the anaerobic threshold accurately, although there was a modest relation between AT and peak exercise VO2. We conclude that most exercise variables do not accurately predict VO2 max in patients with chronic congestive heart failure. METS achieved is the best predictor, and VO2 max can be predicted using a regression equation. Anaerobic threshold cannot be predicted without tests involving expiratory gas analysis. abstract_id: PUBMED:12531415 The effect of beta-adrenergic blockers on the prognostic value of peak exercise oxygen uptake in patients with heart failure. Objectives: Our aim was to determine the effect of beta-adrenergic blockade on the prognostic value of peak oxygen consumption testing in patients with heart failure. Background: Peak oxygen consumption has been shown to be a useful prognostic tool in patients with heart failure. However, studies demonstrating the utility of peak oxygen consumption were conducted before beta-blocker therapy became widespread. Thus, our objective was to determine the effect of beta-blockers on the prognostic value of peak oxygen consumption in patients with heart failure. Methods: Actuarial, anthropomorphic, hemodynamic and exercise ventilatory data were collected from 369 patients with heart failure. Death and orthotopic heart transplants were the events tracked. Patients were divided into those taking beta-blockers and those not taking them. Event-free survival days were calculated. Results: One hundred ninety-nine patients on beta-blockers and 170 not on beta-blockers were studied. There were 40 orthotopic heart transplants and 82 deaths during follow-up. Peak oxygen consumption (milliliters per kilogram per minute) trended toward being an independent predictor of event-free survival (p = 0.055). In patients on and not on beta-blockers, a peak oxygen consumption of >14 ml/kg/min was associated with a 1-year event rate of approximately half of that associated with a peak oxygen consumption ≤14 ml/kg/min. However, for every level of peak oxygen consumption, the event rate was lower in the group taking beta-blockers. Conclusions: Beta-blocker status does not change the predictive power of peak oxygen consumption in patients with heart failure, but beta-blocker status is important to consider when using peak oxygen consumption to predict event-free survival in patients with heart failure.
abstract_id: PUBMED:33208204 Impact of peak respiratory exchange ratio on the prognostic power of symptom-limited exercise testing using the Bruce protocol in patients with Fontan physiology. Objectives: We evaluated the impact of peak respiratory exchange ratio on the prognostic values of cardiopulmonary exercise variables during symptom-limited incremental exercise tests in patients with Fontan physiology. Methods: Retrospective single-centre chart review study of Fontan patients who underwent exercise testing using the Bruce protocol between 2014 and 2018, with follow-up. Results: A total of 34 patients (age > 18 years) had a Borg score of ≥7 on the Borg 10-point scale, but only 50% of patients achieved a peak respiratory exchange ratio of ≥1.10 (maximal test). Peak oxygen consumption, percent-predicted peak oxygen consumption, and peak oxygen consumption at the ventilatory threshold were significantly reduced in patients with a peak respiratory exchange ratio of <1.10. Peak oxygen consumption and percent-predicted peak oxygen consumption were positively correlated with peak respiratory exchange ratio values (r = 0.356, p = 0.039). After a median follow-up of 21 months, cardiac-related events occurred in 16 (47%) patients, with no difference in event proportions according to respiratory exchange ratio (odds ratio, 0.62; 95% CI: 0.18-2.58; p = 0.492). Multivariate Cox proportional hazard analysis showed that percent-predicted peak oxygen consumption, peak heart rate, and the oxygen uptake efficiency slope were highly related to the occurrence of events only in patients with a peak respiratory exchange ratio of ≥1.10. Conclusions: The value of peak cardiopulmonary exercise variables is limited for the determination of prognosis and assessment of interventions in Fontan patients with sub-maximal effort. Our findings deserve further research and clinical application. Answer: The percentage of predicted maximal exercise oxygen consumption does not significantly improve the prediction of survival over peak exercise oxygen consumption alone for patients with severe heart failure. A study by Osman et al. (PUBMED:8800737) found that neither the Wasserman nor the Astrand method of determining percentage of predicted maximal exercise oxygen consumption significantly improved the prediction of survival compared to peak exercise oxygen consumption. The overall model discrimination, as assessed by the area under the receiver operating characteristic curve, was not significantly improved with percentage of predicted maximal exercise oxygen consumption compared to weight-normalized peak exercise oxygen consumption. The study concluded that normalization of peak exercise oxygen consumption for predicted values adds only minimal prognostic information and that a peak exercise oxygen consumption < 14 ml/kg/min remains a reasonable guideline by which to time heart transplantation. Additionally, another study by Osada et al. (PUBMED:9688429) found that the determination of percentage of predicted peak exercise oxygen uptake (%VO2max) does not enhance risk stratification in chronic heart failure (CHF), whereas the kinetics of oxygen consumption after exercise can provide prognostic information. The study indicated that VO2max was the most powerful predictor of cardiac death, and the discriminatory accuracy of VO2max for cardiac death was not significantly greater than that of %VO2max.
Therefore, based on the findings from these studies, peak exercise oxygen consumption remains a key predictor of survival in patients with severe heart failure, and the percentage of predicted maximal exercise oxygen consumption offers little additional prognostic value. The two formulas quoted in the abstracts above are worked through in the short sketch below.
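As a quick check on those two formulas: the quadratic model VO2(x) = ax − bx² peaks at x = a/(2b), giving an extrapolated maximal oxygen consumption of a²/(4b) (standard calculus, not stated explicitly in the abstract), and PUBMED:8070828 reports the regression VO2 max = 2.7 × METS + 5.8. The Python sketch below is illustrative only; the function names and the coefficients in the example calls are hypothetical, not taken from either paper.

def extrapolated_vo2_max(a: float, b: float) -> float:
    # Vertex of VO2(x) = a*x - b*x**2; the peak sits at x = a / (2*b).
    if b <= 0:
        raise ValueError("b must be positive for the parabola to have a maximum")
    return a ** 2 / (4 * b)

def vo2_max_from_mets(mets: float) -> float:
    # Regression reported in PUBMED:8070828 (SE of 0.47).
    return 2.7 * mets + 5.8

# Hypothetical coefficients, chosen only to exercise the functions:
print(extrapolated_vo2_max(a=8.0, b=0.5))  # -> 32.0
print(vo2_max_from_mets(mets=7.0))         # -> 24.7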
Instruction: Can an educational intervention improve compliance with vaginal dilator use in patients treated with radiation for a gynecological malignancy? Abstracts: abstract_id: PUBMED:22552831 Can an educational intervention improve compliance with vaginal dilator use in patients treated with radiation for a gynecological malignancy? Objective: To investigate whether an educational intervention would facilitate compliance with vaginal dilators and potentially reduce stenosis in women receiving radiotherapy as treatment for a gynecological malignancy. Methods: From 2002 to 2009, all patients undergoing pelvic radiotherapy (either external beam radiotherapy or brachytherapy) at our center for treatment of gynecological malignancies were educated about the use of vaginal dilators. Sixty patients agreed to participate in a prospective 12-month study to evaluate use. The patients had a structured educational intervention regarding dilator use. Assessment was prospectively performed via questionnaires at baseline, 3, 6, 9, and 12 months after completion of radiotherapy. Data collected included patients' demographics, treatment, incidence of stenosis, and usage of and attitudes toward dilator use. Results: The median age of the patients was 60 years. The primary disease site was the uterus (56.6%) or cervix (40.0%). At 12 months, 52% of patients were still using the dilators, and 35% were using the dilators at least 2 to 3 times per week. Frequency of dilator use was greater in those patients older than 50 years (P = 0.005), even after adjusting for sexual frequency, and in those experiencing pain on vaginal examination (P < 0.001). It was less frequent in those patients who were sexually active (P = 0.035). At 12 months, 11% of the patients had flimsy adhesions and 6.5% had partial stenosis. No patients had complete stenosis. The only independent predictor of stenosis was treatment group (hazard ratio 0.200; 95% confidence interval, 0.059-0.685), with surgery plus any radiotherapy carrying a lower risk of stenosis than definitive radiation therapy alone. Conclusions: Our educational intervention facilitates compliance with vaginal dilators. Surgery and adjuvant radiation therapy (with or without cisplatin as a radiation sensitizer) may predict a lower risk of vaginal stenosis compared to definitive radiation therapy alone. abstract_id: PUBMED:33867298 Prospective randomized trial of email and/or telephone reminders to enhance vaginal dilator compliance in patients undergoing brachytherapy for gynecologic malignancies. Purpose: To evaluate whether an email survey ± telephone reminder versus no intervention would facilitate compliance with vaginal dilator use in women undergoing brachytherapy for gynecologic malignancies and to assess changes in vaginal canal (VC) length between the groups. Methods And Materials: A total of 72 patients were enrolled in a three-arm, single-institution, randomized prospective clinical trial and stratified by whether they received external beam radiation treatment in addition to brachytherapy and by total radiation dose to the VC. Patients were subsequently randomized to one of three groups: email survey alone, email survey + telephone reminder, or no intervention. Change in VC length over time was measured for each patient. Results: The median follow-up time was 17.3 months. There were no differences in patient-reported compliance between the 3 groups. Vaginal dilator compliance fell over the course of the study period and was 33% at 24 months.
Baseline VC length and radiation dose were found to be the most important predictors of VC shortening over time. When accounting for baseline length, radiation dose, and follow-up time, type of intervention did not impact changes in VC length from baseline (p = 0.20). Conclusions: Our study is the first to show the importance of baseline VC length as it relates to VC shortening following brachytherapy for gynecologic cancers and highlights the difficulties in improving VC compliance among this patient population. Further study is required to reduce the incidence of late effects in this group. abstract_id: PUBMED:25424559 Qualitative accounts of patients' determinants of vaginal dilator use after pelvic radiotherapy. Introduction: Treatment with pelvic external beam radiotherapy with brachytherapy (EBRT/BT) for gynecological cancers may cause sexual dysfunction because of vaginal shortening and tightening. Regular vaginal dilator use is thought to reduce vaginal shortening and/or tightening, but compliance is poor. Aims: This study identified determinants of patients' adherence to dilator use after EBRT/BT. Methods: Semi-structured interviews were conducted with 30 women, aged 32-67 years, treated with EBRT/BT for gynecological cancers at two university medical centers in the past 36 months. Transcriptions were coded and analyzed with NVivo software. Main Outcome Measures: Determinants of dilator use were clustered based on the Health Action Process Approach, which describes (i) motivation processes that lead to a behavioral intention and (ii) volition processes that lead to the initiation or maintenance of actual behavior. Results: Almost all women attempted to perform long-term regular vaginal dilator use. Intended dilator use was determined by the expectation that it would prevent the development of vaginal adhesions and stenosis. Planning dilator use and making it part of a routine, using it under the shower, using lubricants, a smaller dilator size, or vibrators helped women. Others reported a lack of time or privacy, forgetting, or feeling tired. Women self-regulated dilator use by rotating the dilator and timing dilator use. Influencing factors were negative emotions regarding dilator use or its hard plastic design, (anxiety about) pain or blood loss, and an association with EBRT/BT. Some women mentioned a lack of instrumental support, for example, lubricants. Others received reassurance through informational support or were supported socially. Conclusion: Motivation and volition processes that determined dilator use were identified and used in the development of a sexual rehabilitation intervention. It is important to provide sufficient patient information and support, and to strengthen patients' perceived self-efficacy. abstract_id: PUBMED:33526400 Diagnostic Methods for Vaginal Stenosis and Compliance to Vaginal Dilator Use: A Systematic Review. Background: Vaginal stenosis (VS) is a common side effect of pelvic radiotherapy for gynecological cancer in women. Its reported incidence is highly variable, likely due to unstandardized and subjective assessment methods. Furthermore, even though the worldwide standard treatment for VS is vaginal dilation, low compliance rates have been noted. Aim: To evaluate the parameters used to diagnose VS and to assess whether the lack of an objective measure of VS hampers vaginal dilator use. Methods: A systematic review in accordance with the PRISMA reporting guidelines was conducted. PubMed, EMBASE, and Web of Science databases were searched.
Randomized trials and prospective, retrospective, and cross-sectional studies published from January 2011 to February 2020 were included. Outcomes: The main outcome of this study was a review of the published literature on assessment methods for VS and compliance with vaginal dilator use. Results: Of the 28 articles obtained, only 7 used objective methods to measure vaginal volume. Three studies demonstrated patients' concern about VS development and showed high compliance with dilator use, whereas others reported several barriers to dilator use. Clinical Implications: Lack of an objective assessment method can be a predisposing factor for uncertain VS incidence rates and can impair compliance with vaginal dilator therapy, leading to long-term VS and sexual dysfunction. Strengths & Limitations: This is the first systematic review on the heterogeneity of VS evaluation methods and compliance with vaginal dilator use. All studies were comprehensively evaluated by 2 reviewers. The limitations included the heterogeneity of the study designs and the unstandardized criteria used to classify stenosis or to evaluate compliance with dilator use. Although 3 well-known databases were used, the inclusion of more data sources could have increased the number of publications included in this review. Conclusion: VS is frequently diagnosed using subjective parameters. Few, unstandardized objective methods are used to evaluate this condition. Regarding compliance with vaginal dilator use, there was a high dropout rate during follow-up and no consensus on starting time or ideal usage. Haddad NC, Soares Brollo LC, Pinho Oliveira MA, et al. Diagnostic Methods for Vaginal Stenosis and Compliance to Vaginal Dilator Use: A Systematic Review. J Sex Med 2021;18:493-514. abstract_id: PUBMED:27787681 A nurse-led sexual rehabilitation intervention after radiotherapy for gynecological cancer. Purpose: Although vaginal dilator use after combined pelvic radiation therapy and brachytherapy (RT/BT) is recommended to prevent vaginal shortening and stenosis, women fail to use them and experience sexual problems. A nurse-led sexual rehabilitation intervention targeting sexual recovery and vaginal dilatation was developed. Its feasibility was investigated during a prospective, longitudinal, observational pilot study. Methods: Four oncology nurses were specifically trained to conduct the intervention. Gynecologic cancer patients treated with RT/BT were assessed using (i) questionnaires on frequency of dilator use (monthly), sexual functioning, and sexual distress (at baseline and 1, 6, and 12 months) and psychological and relational distress (at 1, 6, and 12 months); (ii) semi-structured interviews (between 6 and 12 months); and (iii) consultation recordings (a random selection of 21% of all consults). Results: Twenty participants were 26-71 years old (mean = 40). Eight participants discontinued participation after 3 to 9 months. At 6 months after RT, 14 of 16 participants (88%) dilated regularly, as did 9 of 12 (75%) at 12 months, either by having sexual intercourse or by using dilators. Sexual functioning improved between 1 and 6 months after RT, with further improvement at 12 months. Most participants reported that the intervention was helpful, and the nurses reported having sufficient expertise and counseling skills. Conclusions: According to the pilot results, the intervention was feasible and promising for sexual rehabilitation and regular dilator use after RT.
Its (cost-)effectiveness will be investigated in a randomized controlled trial. abstract_id: PUBMED:36065026 Barriers to and facilitators of adherence to pelvic floor muscle exercises and vaginal dilator use among gynecologic cancer patients: a qualitative study. Objective: Women with gynecologic cancer may suffer from pelvic floor dysfunction (PFD). Before radiotherapy, prehabilitation with pelvic floor muscle exercises (PFME) and a vaginal dilator (VD) might prevent it and foster sexual life. This study aims to explore the experience of gynecologic cancer patients receiving external beam radiation treatments regarding barriers to and facilitators of adherence to a prehabilitation program to prevent PFD. Methods: This qualitative research with thematic content analysis included 11 women with gynecologic cancer and different levels of adherence to PFME and VD use. Participants were interviewed based on a semi-structured script. The information was analyzed manually, assisted with NVivo 12® software, and triangulated with open coding. Results: High self-motivation, the desire to improve their health, perceived symptom improvement, availability of time, the desire to resume sexual life, and the support of the partner were facilitators of adherence. The instructional exercise audio, clarity of the information, and closer communication with the physical therapist were also valued. The main barriers were general malaise secondary to oncological treatments, forgetfulness, lack of time, misinformation, lack of coordination with the treatment team, discomfort with the VD, and a feeling of shame. Feedback from the attending physician was a facilitator when present or a barrier when absent. Conclusion: These barriers and facilitators should be considered when designing and implementing preventive programs with PFME and VD use. Behavioral counselling should consider the desire to remain sexually active; in such cases, including the partner in the therapeutic process is valued. Otherwise, the focus should be on benefits for maintenance of pelvic floor function. abstract_id: PUBMED:25248115 Sexual rehabilitation after pelvic radiotherapy and vaginal dilator use: consensus using the Delphi method. Objective: This study aimed to reach consensus among professional experts on patient information provision and support regarding sexual rehabilitation and vaginal dilator use after radiotherapy (RT) for gynecological cancers. Methods: A 3-round online Delphi study was conducted among 10 radiation oncologists, 10 gynecologic oncologists, and 10 oncology nurses from 12 gynecological cancer centers providing RT for gynecological tumors. We assessed the desired content and provider of instructions and patient support regarding sexuality and vaginal dilator use. Responses were measured on a 7-point scale varying from "totally disagree" to "totally agree." Consensus was reached when 70% of participants' answers fell within 2 scale categories with an interquartile range less than or equal to 1. Results: The panel agreed that information about sexual rehabilitation using vaginal dilators should be provided by radiation oncologists before treatment. Information should always be provided to sexually active cervical and vaginal cancer patients younger than 70 years. Tailored information was recommended for vulvar and endometrial cancer patients, patients older than 70 years, and sexually inactive patients. Preferably, specifically trained oncology nurses should give psychological and practical support.
Participants recommended vaginal dilation to prevent vaginal adhesions, tightening, and shortening. The panel advised starting dilation around 4 weeks after treatment, performing dilation 2 to 3 times a week for 1 to 3 minutes, and continuing dilation for 9 to 12 months. Plastic dilator sets were considered the most suitable type of dilator. Conclusions: Consensus was reached on patient information provision and support during sexual rehabilitation after RT for gynecological cancers. Results were used to develop a sexual rehabilitation intervention. abstract_id: PUBMED:30580905 A Psychosexual Rehabilitation Booklet Increases Vaginal Dilator Adherence and Knowledge in Women Undergoing Pelvic Radiation Therapy for Gynaecological or Anorectal Cancer: A Randomised Controlled Trial. Aims: Women treated with pelvic radiation therapy (PRT) for gynaecological or anorectal cancer report a high number of sexual problems and unmet post-treatment psychosexual information needs. Currently, there is suboptimal adherence to recommended rehabilitation aids, such as vaginal dilators, and a paucity of resources to facilitate post-radiation rehabilitation and reduce distress in this population. This randomised controlled trial aimed to evaluate the effectiveness of a study-developed psychosexual rehabilitation booklet in this setting. Materials And Methods: Eighty-two women scheduled for PRT to treat gynaecological/anorectal cancer were randomised to receive the intervention booklet (n = 44) or standard information materials (n = 38). Self-report questionnaires administered at pre-treatment baseline and at 3, 6 and 12 months post-treatment assessed adherence with rehabilitation aids, booklet knowledge, anxiety, depression and sexual functioning/satisfaction. Results: Dilator adherence and booklet knowledge were significantly higher in the intervention group than in the control group (averaged over time points), with scores significantly increasing over time. Younger age and gynaecological cancer were significant predictors of greater dilator adherence. No significant group differences were found on psychological and sexual measures. Conclusions: The psychosexual rehabilitation booklet was effective in educating women with gynaecological and anorectal cancers about PRT-related psychosexual side-effects and rehabilitation options, as well as promoting uptake of vaginal dilator use. Future research should elucidate the effectiveness of this booklet in women with greater psychological and sexual functioning needs. abstract_id: PUBMED:37051960 Systematic Review of Comparisons Between Plastic and Silicone Dilators: Revealing a Knowledge Gap. Introduction: Vaginal stenosis is a distressing side effect of radiation therapy that can impair quality of life. Dilator therapy is an option for patients undergoing pelvic radiotherapy to mitigate vaginal stenosis. Currently, the dilators given to patients by most hospitals are made of plastic, whereas silicone dilators are available on the market for purchase. Objectives: We conducted a systematic literature review to find information to guide clinical recommendations for pelvic radiotherapy patients on potential differences between plastic and silicone dilators in efficacy, cost, and patient preferences. Methods: A systematic literature review was conducted in Embase, MEDLINE, and PubMed using Emtree terms.
To be included in the review, papers needed to: focus on female patients undergoing radiation therapy, assess a vaginal dilator, measure any dilator intervention outcome, and specifically compare plastic vs silicone dilators for any measured outcome (either qualitative or quantitative). Results: The initial search yielded 195 articles. Two area experts, with a third expert for arbitration, read each article and found that none met all review inclusion criteria. No studies were found that compared silicone to plastic dilators with regard to efficacy in treating vaginal stenosis due to radiation therapy, no studies were found that compared cost or cost-effectiveness of the 2 dilator types, and no studies were found comparing patient preferences or experiences (e.g., comfort, adherence, ease of use) between the 2 dilator types. Conclusion: The materials used to create dilators have never been rigorously compared in the context of radiotherapy-related vaginal stenosis. Institutions and patients have no data to guide their choice. Significantly more research at the patient and institutional level is needed to explore the potential long-term quality of life and cost benefits of improved adherence with silicone dilator use, and to guide shared decision-making regarding dilator choice. abstract_id: PUBMED:22304701 From "sex toy" to intrusive imposition: a qualitative examination of women's experiences with vaginal dilator use following treatment for gynecological cancer. Introduction: Regular use of vaginal dilators has been recommended as a prophylactic measure following radiation treatment for gynecological cancers in order to minimize vaginal stenosis and promote optimal healing of the vagina. Despite the well-established reluctance of women to adopt this practice, little is known about the difficulties and concerns associated with vaginal dilator use. Aim: To investigate women's experiences with the vaginal dilator and to understand the psychosocial factors that influence women's adoption of rehabilitative dilator use. Methods: This is an exploratory qualitative study using semi-structured interviews with a sample of 10 women with a history of gynecological cancer who were prescribed a vaginal dilator. Interviews were analyzed using the grounded theory method and examined for recurrent themes. Main Outcome Measure: The main outcome measure was a semi-structured interview composed of open-ended questions designed to elicit information concerning topic areas relevant to gynecological cancer, vaginal dilator use, and sexuality. Results: The analysis resulted in five main categories underlying the core category of "From 'sex toy' to intrusive imposition." These were: (i) embarrassing sex toy; (ii) reliving the invasion of treatment; (iii) aversive "hands-on" experience; (iv) not at the forefront of my recovery; and (v) minimizing the resistance. Conclusions: Rehabilitative vaginal dilator use is a complex, multifaceted, and personal phenomenon that carries deep psychological and emotional implications that make it intrusive. These findings may enhance the way in which vaginal dilators are introduced and help healthcare providers better address women's difficulties and concerns with the dilators. Ultimately, it may also lead to improved health maintenance and quality of life for women recovering from gynecological cancer. Answer: Yes, an educational intervention can improve compliance with vaginal dilator use in patients treated with radiation for a gynecological malignancy.
A study found that a structured educational intervention regarding dilator use facilitated compliance, with 52% of patients still using the dilators at 12 months, and 35% using them at least 2 to 3 times per week (PUBMED:22552831). Additionally, a nurse-led sexual rehabilitation intervention that included education on sexual recovery and vaginal dilatation was found to be feasible and promising for promoting regular dilator use after radiation therapy (RT), with high rates of regular dilation reported at 6 and 12 months post-RT (PUBMED:27787681). Furthermore, a psychosexual rehabilitation booklet provided to women undergoing pelvic radiation therapy for gynecological or anorectal cancer was effective in increasing vaginal dilator adherence and knowledge (PUBMED:30580905). These findings suggest that providing sufficient patient information and support, as well as enhancing patients' perceived self-efficacy, can improve adherence to vaginal dilator use following radiation treatment for gynecological malignancies.
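The Delphi-consensus regimen quoted in PUBMED:25248115 above (start around 4 weeks after treatment, dilate 2 to 3 times a week for 1 to 3 minutes, and continue for 9 to 12 months) is concrete enough to be written down as a small configuration object, for example in a compliance-tracking tool. The sketch below is a minimal illustration under that assumption; the class and field names are hypothetical and do not come from any of the cited studies' software.

from dataclasses import dataclass

@dataclass(frozen=True)
class DilationRegimen:
    # Parameter values taken from the Delphi consensus (PUBMED:25248115);
    # the structure itself is an illustrative assumption.
    start_weeks_post_rt: int = 4
    sessions_per_week: tuple = (2, 3)    # 2-3 sessions per week
    minutes_per_session: tuple = (1, 3)  # 1-3 minutes per session
    duration_months: tuple = (9, 12)     # continue for 9-12 months

    def week_is_adherent(self, sessions_done: int) -> bool:
        # A week counts as adherent if it meets the consensus minimum.
        return sessions_done >= self.sessions_per_week[0]

regimen = DilationRegimen()
print(regimen.week_is_adherent(3))  # True
print(regimen.week_is_adherent(1))  # False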
Instruction: Do daily ward interviews improve measurement of hospital quality and safety indicators? Abstracts: abstract_id: PUBMED:27291891 Do daily ward interviews improve measurement of hospital quality and safety indicators? A prospective observational study. Rationale, Aims And Objectives: The aim of this study was to determine if the addition of daily ward interview data improves the capture of hospital quality and safety indicators compared with incident reporting systems alone. An additional aim was to determine the potential characteristics influencing under-reporting of hospital quality and safety indicators in incident reporting systems. Methods: A prospective, observational study was performed at two tertiary metropolitan public hospitals. Research assistants from allied health backgrounds met daily with the nurse in charge of the ward and discussed the occurrence of any falls, pressure injuries and rapid response medical team calls. Data were collected from four general medical wards, four surgical wards, and one each of orthopaedic, neurosciences, plastics, respiratory, renal, sub-acute, and acute medical assessment units. Results: An estimated total of 303 falls, 221 pressure injuries and 884 rapid response medical team calls occurred across the 15 wards of the two hospitals over a period of 6 months. Hospital incident reporting systems underestimated falls by 30.0%, pressure injuries by 59.3% and rapid response medical team calls by 17.0%. The use of ward interview data collection in addition to hospital incident reporting systems improved data capture of falls by 23.8% (n = 72), pressure injuries by 21.7% (n = 48) and rapid response medical team calls by 12.7% (n = 112). Fall events were significantly less likely to be reported if they occurred on a Monday (P = 0.04), and pressure injuries were significantly more likely to be reported if they occurred on a Wednesday (P = 0.01). Conclusions: Hospital quality and safety indicators (falls, pressure injuries and rapid response medical team calls) were under-reported in incident reporting systems, with variability in under-reporting between wards and by the day of event occurrence. The use of ward interview data collection in addition to hospital incident reporting systems improved reporting of hospital quality and safety indicators. abstract_id: PUBMED:25766064 Measuring ward round quality in urology. Background: Ward rounds are the traditional process by which clinical information is interpreted and management plans made in the inpatient setting, and the only time during which patient-doctor interaction can reliably occur. Efforts to improve quality and safety have started looking at the ward round, but this has mainly been in the acute medical setting. Objective: To begin the quality improvement process for urological ward rounds. Methods: Twenty indicators thought to relate to quality were recorded for every weekday ward round by the urology team for one month. Results: Twenty ward rounds, comprising 93 patient encounters, were reviewed. A consultant was present for 37% of the patient encounters. Observation charts were reviewed in 84% of encounters, drug charts in 28%, and antibiotics in 70%. Plans were communicated to the doctors, patient and nursing staff. All notes were typed directly onto the electronic system; 20% of notes were checked by the lead clinician. Mean time per patient was 6 minutes. Conclusions: By starting a discussion about ward rounds, we aim to align the process with the broader values of the organisation.
Ward rounds can be the cornerstone of delivering safe, clean and personal care, and measuring this process is vital to understanding efforts to improve them. abstract_id: PUBMED:24320819 Health-related quality of life and functional status quality indicators for older persons with multiple chronic conditions. Objectives: To explore central challenges with translating self-reported measurement tools for functional status and health-related quality of life (HRQOL) into ambulatory quality indicators for older people with multiple chronic conditions (MCCs). Design: Review. Setting: Sources including the National Quality Measures Clearinghouse and National Quality Forum were reviewed for existing ambulatory quality indicators relevant to functional status, HRQOL, and people with MCCs. Participants: Seven informants with expertise in indicators using functional status and HRQOL. Measurements: Informant interviews were conducted to explore knowledge about these types of indicators, particularly usability and feasibility. Results: Nine important existing indicators were identified in the review. Identified process indicators addressed whether providers assessed functional status; outcome indicators addressed quality of life. In interviews, informants agreed that indicators using self-reported data were important in this population. Challenges identified included concerns about usability, due to an inability to discriminate quality of care adequately between organizations, and feasibility concerns regarding high data collection burden, with a correspondingly low response rate. Validity was also a concern because evidence is mixed that healthcare interventions can improve HRQOL or functional status for this population. As a possible first step, a structural standard could be systematic collection of these measures in a specific setting. Conclusion: Although functional status and HRQOL are important outcomes for older people with MCCs, few relevant ambulatory quality indicators exist, and there are concerns with usability, feasibility, and validity. Further research is needed on how best to incorporate these outcomes into quality indicators for people with MCCs. abstract_id: PUBMED:31017060 Hospital quality indicators: a systematic review. Purpose: The purpose of this paper is to identify and describe hospital quality indicators, classifying them according to Donabedian's structure, process and outcome model and in specific domains (quality, safety, infection and mortality) in two care divisions: inpatient and emergency services. Design/methodology/approach: A systematic review identified hospital clinical indicators. Two independent investigators evaluated 70 articles/documents located in electronic databases and nine documents from the grey literature; 35 were included in the systematic review. Findings: In total, 248 hospital-based indicators were classified into the infection, safety, quality and mortality domains. Only 10.2 percent were identified in more than one article/document, and 47 percent showed how they were calculated/obtained. Although there are scientific papers on developing, validating, and assessing hospital indicators, most indicators were obtained from technical reports, government publications or health professional associations. Research Limitations/implications: This review identified several hospital structure, process and outcome quality indicators, which are used by different national and international groups in both research and clinical practice.
Comparing performance between healthcare organizations was difficult. Common clinical care standard indicators used by different networks, programs and institutions are essential to hospital quality benchmarking. Originality/value: To the authors' knowledge, this is the first systematic review to identify and describe hospital quality indicators after a comprehensive search in MEDLINE/PubMed and other databases, as well as the grey literature, aiming to identify as many indicators as possible. Few studies evaluate the indicators; most are found only in the grey literature and have been published mostly by government agencies. Documents published in scientific journals usually refer to a specific indicator or to constructing an indicator. However, the indicators most commonly found are not supported by reliability or validity studies. abstract_id: PUBMED:25380608 Examining the validity of AHRQ's patient safety indicators (PSIs): is variation in PSI composite score related to hospital organizational factors? Increasing use of the Agency for Healthcare Research and Quality's Patient Safety Indicators (PSIs) for hospital performance measurement intensifies the need to critically assess their validity. Our study examined the extent to which variation in PSI composite score is related to differences in hospital organizational structures or processes (i.e., criterion validity). In site visits to three Veterans Health Administration hospitals with high and three with low PSI composite scores ("low performers" and "high performers," respectively), we interviewed a cross-section of hospital staff. We then coded interview transcripts for evidence in 13 safety-related domains and assessed variation across high and low performers. Evidence of leadership and coordination of work/communication (organizational process domains) was predominantly favorable for high performers only. Evidence in the other domains was either mixed, or there were insufficient data to rate the domains. While we found some evidence of criterion validity, the extent to which variation in PSI rates is related to differences in hospitals' organizational structures/processes needs further study. abstract_id: PUBMED:33726709 Development of quality indicators to measure pre-hospital emergency medical services for road traffic injury. Background: Pre-Hospital Emergency Care (PEC) is a fundamental component of prevention of Road Traffic Injuries (RTIs). Thus, this sector requires a system for evaluation and performance improvement. This study aimed to develop quality indicators to measure PEC for RTIs. Methods: Following a review of the related literature, 14 experts were interviewed through semi-structured interviews to identify Quality Measurement Indicators (QMIs). The extracted indicators were then categorized into three domains: structure, performance, and management. Finally, the identified QMIs were confirmed through two rounds of the Delphi technique. Results: Through the literature review, 11 structural, 13 performance, and 4 managerial indicators (a total of 28) were identified. A further 4 structural, 4 performance, and 3 managerial indicators (a total of 11) were extracted from the expert interviews. Two indicators were excluded after two rounds of the Delphi technique. Finally, 14 structural, 16 performance, and 7 managerial indicators (a total of 37) were retained.
Conclusion: Due to the importance and high proportion of RTIs compared to other types of injuries, this study set out to design and evaluate QMIs for the PEC delivered for RTIs. The findings of this research contribute to measurement and planning aimed at improving the performance of PEC. abstract_id: PUBMED:19155288 Using quality indicators to improve hospital care: a review of the literature. Purpose: To review the literature concerning strategies for implementing quality indicators in hospital care, and their effectiveness in improving the quality of care. Data Sources: A systematic literature study was carried out using MEDLINE and the Cochrane Library (January 1994 to January 2008). Study Selection: Hospital-based trials studying the effects of using quality indicators as a tool to improve quality of care. Data Extraction: Two reviewers independently assessed studies for inclusion, and extracted information from the included studies regarding the health care setting, type of implementation strategy, and their effectiveness as a tool to improve quality of hospital care. Results: A total of 21 studies were included. The most frequently used implementation strategies were audit and feedback. The majority of these studies focused on care processes rather than patient outcomes. Six studies evaluated the effects of the implementation of quality indicators on patient outcomes. In four studies, quality indicator implementation was found to be ineffective; in one, partially effective; and in one, effective. Twenty studies focused on care processes, and most reported significant improvement for at least part of the measured process indicators. The implementation of quality indicators in hospitals is most effective if feedback reports are given in combination with an educational implementation strategy and/or the development of a quality improvement plan. Conclusion: Effective strategies to implement quality indicators in daily practice in order to improve hospital care do exist, but there is considerable variation in the methods used and the level of change achieved. Feedback reports combined with another implementation strategy seem to be most effective. abstract_id: PUBMED:31732592 The process of clinical consultation is crucial to patient outcomes and safety: 10 quality indicators. In this paper, I discuss the central importance of the clinical consultation to defining the potential outcomes for an episode of care. The consultation is also crucial to patient safety. Yet the processes of clinical consultations on ward rounds and in outpatient clinics have attracted little attention in terms of ergonomics or research. I propose 10 quality indicators that would be simple to measure and could be used to improve clinical consultations. Better consultations would reduce the possibility of misdiagnosis and also improve patient outcomes, patient experience, patient safety and staff satisfaction. It is high time we improved clinical consultations in hospital settings. abstract_id: PUBMED:25018098 An approach to care indicators benchmarking. Learning to improve patient safety. Unlabelled: Improvements in clinical safety can be achieved by promoting a safety culture, professional training, and learning through benchmarking. The aim of this study was to identify areas for improvement after analysing the safety indicators in two public hospitals in the north-west Madrid region.
Material And Methods: Descriptive study performed during 2011 in Hospital Universitario Puerta de Hierro Majadahonda (HUPHM) and Hospital de Guadarrama (HG). The variables under study were 40 indicators on nursing care related to patient safety. Nineteen of them were defined in the SENECA project as care quality standards in order to improve patient safety in the hospitals. The data sources were clinical histories, Madrid Health Service assessment reports, care procedures, and direct observation. Results: Of the 40 indicators, 22 were structure indicators (procedures), of which HUPHM met 86% and HG 95%; 14 were process indicators (training and protocol compliance), with similar results in both hospitals apart from continuity-of-care reports and hand hygiene training; and the 4 outcome indicators (pressure ulcers, falls, and pain) showed differing results. Conclusions: The analysis of the indicators allowed the following actions to be taken: to identify improvements to be made in each hospital, to develop joint safety recommendations in nursing care protocols for the prevention and treatment of chronic wounds, to establish systematic pain assessments, and to prepare continuity-of-care reports on all patients transferred from HUPHM to HG. abstract_id: PUBMED:27733194 Are performance indicators used for hospital quality management: a qualitative interview study amongst health professionals and quality managers in The Netherlands. Background: Hospitals are under increasing pressure to share indicator-based performance information. These indicators can also serve as a means to promote quality improvement and boost hospital performance. Our aim was to explore hospitals' use of performance indicators for internal quality management activities. Methods: We conducted a qualitative interview study among 72 health professionals and quality managers in 14 acute care hospitals in The Netherlands. Concentrating on orthopaedic and oncology departments, our goal was to gain insight into data collection and use of performance indicators for two conditions: knee and hip replacement surgery and breast cancer surgery. The semi-structured interviews were recorded and summarised. Based on the data, themes were synthesised and the analyses were executed systematically by two analysts independently. The findings were validated through comparison. Results: The hospitals we investigated collect data for performance indicators in different ways. Similarly, these hospitals have different ways of using such data to support their quality management, while some do not seem to use the data for this purpose at all. Factors like 'linking pin champions', pro-active quality managers and engaged medical specialists seem to make a difference. In addition, a comprehensive hospital data infrastructure with electronic patient records and robust data collection software appears to be a prerequisite to produce reliable external performance indicators for internal quality improvement. Conclusions: Hospitals often fail to use performance indicators as a means to support internal quality management. Such data, then, are not used to their full potential. Hospitals are recommended to focus their human resource policy on 'linking pin champions', the engagement of professionals and a pro-active quality manager, and to invest in a comprehensive data infrastructure. Furthermore, the differences in data collection processes between Dutch hospitals make it difficult to draw comparisons between outcomes of performance indicators.
Answer: Yes, daily ward interviews improve the measurement of hospital quality and safety indicators. A study found that the use of ward interview data collection in addition to hospital incident reporting systems improved the capture of falls by 23.8%, pressure injuries by 21.7%, and rapid response medical team calls by 12.7%. The study concluded that hospital quality and safety indicators were under-reported in incident reporting systems, and the addition of ward interview data collection improved reporting of these indicators (PUBMED:27291891).
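The percentages in this answer follow directly from the event counts reported in PUBMED:27291891: 303 falls, 221 pressure injuries, and 884 rapid response calls in total, of which 72, 48, and 112, respectively, were captured only through the daily ward interviews. Below is a minimal sketch reproducing that arithmetic; the function and variable names are illustrative assumptions, not taken from the study.

def added_capture_share(extra_events: int, total_events: int) -> float:
    # Percentage of all events captured only via the daily ward interviews.
    return round(100 * extra_events / total_events, 1)

totals = {"falls": 303, "pressure injuries": 221, "rapid response calls": 884}
interview_only = {"falls": 72, "pressure injuries": 48, "rapid response calls": 112}

for indicator, total in totals.items():
    print(indicator, added_capture_share(interview_only[indicator], total))
# falls 23.8, pressure injuries 21.7, rapid response calls 12.7 -- matching the abstract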
Instruction: An Observational Study on a Protocol for Withdrawal of Life-Sustaining Measures on Two Non-Academic Intensive Care Units in The Netherlands: Few Signs of Distress, No Suffering? Abstracts: abstract_id: PUBMED:26335762 An Observational Study on a Protocol for Withdrawal of Life-Sustaining Measures on Two Non-Academic Intensive Care Units in The Netherlands: Few Signs of Distress, No Suffering? Context: Because anticipation of death is common within the intensive care unit, attention must be paid to the prevention of distressing signs and symptoms, enabling the patient to die peacefully. In the relevant studies on this subject, there has been a lack of focus on measuring determinants of comfort in this population. Objectives: To evaluate whether dying without distressing signs after the withdrawal of life-sustaining measures is possible using a newly introduced protocol and to analyze the potential influence of opioids and sedatives on time till death. Methods: This was a prospective observational study in two nonacademic Dutch intensive care units after the introduction of a national protocol for end-of-life care. The study lasted two years and included adult patients in whom mechanical ventilation and/or vasoactive medication was withdrawn. Exclusion criteria included all other causes of death. Results: During the study period, 450 patients died; of these, 305 patients were eligible, and 241 were included. Ninety percent of patients were well sedated before and after withdrawal. Severe terminal restlessness, death rattle, or stridor was seen in less than 6%. Dosages of opioids and sedatives increased significantly after withdrawal, but did not contribute to a shorter time till death according to the regression analysis. Conclusion: The end-of-life protocol seems effective in realizing adequate patient comfort. Most patients in whom life-sustaining measures are withdrawn are well sedated and show few signs of distress. Dosages of opioids and sedatives increase significantly during treatment withdrawal but do not contribute to time until death. Dying with a minimum of distressing signs is thus practically possible and ethically feasible. abstract_id: PUBMED:25680980 Exploration of Withdrawal of Life-Sustaining Therapy in Canadian Intensive Care Units. Objective: The process of controlled donation after circulatory death (cDCD) is strongly connected with the process of withdrawal of life-sustaining therapy. In addition to impacting cDCD success, actions comprising withdrawal of life-sustaining therapy have implications for quality of palliative care. We examined pilot study data from Canadian intensive care units to explore current practices of life-sustaining therapy withdrawal in nondonor patients and described variability in standard practice. Design: Secondary analysis of observational data collected for the Determination of Death Practices in Intensive Care pilot study. Setting: Four Canadian adult intensive care units. Patients: Patients ≥18 years in whom a decision to withdraw life-sustaining therapy was made and whose substitute decision makers consented to study participation. Organ donors were excluded. Interventions: None. Measurements And Main Results: Prospective observational data on interventions withdrawn, drugs administered, and timing of life-sustaining therapy withdrawal were available for 36 patients who participated in the pilot study. Of the patients, 42% died in ≤1 hour; median length of time to death varied between intensive care units (39-390 minutes).
Withdrawal of life-sustaining therapy processes appeared to follow a general pattern of vasoactive drug withdrawal followed by withdrawal of mechanical ventilation and extubation in most sites, but specific steps varied. Approaches to extubation and weaning of vasoactive drugs were not consistent. Protocols detailing the process of life-sustaining therapy withdrawal were available for 3 of 4 sites and also exhibited differences across sites. Conclusions: Standard practice of life-sustaining therapy withdrawal appears to differ between selected Canadian sites. Variability in withdrawal of life-sustaining therapy may have a potential impact both on rates of cDCD success and on quality of palliative care. abstract_id: PUBMED:31735565 Intensive care nurses' experiences of withdrawal of life-sustaining treatments in intensive care patients: A qualitative study. Objective: To explore the experience of intensive care nurses when participating in the withdrawal of life-sustaining treatments from intensive care unit patients. Design And Methods: A qualitative descriptive and explorative design. Data were collected in 2017 and 2018 by interviewing nine intensive care nurses. The data were analysed by using systematic text condensation. Setting: The nine intensive care nurses interviewed worked in four different intensive care units located in one university hospital and one local hospital. Main Outcome Measures: Experiences when participating in the process of withdrawing life-sustaining treatments. Findings: Three categories emerged from the data analysis: ICU nurses' experiences of stress in the process of treatment withdrawal; a requirement for interdisciplinary support and cooperation; and elements to achieve a dignified treatment withdrawal process. Conclusion: The intensive care nurses experienced challenges and emotional reactions when patients were overtreated or when they had to participate in treatments they did not agree with. They considered debriefings to be helpful in dealing with emotions. Thorough planning, good communication, pain relief, and the creation of a peaceful environment were perceived as important elements in achieving a dignified treatment withdrawal process. abstract_id: PUBMED:37306034 Continuous and deep sedation until death after a decision to withdraw life-sustaining therapies in intensive care units: A national survey. Background: Continuous and deep sedation until death is a highly debated end-of-life practice. France is unique in having a regulatory framework for it. However, there are no data on its practice in intensive care units (ICUs). Aim: The aim is to describe continuous and deep sedation in relation to the framework in the specific context of withdrawal of life-sustaining therapies in ICUs, that is, its decision-making process and its practice compared to other end-of-life practices in this setting. Design And Setting: French multicenter observational study of consecutive ICU patients who died after a decision to withdraw life-sustaining therapies. Results: A total of 343 patients in 57 ICUs were studied, 208 (60%) of whom received continuous and deep sedation. A formalized procedure for continuous and deep sedation was available in 32% of the ICUs. Continuous and deep sedation was not the result of a collegial decision-making process in 17% of cases, and did not involve consultation with an external physician in 29% of cases. The most commonly used sedative medicines were midazolam (10 [5-18] mg/h) and propofol (200 [120-250] mg/h).
The Richmond Agitation Sedation Scale (RASS) score was -5 in 60% of cases. Analgesia was associated with sedation in 94% of cases. Compared with other end-of-life sedative practices (n = 98), medicine doses were higher, with no difference in the depth of sedation. Conclusions: This study shows poor compliance with the framework for continuous and deep sedation. It highlights the need to formalize it to improve the decision-making process and the match between the intent, the practice and the actual effect. abstract_id: PUBMED:32597309 Terminal withdrawal of mechanical ventilation in adult intensive care units: A systematic review and narrative synthesis of perceptions, experiences and practices. Background: During the terminal withdrawal of life-sustaining measures for intensive care patients, the removal of respiratory support remains an ambiguous practice. Globally, perceptions and experiences of best practice vary due to the limited evidence in this area. Aim: To identify, appraise and synthesise the latest evidence around terminal withdrawal of mechanical ventilation in adult intensive care units specific to perceptions, experiences and practices. Design: Mixed methods systematic review and narrative synthesis. A review protocol was registered on PROSPERO (CRD42018086495). Data Sources: Four electronic databases were systematically searched (Medline, Embase, CENTRAL and CINAHL). Obtained articles published between January 2008 and January 2020 were screened for eligibility. All included papers were appraised using relevant appraisal tools. Results: Twenty-five papers were included in the review. Findings from the included papers were synthesised into four themes: 'clinicians' perceptions and practices'; 'time to death and predictors'; 'analgesia and sedation practices'; and 'physiological and psychological impact'. Conclusions: Perceptions, experiences and practices of terminal withdrawal of mechanical ventilation vary significantly across the globe. Current knowledge highlights that the time to death after withdrawal of mechanical ventilation is very short. Predictors for shorter duration could be considered by clinicians and guide the choice of pharmacological interventions to address distressing symptoms that patients may experience. Clinicians ought to prepare patients, families and relatives for the withdrawal process and the expected progression, and provide them with immediate and long-term support following withdrawal. Further research is needed to improve current evidence and better inform practice guidelines. abstract_id: PUBMED:35954876 The Impact of Signing Do-Not-Resuscitate Orders on the Use of Non-Beneficial Life-Sustaining Treatments for Intensive Care Unit Patients: A Retrospective Study. Background: Intensive care medical technology increases the survival rate of critically ill patients. However, life-sustaining treatments also increase the probability of non-beneficial medical treatments given to patients at the end of life. Objective: This study aimed to analyse whether patients with a do-not-resuscitate (DNR) order were more likely to be subject to the withholding of cardiac resuscitation and withdrawal of life-sustaining treatment in the ICU. Methods: This retrospective study collected data regarding the demographics, illness conditions, and life-sustaining treatments of ICU patients who were last admitted to the ICU between 1 January 2016 and 31 December 2017, as determined by the hospital's electronic medical dataset.
Results: We identified and collected data on 386 patients over the two years; 319 (82.6%) signed a DNR before the end of life. The study found that DNR patients were less likely to receive cardiac resuscitation before death than non-DNR patients; the cardiac resuscitation treatments included chest compressions, electric shock, and cardiotonic drug injections (p < 0.001). However, life-sustaining treatments were withdrawn for only a few patients before death. The study highlights that an early documented DNR order is essential, and that promoting discussions of time-limited trials might help terminally ill ICU patients withdraw from non-beneficial life-sustaining treatments. abstract_id: PUBMED:32859185 Withholding and withdrawal of life-sustaining treatments in intensive care units in Lebanon: a cross-sectional survey of intensivists and interviews of professional societies, legal and religious leaders. Background: Little is known about the attitudes and practices of intensivists working in Lebanon regarding withholding and withdrawing life-sustaining treatments (LSTs). The objectives of the study were to assess the points of view and practices of intensivists in Lebanon, along with the opinions of medical, legal and religious leaders, regarding withholding and withdrawal of life-sustaining treatments in Lebanese intensive care units (ICUs). Methods: A web-based survey was conducted among intensivists working in Lebanese adult ICUs. Interviews were also done with Lebanese medical, legal and religious leaders. Results: Of the 229 survey recipients, 83 intensivists completed it, i.e., a response rate of 36.3%. Most respondents were between 30 and 49 years old (72%), Catholic Christians (60%), anesthesiologists (63%), working in Beirut (47%). Ninety-two percent of them were familiar with the withholding and withdrawal concepts and 80% applied them. Poor prognosis of the acute and chronic disease and futile therapy were the main reasons to consider withholding and withdrawal of treatments. Ninety-five percent of intensivists agreed with the "Principle of Double Effect" (i.e., adding analgesia and/or sedation after the withholding/withdrawal decisions in order to prevent patients' suffering and allow their comfort, even though it might hasten the dying process). The main withheld therapies were vasopressors, respiratory assistance and CPR. Most of the respondents reported the decision was often-to-always multidisciplinary (92%), involving the family (68%), and the patient (65%), or his advance directives (77%) or his surrogate (81%), and the nurses (78%). The interviewees agreed there was a law governing withholding and withdrawal decisions/practices in Lebanon. Christian and Muslim Sunni leaders declared accepting those practices (withholding or withdrawing LSTs from patients when appropriate). Conclusion: Withholding and withdrawal of LSTs in the ICU are known concepts among intensivists working in Lebanon and are being practiced. Our results could be used to inform and optimize therapeutic limitation in ICUs in the country. abstract_id: PUBMED:36963435 End-of-life care in Brazilian Pediatric Intensive Care Units. Objective: Most deaths in Pediatric Intensive Care Units involve forgoing life-sustaining treatment. Such deaths require carefully planned end-of-life care built on compassion and focused on palliative care measures.
This study aims to assess topics related to the end of life care in Brazilian pediatric intensive care units from the perspective of a multidisciplinary team. Method: The authors used a tested questionnaire, utilizing Likert-style and open-ended questions. After ethics committee approval, it was sent by email from September to November/2019 to three Pediatric Intensive Care Units in the South and Southeast of Brazil. One unit was exclusively dedicated to oncology patients; the others were mixed units. Results: From 144 surveys collected (23% response rate) 136 were analyzed, with 35% physicians, 30% nurses, 21% nurse technicians, and 14% physiotherapists responding. Overall, only 12% reported enough end-of-life care training and 40% reported never having had any, albeit this was not associated with the physician's confidence in forgoing life-sustaining treatment. Furthermore, 60% of physicians and 46% of other professionals were more comfortable with non-escalation than withdrawing therapies, even if this could prolong suffering. All physicians were uncomfortable with palliative extubation; 15% of all professionals have witnessed it. The oncologic team uniquely felt that "resistance from the teams of specialists" was the main barrier to end-of-life care implementation. Conclusion: Most professionals felt unprepared to forego life-sustaining treatment. Even for terminally ill patients, withholding is preferred over the withdrawal of treatment. Socio-cultural barriers and the lack of adequate training may be contributing to insecurity in the care of terminally ill patients, diverging from practices in other countries. abstract_id: PUBMED:30249115 Seeing the Patient and Family Through: Nurses and Physicians Experiences With Withdrawal of Life-Sustaining Therapy in the ICU. Withdrawal of life-sustaining therapy at the end of life is a complex phenomenon. Intensive care nurses and physicians are faced with caring for patients and supporting families, as these difficult decisions are made. The purpose of this study was to explore and describe the experience of critical care nurses and physicians participating in the process of withdrawal of life-sustaining therapy. A hermeneutic phenomenological approach was used to guide this qualitative investigation. Interviews were conducted with critical care nurses and physicians from 2 medical centers. An inductive approach to data analysis was used to understand similarities between the nurses and the physicians' experiences. Methodological rigor was established, and data saturation was achieved. The main categories that were inductively derived from the data analysis included from novice to expert, ensuring ethical care, uncertainty to certainty, facilitating the process, and preparing and supporting families. The categories aided in understanding the experiences of nurses and physicians, as they worked individually and together to see patients and families through the entire illness experience, withdrawal of life-sustaining therapy decision-making process and dying process. Understanding the perspectives of health-care providers involved in the withdrawal of life-sustaining therapy process will help other health-care providers who are striving to provide quality care to the dying and to their families. abstract_id: PUBMED:15750799 Withholding and withdrawal of life-sustaining treatment in a Lebanese intensive care unit: a prospective observational study. 
Objective: To evaluate the implementation and process of withholding and withdrawing life-sustaining treatment in an intensive care unit. Design And Setting: Prospective observational study in the medical intensive care unit of a university hospital in Lebanon. Patients: Forty-five consecutive adult patients admitted to the ICU for a 1-year period and for whom a decision to withholding and withdrawal of life-sustaining treatment was made. Measurements And Results: Patients were followed up until their death. Data regarding all aspects of the implementation and the process of withholding and withdrawal of life-sustaining treatment were recorded by a senior staff nurse. Withholding and withdrawing life-sustaining treatment was applied to 9.6% of all admitted patients to ICU. Therapies were withheld in 38% and were withdrawn in 7% of patients who died. Futility of care and poor quality of life were the two most important factors supporting these decisions. The nursing staff was not involved in 26% of the decisions to limit care. Families were not implicated in 21% of the cases. Decisions were not notified in the patients' medical record in 23% of the cases. Sixty-three percent of patients did not have a sedative or an analgesic to treat discomfort during end-of-life care. Conclusions: Life-sustaining treatment were frequently withheld or withdrawn from adult patients in the Lebanese ICU. Cultural differences and the lack of guidelines and official statements could explain the ethical limitations of the decision-making process recorded in this study. Answer: The observational study conducted in two non-academic intensive care units in the Netherlands evaluated the effectiveness of a newly introduced protocol for the withdrawal of life-sustaining measures, with a focus on patient comfort and the potential influence of opioids and sedatives on the time until death (PUBMED:26335762). The study included adult patients who had mechanical ventilation and/or vasoactive medication withdrawn, excluding other causes of death. Out of 450 patients who died during the study period, 241 were included in the study. The results showed that 90% of patients were well sedated both before and after the withdrawal of life-sustaining measures. Fewer than 6% of patients exhibited severe terminal restlessness, death rattle, or stridor, indicating minimal signs of distress. The dosages of opioids and sedatives increased significantly after withdrawal, but regression analysis indicated that these increases did not contribute to a shorter time until death. The conclusion drawn from the study was that the end-of-life protocol appeared to be effective in ensuring adequate patient comfort. The majority of patients in whom life-sustaining measures were withdrawn were well sedated and displayed few signs of distress. The findings suggest that dying with a minimum of distressing signs is practically possible and ethically feasible within the context of the studied protocol in the Netherlands (PUBMED:26335762).
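Methodological aside: the time-to-death finding summarized in this answer rests on survival-style regression. The sketch below shows one conventional way to set such an analysis up in Python with the lifelines package; it is a minimal illustration on invented toy data, and every column name is a hypothetical placeholder rather than a variable from the cited studies.

```python
# Toy Cox proportional-hazards sketch: does post-withdrawal dose escalation
# predict a shorter time until death? Hypothetical data and column names.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_to_death_h":    [1.5, 4.0, 0.5, 12.0, 2.5, 6.0, 3.0, 9.0],  # hours
    "died":               [1, 1, 1, 1, 1, 0, 1, 1],                   # 0 = censored
    "opioid_increase_mg": [10, 0, 25, 5, 15, 0, 20, 5],
    "age":                [71, 64, 80, 58, 69, 75, 77, 62],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_death_h", event_col="died")
cph.print_summary()  # a hazard ratio near 1 for opioid_increase_mg would
                     # argue against dose escalation hastening death
```

A hazard ratio for the dose variable whose confidence interval spans 1 is the pattern consistent with the conclusion quoted above.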
Instruction: Is the "salvaged-cell syndrome" myth or reality? Abstracts: abstract_id: PUBMED:8795524 Is the "salvaged-cell syndrome" myth or reality? Background: Intraoperative autotransfusion (IAT) has been implicated in anecdotal cases and experimental models to precipitate, aggravate, or exacerbate a coagulopathy. This study assesses this hypothesis. Methods: A retrospective database review of over 36,000 multispecialty cases of IAT during an 18-year experience was conducted with special reference to the occurrence of coagulopathy (disseminated intravascular coagulation [DIC]) in association with adult respiratory distress syndrome (ARDS). Results: The incidence of coagulopathy was low (0.05%). A total of 18 cases of DIC/ARDS were identified: 10 associated with ruptured aneurysms, 6 following massive trauma, and 2 after complex redo cardiac surgery. All 18 patients suffered shock and profound hypothermia. The mean transfusion requirement was 28 units. The mortality was 100%. Conclusion: Although some degree of bleeding and clotting disorders are not uncommon in major cases, in our experience coagulopathy occurs infrequently and is a result of a complex interaction of shock, hypothermia, and multiple transfusions. It is our contention that these factors trigger the DIC, not the autotransfusor, and that the ARDS results from reperfusion injury following a profound ischemic event, associated in many cases with multiorgan failure. abstract_id: PUBMED:31890103 Subacromial Impingement Syndrome of the Shoulder: A Musculoskeletal Disorder or a Medical Myth? Subacromial impingement syndrome (SAIS) is a commonly diagnosed disorder of the shoulder. Though this disorder has been known for a long time, it remains a poorly understood entity. Over the years several hypotheses have been put forward to describe the pathogenesis of SAIS but no clear explanation has been found. Two mechanisms, the extrinsic and intrinsic mechanism, have been described for the impingement syndrome. The intrinsic mechanism theories which deny the existence of impingement are gaining popularity in recent years. The various shoulder tests used to diagnose SAIS have low specificity with an average of about 50%. Meta-analysis shows that neither the Neer sign nor the Hawkins sign has diagnostic utility for impingement syndrome. Several randomised controlled trials have shown that the outcome of treatment of SAIS by surgery is no better than conservative treatment. Physiotherapy alone can provide a good outcome which is comparable to that achieved with surgery, without the costs and complications associated with surgery. Since decompression with surgery does not provide any additional benefits as compared to conservative treatment for patients with SAIS, the impingement theory has become antiquated and surgical treatment should have no role in the treatment of such patients. There are calls by some practitioners to abandon the term impingement syndrome and rename it as anterolateral shoulder pain syndrome. It appears that SAIS is a medical myth. Others have called SAIS a clinical illusion. abstract_id: PUBMED:29564979 Levosimendan for Perioperative Cardioprotection: Myth or Reality? Background: Levosimendan is a calcium sensitizer drug causing increased contractility in the myocardium and vasodilation in the vascular system. It is mainly used for the therapy of acute decompensated heart failure.
Several studies on animals and humans provided evidence of the cardioprotective properties of levosimendan, including preconditioning and anti-apoptotic effects. In view of these favorable effects, levosimendan has been tested in patients undergoing cardiac surgery for the prevention or treatment of low cardiac output syndrome. However, initial positive results from small studies have not been confirmed in three recent large trials. Aim: To summarize levosimendan mechanisms of action and clinical use and to review available evidence on its perioperative use in a cardiac surgery setting. Methods: We searched two electronic medical databases for randomized controlled trials studying levosimendan in cardiac surgery patients, ranging from January 2000 to August 2017. Meta-analyses, consensus documents and retrospective studies were also reviewed. Results: In the selected interval of time, 54 studies on the use of levosimendan in heart surgery have been performed. Early small size studies and meta-analyses have suggested that perioperative levosimendan infusion could diminish mortality and other adverse outcomes (i.e. intensive care unit stay and need for inotropic support). Instead, three recent large randomized controlled trials (LEVO-CTS, CHEETAH and LICORN) showed no significant survival benefits from levosimendan. However, in the LEVO-CTS trial, prophylactic levosimendan administration significantly reduced the incidence of low cardiac output syndrome. Conclusions: Based on the most recent randomized controlled trials, levosimendan, although effective for the treatment of acute heart failure, cannot be recommended as standard therapy for the management of heart surgery patients. Further studies are needed to clarify whether selected subgroups of heart surgery patients may benefit from perioperative levosimendan infusion. abstract_id: PUBMED:30071572 Current Status of the Use of Salvaged Blood in Metastatic Spine Tumour Surgery. To review the current status of salvaged blood transfusion (SBT) in metastatic spine tumour surgery (MSTS), with regard to its safety and efficacy, contraindications, and adverse effects. We also aimed to establish that the safety and adverse event profile of SBT is comparable and at least equal to that of allogeneic blood transfusion. MEDLINE and Scopus were used to search for relevant articles, based on keywords such as "cancer surgery," "salvaged blood," and "circulating tumor cells." We found 159 articles, of which 55 were relevant; 20 of those were excluded because they used other blood conservation techniques in addition to cell salvage. Five articles were manually selected from reference lists. In total, 40 articles were reviewed. There is sufficient evidence of the clinical safety of using salvaged blood in oncological surgery. SBT decreases the risk of postoperative infections and tumour recurrence. However, there are some limitations regarding its clinical applications, as it cannot be employed in cases of sepsis. In this review, we established that earlier studies supported the use of salvaged blood from a cell saver in conjunction with a leukocyte depletion filter (LDF). Furthermore, we highlight the recent emergence of sufficient evidence supporting the use of intraoperative cell salvage without an LDF in MSTS.
We aimed to identify A) whether three commonly used myth-busting formats are effective for correcting COVID-19 myths, immediately and after a delay, and B) which is the most effective. Methods: We tested whether three common correction formats could reduce beliefs in COVID-19 myths: (i) question-answer, (ii) fact-only, (iii) fact-myth. n = 2215 participants (n = 1291 after attrition), UK representative of age and gender, were randomly assigned to one of the three formats. n = 11 myths were acquired from fact-checker websites and piloted to ensure believability. Participants rated myth belief at baseline, were shown correction images (the intervention), and then rated myth beliefs immediately post-intervention and after a delay of at least 6 days. A partial replication, n = 2084 UK representative, was also completed with immediate myth rating only. Analysis used mixed models with participants and myths as random effects. Results: Myth agreement ratings were significantly lower than baseline for all correction formats, both immediately and after the delay; all β's > 0.30, p's < .001. Thus, all formats were effective at lowering beliefs in COVID-19 misinformation. Correction formats only differed where baseline myth agreement was high, with question-answer and fact-myth more effective than fact-only immediately; β = 0.040, p = .022 (replication set: β = 0.053, p = .0075) and β = -0.051, p = .0059 (replication set: β = -0.061, p < .001), respectively. After the delay, however, question-answer was more effective than fact-myth, β = 0.040, p = .031. Conclusion: Our results imply that COVID-19 myths can be effectively corrected using materials and formats typical of health campaigns. Campaign designers can use our results to choose between correction formats. When myth belief was high, question-answer format was more effective than a fact-only format immediately post-intervention, and after delay, more effective than fact-myth format. abstract_id: PUBMED:7203667 Myth and reality: the older person's relationship to his past. To study relationship to the past, reminiscence data were gathered from thirty-five persons, sixty-five to 103 years old, and twenty-five middle-aged persons, forty-four to fifty-five years old. It was hypothesized that the elderly would not only be more involved in their past but also mythicize their past as reflected in ratings of dramatization, consistency and certainty. Contrasts between the two samples revealed significantly greater involvement and dramatization but no differences in consistency, certainty or reconciliation. The mythicizing of significant figures, evidenced in reminiscence among the elderly, can be interpreted as an adaptational response that is different from making sense out of one's life. Later in life the past becomes unique, and in the myth becoming the reality, one's life becomes justified. abstract_id: PUBMED:33218550 Virtual reality in the rehabilitation process for individuals with cerebral palsy and Down syndrome: A systematic review. Background: Childhood neurological diseases result in neuromotor impairment, which affects selective motor control, compromising the acquisition of motor skills and functional independence. The positive results achieved with virtual reality are believed to be related to training in an interactive environment that provides a broad range of activities and scenarios with multiple sensory channels, enabling the creation of exercises at an intensity based on individual need.
Therefore, a review was conducted to answer the following question: What are the possible effects of virtual reality for the pediatric population, specifically children with cerebral palsy and Down syndrome? Objective: The objective of the present study was to conduct a systematic review of the literature to determine the possible effects of virtual reality therapy in children with cerebral palsy and Down's syndrome. Methods: The PubMed, Bireme, Scielo and PEDro electronic databases were searched in the period from January to March 2016 using the following keywords: Down syndrome and virtual reality, virtual reality and cerebral palsy, virtual reality and neuropediatrics, and Down's syndrome and virtual reality. Only randomized controlled trials published in English in the previous 10 years (2007-2016) that addressed the specific purpose of this review and achieved a score of at least 4 points on the PEDro methodological quality scale were eligible for inclusion. Results: The initial search led to the retrieval of 214 articles, which were analyzed considering the inclusion criteria. Eighteen articles were submitted to an appraisal of methodological quality using the PEDro scale, only five of which received a score of four or more points and were described in the present review. Three of the studies selected analyzed children with cerebral palsy and two analyzed children with Down syndrome. Despite the different physiopathological characteristics of the two conditions, the authors employed similar therapeutic methods and evaluations. The results of the studies demonstrated that virtual reality training either alone or combined with motor training leads to improvements in sensory-motor functions and can be used as a complement to other successful rehabilitation interventions in the two populations. Conclusion: Based on the results of the studies included in the present systematic review, despite differences in the characteristics of each population, the objectives and methods proposed by the authors were similar and virtual reality demonstrated promising effects for individuals with cerebral palsy and Down syndrome. abstract_id: PUBMED:32646265 Chinese University Students' Attitudes Toward Rape Myth Acceptance: The Role of Gender, Sexual Stereotypes, and Adversarial Sexual Beliefs. The present study constructs and tests models that examine the relations between the variables "gender," "sex role stereotyping," and "adversarial sexual beliefs" and rape myth acceptance. The sample is 975 Chinese university students from seven universities in China. Measures include Chinese Rape Myth Acceptance (CRMA), the Sex Role Stereotyping (SRS) Scale, and Adversarial Sexual Beliefs (ASB). We use structural equation modeling to investigate whether gender directly affects the acceptance of rape myths, or whether these influences are mediated by SRS and ASB, after controlling for several demographic characteristics. Results suggest that SRS and ASB have a direct effect on rape myth acceptance. Gender has no direct effect on rape myth acceptance in three out of the four models, but it significantly (β = -.02, p < .05) predicts the acceptance of the rape-violence myth. We also discuss the implications and limitations of the study. abstract_id: PUBMED:28953811 Genetics and Psychiatry: Myth or Reality? Greek mythology and philosophical speculations were the first human productions on madness and psychiatry. Likewise, the origins of genetics sink their roots in a very remote and difficult time.
This work tries to give an idea of the relationship between genetics and psychiatry through myth and reality. abstract_id: PUBMED:26181463 The "Beauty Myth" Is No Myth: Emphasis on Male-Female Attractiveness in World Folktales. The phenomenon of apparently greater emphasis on human female physical attractiveness has spawned an array of explanatory responses, but the great majority can be broadly categorized as either evolutionary or social constructivist in nature. Both perspectives generate distinct and testable predictions. If, as Naomi Wolf (The beauty myth: How images of female beauty are used against women. New York: William Morrow, [originally published in 1991], 2002) and others have argued, greater emphasis on female attractiveness is part of a predominantly Western "beauty myth," then an analysis of a culturally diverse sample should reveal marked fluctuation in gendered attractiveness emphasis: there should be significant numbers of cultures in which male and female attractiveness are equally emphasized, and cultures in which male attractiveness receives more emphasis. On the other hand, an evolutionary perspective suggests that disproportionate emphasis on female attractiveness will be a universal or near-universal phenomenon. To test these hypotheses, we tallied references to male versus female attractiveness in 90 collections of traditional folktales from 13 diverse cultural areas. The results are consistent with the evolutionary predictions and inconsistent with the constructivist predictions. Across culture areas, information on physical attractiveness was much more likely to be conveyed for female characters. Together with other recent studies, these results suggest that the main elements of the beauty myth are not myths: there are large areas of overlap in the attractiveness judgments of diverse populations, and cross-cultural emphasis on physical attractiveness appears to fall principally upon women. Answer: The "salvaged cell syndrome" refers to the hypothesis that intraoperative autotransfusion (IAT) can precipitate, aggravate, or exacerbate a coagulopathy. According to the study with PUBMED:8795524, the incidence of coagulopathy in over 36,000 multispecialty cases of IAT was low (0.05%). The study identified 18 cases of disseminated intravascular coagulation (DIC) and adult respiratory distress syndrome (ARDS), which were associated with conditions such as ruptured aneurysms, massive trauma, and complex redo cardiac surgery. All patients in these cases experienced shock and profound hypothermia, and the mortality rate was 100%. The conclusion drawn from this study is that coagulopathy occurs infrequently and is a result of a complex interaction of shock, hypothermia, and multiple transfusions, rather than being triggered by the autotransfusor itself. The ARDS was suggested to result from reperfusion injury following a profound ischemic event, often associated with multiorgan failure. Therefore, based on this study, the "salvaged cell syndrome" as a direct consequence of IAT appears to be more myth than reality, with other factors playing a more significant role in the development of coagulopathy and ARDS.
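Methodological aside: the 0.05% figure in this answer is a rare-event proportion (18 events in roughly 36,000 cases), and such estimates are usually reported with an exact confidence interval. The following minimal Python sketch, assuming the statsmodels library is available, shows how a Clopper-Pearson interval for that rate can be computed; it is an illustration, not part of the cited study.

```python
# Exact (Clopper-Pearson) 95% CI for a rare-event rate: 18 DIC/ARDS cases
# among ~36,000 intraoperative autotransfusion cases, as cited above.
from statsmodels.stats.proportion import proportion_confint

events, n = 18, 36_000
rate = events / n
low, high = proportion_confint(events, n, alpha=0.05, method="beta")
print(f"incidence = {rate:.4%}, 95% CI = ({low:.4%}, {high:.4%})")
```

Even the upper bound of such an interval stays well under a tenth of a percent, which is the quantitative sense in which the syndrome looks rare.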
Instruction: Do age and comorbidity affect quality of life or PTA-induced quality-of-life improvements in patients with symptomatic PAD? Abstracts: abstract_id: PUBMED:15943516 Do age and comorbidity affect quality of life or PTA-induced quality-of-life improvements in patients with symptomatic PAD? Purpose: To elucidate whether age or comorbidity influences (1) quality of life (QoL) or (2) the gain in QoL due to percutaneous angioplasty procedures in symptomatic peripheral arterial disease (PAD) patients. Methods: One hundred two consecutive patients (81 men; mean age 66.3+/-9.1 years, range 44-83) with symptomatic PAD (83.2% with intermittent claudication, 16.8% with chronic critical limb ischemia) were tested cross-sectionally for their self-reported QoL and degree of leg pain during walking or rest according to standardized scales. A subgroup of 48 patients with a comparable clinical profile was prospectively investigated before and shortly after angioplasty. Quantitative indices of comorbidity burden and hemodynamic parameters were also obtained. Results: QoL impairment significantly increased with a greater comorbidity burden and greater degree of leg pain during exercise or at rest, but decreased with advancing age (R = 0.91, p < 0.0001, stepwise regression). The angioplasty-associated gain in QoL, however, was positively correlated with the initial degree of QoL impairment and the degree of improvement in intermittent claudication (R = 0.873, p < 0.0001, multiple regression). By contrast, neither age nor comorbidity burden adversely affected angioplasty-induced QoL improvements. Conclusions: PAD patients with higher levels of leg pain or a greater comorbidity burden generally have more pronounced QoL impairment. They can gain considerable QoL benefits from percutaneous procedures, especially if they have greater QoL impairment initially, and experience substantial improvements in exertional leg pain after treatment. In general, neither age nor comorbidity should negatively influence the decision for percutaneous therapy in symptomatic PAD patients. abstract_id: PUBMED:36856375 Quality of life in caregivers of melanoma patients. Background: The impact of melanoma on quality of life (QoL) is not limited to the patient but may also affect caregivers. Objectives: To investigate the impact of melanoma on caregivers' QoL. Materials & Methods: Caregivers of melanoma patients were recruited at the melanoma unit of our hospital. The impact on caregivers' QoL was measured using the Family Dermatology Life Quality Index (FDLQI). Results: Data were collected for 120 caregivers, of whom 51.7% were men and the mean age was 56.9 years. Breslow thickness of melanoma was <0.8 mm in 70.8% of cases. Mean FDLQI score was 5.7 (SD: 2.4). Among the single items of the FDLQI, the highest mean score corresponded to emotional distress. The impact on QoL was greater when the caregiver was a son/daughter, and increased relative to the age of the patient and number of years since diagnosis. Conclusion: To our knowledge, this is the first study to quantitatively evaluate the impact of melanoma on caregivers. Such impact was not negligible and mostly concerned emotional aspects. Caregivers need to be supported by structured educational and psychological interventions.
Understanding the impact of peripheral artery disease (PAD) requires broad evaluation of how functional limitations of PAD affect patients' perceptions of health-related quality of life (HRQL). The objective of this study was to describe the development, testing, and psychometric properties of the PAD Quality of Life Questionnaire (PADQOL). The PADQOL was developed in three steps: (1) interviews of symptomatic PAD patients provided content of the initial questionnaire; (2) co-administration with the SF-36 (a 36-item short-form health survey), Walking Impairment Questionnaire, and Profile of Mood States examined construct validity; and (3) a three-phased factor analysis identified factors and shortened the questionnaire. Data analyses from 297 symptomatic PAD patients resulted in a 38-item questionnaire of five factors: Social relationships and interactions, Self-concept and feelings, Symptoms and limitations in physical functioning, Fear and uncertainty, and Positive adaptation (α = 0.73-0.92), plus items related to sexual function, intimate relationships and employment. Between-instrument correlations established construct validity. In conclusion, PADQOL is a validated measure to assess the disease-specific physical, psychosocial and emotional effects of PAD for research and practice. abstract_id: PUBMED:14612857 The role of comorbidity burden for patients with symptomatic peripheral arterial disease (PAD). A quantitative approach. Aim: Comorbidity will play an increasingly important role in PAD management. Therefore, we aimed 1) to address the comorbidity load of PAD patients quantitatively and comprehensively, and 2) to delineate the effects of composite comorbidity measures on general and disease-specific aspects of quality of life and on crude markers of resource allocation. Methods: One hundred and one consecutive symptomatic PAD patients (80 males, 21 females; mean age: 66.4 +/- 9.1 years) were assessed by means of a cumulative illness rating scale (CIRS: 14 items; individual item rating from 1 for no impairment through 5 for life-threatening impairment). Outcome measures were the illness severity score (CIRS-IS, mean of all single item scores) and the comorbidity index (CIRS-CI, number of single items with a score of 3, 4 or 5). These comorbidity indices of the PAD patients were compared with those of both 89 elderly community-dwelling volunteers (44 males, 45 females, mean age 77.4 +/- 5.3 years) we had investigated previously and 439 geriatric residents (121 males, 318 females; mean age 84.1 +/- 5.7 years) reported by Parmelee et al. in 1995. Furthermore, the PAD patients estimated their quality of life (LQ), subjective health status (HS), and leg pains during either rest or walking on rating scales (ranging from 0 for the best to 10 for the worst result). Results: PAD patients showed very high comorbidity indices and illness severity scores (CIRS-CI: 3.98 +/- 1.60; CIRS-IS: 1.86 +/- 0.29). These proved to be greater than those of both fit community-dwelling elderly subjects (CIRS-CI: 1.82 +/- 1.46; CIRS-IS: 1.62 +/- 0.22) and older institution residents (CIRS-CI: 2.17 +/- 1.85; CIRS-IS: 1.64 +/- 0.34). Furthermore, both CIRS-CI and CIRS-IS were significantly greater in patients with more advanced PAD stage. These indices of comorbidity load were also associated with quality-of-life impairments (r=0.253, p=0.0186), degree of ischemic rest pain (r=0.251, p=0.0196), extent of medication usage (r=0.511, p<0.0001) as well as duration of in-hospital stay (r=0.271, p=0.0084).
Conclusion: PAD patients have a markedly increased comorbidity burden. This was greater in the more advanced PAD stages and significantly associated with quality-of-life reductions and the extent of resource allocation. A quantitative approach to the comorbidity load in PAD patients appears to be possible by means of such a scale and useful for both clinical and socioeconomic purposes. abstract_id: PUBMED:29933307 Impact of Comorbidity on Symptoms and Quality of Life Among Patients Being Treated for Breast Cancer. Background: Cancer patients often have other noncancer medical conditions. Presence of comorbidities negatively affects cancer survival. Objective: The aim of this study was to investigate comorbidity, risk factors for comorbidity, and how comorbidity was associated with symptoms and quality of life in patients being treated for breast cancer. Methods: One hundred and one breast cancer chemotherapy outpatients completed this study. Comorbid conditions, weight, height, and smoking status were identified by chart review. Symptoms and quality of life were self-reported using psychometrically sound instruments. Log-linear regression analyses with age as the covariate examined the impact of ethnicity, body mass index (BMI), and smoking on comorbidities. Results: Approximately 84% of the participants had 1 or more comorbid conditions. Adjusting for age, the number of comorbidities differed by BMI (P = .000); the obese group had significantly more comorbidities than the normal and overweight groups. The interaction between BMI and smoking was significant (P = .047). The obese participants who smoked had significantly more comorbidities compared with those who were obese but did not smoke (P = .001). More comorbid conditions were associated with greater pain (P < .05) and poorer sleep quality (P < .05). Comorbidity significantly correlated with symptoms and functional aspects of quality of life (P < .01 and P < .05, respectively). A greater number of comorbidities was associated with lower physical and role functioning and worse fatigue, dyspnea, appetite loss, and nausea and vomiting (all P < .05). Conclusions: Comorbidity exerts negative impacts on symptoms and quality of life. Weight and smoking status are strong determinants of breast cancer comorbidity. Implications For Practice: Personalized care planning, weight management, and smoking cessation may lead to better cancer outcomes. abstract_id: PUBMED:35465291 Psychiatric comorbidity and quality of life in patients with epilepsy on anti-epileptic monotherapy and polytherapy. Objective: The bio-psycho-social factors affecting the quality of life in patients with epilepsy can be numerous but are often overlooked. The behavioral side effects of anti-seizure medications can be one such potential factor. The aim of the study is to address the effect of the number of anti-seizure medications on the development of psychiatric comorbidity and quality of life in patients with adequate seizure control. Materials And Methods: The study recruited 100 participants with generalized tonic-clonic seizures from a tertiary care center in North India who had been seizure-free for the last 1 month. The study participants were divided into two groups based on whether they were on monotherapy or polytherapy. The two groups were matched for their socio-demographic and clinical profile. We assessed for psychiatric comorbidity in each group using the Mini International Neuropsychiatric Interview.
All the study participants were given the Hindi-translated version of the Quality of Life in Epilepsy-31 (QOLIE-31) questionnaire for objective assessment of quality of life. Results: The patients receiving anti-epileptic polytherapy had a significantly higher prevalence of psychiatric comorbidity than patients on monotherapy. Furthermore, the patients on polytherapy scored significantly lower on the cognitive domain as well as the overall quality-of-life domain of the QOLIE-31 questionnaire. Conclusion: Patients with epilepsy must be evaluated for psychiatric comorbidity and the side effect profile of anti-seizure medications to improve their quality of life. This is particularly important for patients who are on anti-epileptic polytherapy, even if seizure control is adequate. abstract_id: PUBMED:17479526 Anxiety comorbidity and quality of life in bipolar disorder patients. Objective: To assess the impact of anxiety comorbidity on the quality of life of patients with bipolar disorder (BD). Methods: We undertook a cross-sectional survey of 162 BD outpatients interviewed with the Structured Clinical Interview for DSM-IV. The primary outcome measure was quality of life, assessed with the 26-item WHO Quality of Life Instrument (WHOQOL-BREF). Results: Anxiety comorbidity in BD patients was associated with lower scores in all domains of quality of life. The impact of anxiety comorbidity on the psychological domain of the WHOQOL-BREF persisted even when the current level of depression was added to the model as a confounding factor. Current anxiety comorbidity was also associated with lifetime alcohol abuse and dependence, rapid cycling, lifetime psychosis, number of suicide attempts, and a lower score in the Global Assessment of Functioning measure. Conclusion: Our findings suggest that anxiety comorbidity in BD patients is related to lower quality of life, particularly on the psychological domain. BD-anxiety comorbidity may be associated with such markers of illness severity as number of suicide attempts, rapid cycling, lifetime alcohol abuse, and psychosis. The recognition and treatment of anxiety comorbidity may help patients with BD to relieve their psychological pain and improve their overall quality of life. abstract_id: PUBMED:28360807 Symptomatic Remission Determines Functional Improvement and Quality of Life in Schizophrenia. Introduction: Schizophrenia is a chronic illness that negatively affects quality of life and psychosocial functions. Defined criteria to assess remission in schizophrenia are considered to be useful in the long-term follow-up of patients and in discriminating diagnostic factors. This study investigated quality of life and functionality in schizophrenia patients in symptomatic remission (R-Sch) and not in remission (Non-R-Sch). Methods: Sociodemographic data were collected for 40 R-Sch and 40 Non-R-Sch patients, and the following scales were administered: the Clinical Global Impression-Severity (CGI-S) Scale; Positive and Negative Syndrome Scale (PANSS), World Health Organization Quality of Life Questionnaire-Short Form, Turkish Version (WHOQOL-BREF-TR), Quality of Life Scale for Schizophrenia Patients (QLS), and Global Assessment of Functioning Scale (GAF). Results: The total and all subscale scores of PANSS and the CGI-S score were significantly lower in the R-Sch group than in the Non-R-Sch group, whereas the GAF scores and all subscales of QLS and WHOQOL-BREF-TR were significantly higher.
Conclusion: This study demonstrates that improvement in symptoms in schizophrenia patients improves quality and functionality in all areas of life, suggesting that an improvement in symptoms is the most important determinant of functional recovery in the treatment of schizophrenia. abstract_id: PUBMED:36621792 Influence of Pain-depression Comorbidity on Quality of Life in Patients with Neuromyelitis Optica Spectrum Disorder. Objective: To investigate the incidence, treatment, therapeutic effect, and impact on quality of life of pain-depression comorbidity in patients with neuromyelitis optica spectrum disorder (NMOSD). Methods: The NMOSD patients comorbid with pain visiting the out-patient clinic of Peking Union Medical College Hospital were interviewed in a face-to-face manner by neurologists for evaluating the neurologic disease. Pain, depression, and quality of life were evaluated by anesthesiologists based on the Brief Pain Inventory, the Beck Depression Inventory-II, and the MOS 36-item Short Form Health Survey scales, respectively. Results: In total, 81 patients were included in this study, among which 38 (46.9%) patients comorbid with depression were enrolled in the comorbidity group. Pain severity showed no significant difference between the two groups (3.3±3.1 vs. 2.3±2.2, P=0.087). Daily life activities and health-related quality of life were more severely influenced in the patients with comorbidity than in the patients without comorbidity, as illustrated by the Brief Pain Inventory [general activity and normal work (P=0.001); mood, relationship with other people, and enjoyment of life (all P<0.001); walking ability (P=0.004); sleep (P=0.016)] and the MOS 36-item Short Form Health Survey scales [physical functioning (P=0.001); role-physical, bodily pain, general health, vitality, social functioning, role-emotional, mental health (all P<0.001)]. Only 30.9% of pain sufferers received analgesic medications, with a mean pain reduction percentage of (53.6±28.9)%. None of the patients in this study took antidepressants. Conclusions: Pain-depression comorbidity was common in NMOSD patients and severely influenced the quality of life of the patients. However, only a few patients received analgesic or antidepressant medications. Pain-depression comorbidity in NMOSD patients should be scrutinized. abstract_id: PUBMED:30160172 Sleep disturbance, negative affect and health-related quality of life in patients with maintenance hemodialysis. The study was designed to examine the relationships between sleep disturbance, negative affect and quality of life among patients with maintenance hemodialysis. A total of 118 patients in Shanghai, China, on maintenance hemodialysis (>40 years old) participated in the study. They completed three questionnaires: the Positive and Negative Affect Scale (PANAS), the Pittsburgh Sleep Quality Index (PSQI) and the EuroQol 5-Dimension questionnaire (EQ-5D-5L). Results showed that male participants' negative affect scored significantly higher than female participants'. The mean global PSQI score was 8.64 (poor sleepers score 6 or more); sleep disturbance affected 63.6% of the patients, which indicated that in general the participants' sleep quality was relatively poor. The patients with worse sleep quality and negative affect were more likely to have a lower quality of life.
In addition, the effect of sleep disturbance on quality of life was mediated by negative affect (the Z value of the Sobel test was -3.18, p < .01). These findings suggest sleep disturbance and negative affect were common among patients with maintenance hemodialysis and had a negative impact on their quality of life; negative affect may mediate the relationship between sleep disturbance and quality of life. Answer: Age and comorbidity do affect the quality of life (QoL) in patients with symptomatic peripheral arterial disease (PAD), but they do not adversely affect the improvements in QoL due to percutaneous transluminal angioplasty (PTA). Specifically, a study found that QoL impairment in PAD patients significantly increased with a greater comorbidity burden and greater degree of leg pain during exercise or at rest. However, the QoL impairment decreased with advancing age. The angioplasty-associated gain in QoL was positively correlated with the initial degree of QoL impairment and the degree of improvement in intermittent claudication. Importantly, neither age nor comorbidity burden adversely affected angioplasty-induced QoL improvements (PUBMED:15943516). This suggests that while patients with higher levels of leg pain or a greater comorbidity burden generally have more pronounced QoL impairment, they can still gain considerable QoL benefits from percutaneous procedures, especially if they have greater QoL impairment initially and experience substantial improvements in exertional leg pain after treatment. Therefore, age and comorbidity should not negatively influence the decision for percutaneous therapy in symptomatic PAD patients.
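Methodological aside: the PAD findings above come from multivariable regression of QoL impairment on comorbidity, pain and age. The sketch below shows, on invented toy data with hypothetical column names, how such a model is typically fit in Python with statsmodels; it is an illustration of the technique, not a re-analysis of the published data.

```python
# Toy multivariable linear model in the spirit of the PAD QoL analysis:
# QoL impairment regressed on comorbidity burden, leg pain and age.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "qol_impairment": [6.1, 4.2, 7.8, 3.0, 5.5, 8.2, 2.9, 6.7],
    "comorbidity":    [4, 2, 6, 1, 3, 7, 1, 5],    # e.g. a CIRS-style index
    "leg_pain":       [7, 4, 8, 2, 6, 9, 3, 7],    # 0-10 rating scale
    "age":            [61, 72, 58, 79, 66, 55, 74, 63],
})

model = smf.ols("qol_impairment ~ comorbidity + leg_pain + age", data=df).fit()
print(model.summary())  # per the study, comorbidity and pain carry positive
                        # coefficients while age carries a negative one
```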
Instruction: Prostate cancer pathology audits: is central pathology review still warranted? Abstracts: abstract_id: PUBMED:22704309 Prostate cancer pathology audits: is central pathology review still warranted? Introduction: Estimating the risk of extraprostatic extension and the probability of recurrence with different treatment modalities is common practice in cancer management. A strong predictor of recurrence and organ-confined disease is tumor grade. However, differences exist between genitourinary and non-specialist pathologists in grading prostate cancer. As such, the primary objective of this study was to assess the accuracy of non-specialist prostate cancer biopsies at our institution by analyzing the proportion of cases changing pathologic risk category upon expert review. Materials And Methods: Log books from 2003, in which our genitourinary pathologists reviewed prostate needle-core biopsies, were used to identify cases. A retrospective chart review was completed and descriptive statistics were used to summarize the results for the following synoptic variables: primary (1°) and secondary (2°) Gleason grade, number of biopsy sites, overall % involvement, perineural invasion--PNI (present/absent), extracapsular extension--ECE (present/absent). Results: A total of 151 patients were reviewed. Twenty-eight percent of cases (42/151) had a change in risk category after expert review. Of the 98 low risk cases, 33% were upgraded in risk category. Of the 24 intermediate risk cases, 12% were upgraded to high risk and none were downgraded. Of the 29 high risk cases, 24% were downgraded in risk category. Conclusion: All referred patients should continue to have their pathology centrally reviewed. This practice will help facilitate optimal prostate cancer management and improve quality of care. While these findings are dated given pathologic practice change, such changes do not necessarily equate with disparity elimination or reduction; conclusions can only be drawn with a more recent audit to see if such disparities still exist. abstract_id: PUBMED:35720846 A Review on Canine and Feline Prostate Pathology. Prostatic diseases are very common in male dogs, accounting for 3-10% of cases submitted to veterinary practitioners. Commonly reported canine prostatic disorders include prostatic hyperplasia, prostatitis, prostatic cysts and prostatic carcinoma. However, clinical signs may be non-specific, or many cases are asymptomatic, thus leading to a difficult estimation of the actual prevalence of clinical cases. On the other hand, because of the rare occurrence of prostate disease in cats, very little is known about pathogenesis, diagnostic approaches and treatment. The goal of this review is to provide a detailed clinical and pathological overview of feline and canine prostatic pathology, including the most up-to-date classification systems and histological findings. Emphasis is placed on gross, cytological and histological features that are critical to reach a definitive diagnosis for a proper treatment and prognosis.
Statistical analysis was performed pre- and post-January 2002. The clinical impact of pathology review was evaluated. Results: Gleason Score (GS) change (GSΔ) occurred in 25.2% (334) of cases; GS increased in 21.6%, decreased in 2.4%, and malignancy was newly diagnosed in 1.2% of cases. Concordance in attributed GS improved over the study period, with GSΔ of 31.9% pre-2002 versus 20.6% post-2002 (p<0.001), and a reduction in the average GSΔ (p<0.001). The clinical impact was substantial, with management changing in 14.8% of cases. Conclusion: Concordance between the original and reviewing pathologists' GS has improved during the study period. Nevertheless, discordance persists in one of five cases. Pathology review remains essential if treatment decisions hinge on GS. abstract_id: PUBMED:23522911 Phase 3 study of adjuvant radiotherapy versus wait and see in pT3 prostate cancer: impact of pathology review on analysis. Background: In a randomised trial, radical prostatectomy (RP) followed by adjuvant radiotherapy (aRT) was compared with RP alone in patients with pT3 pN0 prostate cancer with or without positive margin at local pathology (German Cancer Society trial numbers ARO 96-02/AUO AP 09/95). Objective: A pathology review was performed on 85% of RP specimens of patients to investigate the influence of pathology review on the analysis. Design, Setting, And Participants: Patients post-RP (n=385) were randomised before achieving an undetectable prostate-specific antigen (PSA) level to either wait and see (n=192) or 60 Gy aRT (n=193). Of 307 patients with undetectable PSA after RP, 262 had pathology review. These results were included prospectively into the analysis. Outcome Measurements And Statistical Analysis: Agreement between local and review pathology was measured by the total percentage of agreement and by simple kappa statistics. The prognostic reliability of the different parameters was analysed by a Cox regression model. Event-free rates were determined by Kaplan-Meier analysis with a median follow-up of 40 mo for the wait-and-see arm and 38.5 mo for the aRT arm. Results And Limitations: There was fair concordance between pathology review and local pathologists for seminal vesicle invasion (pT3c: 91%; κ=0.76), surgical margin status (84%; κ=0.65), and for extraprostatic extension (pT3a/b: 75%; κ=0.74). Agreement was much less for Gleason score (47%; κ=0.42), whereby the review pathology resulted in a shift to Gleason score 7. In contrast to the analysis of progression-free survival with local pathology, the multivariate analysis including review pathology revealed positive surgical margins (PSMs) and Gleason score >6 as significant prognostic factors. Conclusions: Phase 3 studies of postoperative treatment of prostate cancer should be accomplished in the future with a pathology review. In daily practice, a second opinion by a pathologist experienced in urogenital pathology would be desirable, in particular, for high-risk patients after RP.
Methods: All external biopsy specimens were reviewed at our pathology department. Data were retrospectively collected from scanned charts. Results: The median patient age was 63 years (range: 46-74 years). All patients had a Karnofsky performance score of 90% to 100%. The median prostate-specific antigen level was 23.6 ng/dL (range: 1.04-13.6 ng/dL). Among the 23 reviews, the Gleason score changed for 8 (35%) patients: 7 upgraded and 1 downgraded. The new Gleason score affected the treatment decision in 5 of 8 cases (62.5%). Conclusions: This study demonstrates the need for pathology review in patients with localized prostate cancer before treatment because the Gleason score can change in more than one-third of patients and can affect the treatment decision in almost two-thirds of recategorized patients. abstract_id: PUBMED:16941153 Impact of pathology review of stage and margin status of radical prostatectomy specimens (EORTC trial 22911). Pathological staging and surgical margin status of radical prostatectomy specimens are, next to grading, the most important prognosticators for recurrence. A central review of pathological stage and surgical margin status was performed on a series of 552 radical prostatectomy specimens from patients participating in the European Organisation for Research and Treatment of Cancer trial 22911. Inclusion criteria of the trial were pathological stage pT3 and/or positive surgical margin at local pathology. All specimens were totally embedded. Data of the central review were compared with those of local pathologists and related to clinical follow-up. Although a high concordance between review pathology and local pathologists existed for seminal vesicle invasion (94%, kappa=0.83), agreement was much less for extraprostatic extension (57.5%, kappa=0.33) and for surgical margin status (69.4%, kappa=0.45). Review pathology of surgical margin status was a stronger predictor of biochemical progression-free survival in univariate analysis [hazard ratio (HR)=2.16 and p=0.0002] than local pathology (HR=1.08 and p>0.1). The review pathology demonstrated a significant difference between those with and without extraprostatic extension (HR=1.83 and p=0.0017), while local pathology failed to do so (HR=1.05 and p>0.8). The observations suggest that review of pathological stage and surgical margin of radical prostatectomy strongly improves their prognostic impact in multi-institutional studies or trials.
This special section includes 4 articles as the proceedings of the Fifth Princeton Integrated Pathology Symposium (PIPS): Genitourinary Pathology, and an update on neuroendocrine tumor of the prostate. The symposium took place at the Princeton Medical Center, Plainsboro, New Jersey, on Sunday April 15, 2018. We hope again that this collection of outstanding reviews will serve as a handy reference for your daily practice. abstract_id: PUBMED:37919231 Engineering the future of 3D pathology. In recent years, technological advances in tissue preparation, high-throughput volumetric microscopy, and computational infrastructure have enabled rapid developments in nondestructive 3D pathology, in which high-resolution histologic datasets are obtained from thick tissue specimens, such as whole biopsies, without the need for physical sectioning onto glass slides. While 3D pathology generates massive datasets that are attractive for automated computational analysis, there is also a desire to use 3D pathology to improve the visual assessment of tissue histology. In this perspective, we discuss and provide examples of potential advantages of 3D pathology for the visual assessment of clinical specimens and the challenges of dealing with large 3D datasets (of individual or multiple specimens) that pathologists have not been trained to interpret. We discuss the need for artificial intelligence triaging algorithms and explainable analysis methods to assist pathologists or other domain experts in the interpretation of these novel, often complex, large datasets. abstract_id: PUBMED:32128929 Artificial intelligence as the next step towards precision pathology. Pathology is the cornerstone of cancer care. The need for accuracy in histopathologic diagnosis of cancer is increasing as personalized cancer therapy requires accurate biomarker assessment. The appearance of digital image analysis holds promise to improve both the volume and precision of histomorphological evaluation. Recently, machine learning, and particularly deep learning, has enabled rapid advances in computational pathology. The integration of machine learning into routine care will be a milestone for the healthcare sector in the next decade, and histopathology is right at the centre of this revolution. Examples of potential high-value machine learning applications include both model-based assessment of routine diagnostic features in pathology, and the ability to extract and identify novel features that provide insights into a disease. Recent groundbreaking results have demonstrated that applications of machine learning methods in pathology significantly improves metastases detection in lymph nodes, Ki67 scoring in breast cancer, Gleason grading in prostate cancer and tumour-infiltrating lymphocyte (TIL) scoring in melanoma. Furthermore, deep learning models have also been demonstrated to be able to predict status of some molecular markers in lung, prostate, gastric and colorectal cancer based on standard HE slides. Moreover, prognostic (survival outcomes) deep neural network models based on digitized HE slides have been demonstrated in several diseases, including lung cancer, melanoma and glioma. In this review, we aim to present and summarize the latest developments in digital image analysis and in the application of artificial intelligence in diagnostic pathology. Answer: Based on the provided abstracts, central pathology review for prostate cancer pathology is still warranted. 
Several studies have highlighted the importance and impact of central pathology review on the management and treatment decisions for prostate cancer patients. For instance, a study found that 28% of cases had a change in risk category after expert review, with a significant proportion of low-risk cases being upgraded and some high-risk cases being downgraded upon central review by genitourinary pathologists (PUBMED:22704309). This suggests that central pathology review can lead to more accurate risk stratification, which is crucial for optimal management of prostate cancer. Another study reported that the clinical impact of pathology review before prostate brachytherapy was substantial, with management changing in 14.8% of cases due to changes in Gleason Score after review (PUBMED:17434107). This indicates that central pathology review can directly influence treatment decisions. Furthermore, a pathology review performed on radical prostatectomy specimens in a randomized trial revealed significant prognostic factors that were not identified by local pathology, emphasizing the need for central pathology review in postoperative treatment of prostate cancer (PUBMED:23522911). The impact of pathology review on decision therapy in localized prostate cancer was also demonstrated, with more than one-third of patients experiencing a change in Gleason score after review, affecting treatment decisions in almost two-thirds of recategorized patients (PUBMED:29147082). Additionally, a central review of pathological stage and surgical margin status in a multi-institutional trial improved the prognostic impact of these factors compared to local pathology (PUBMED:16941153). In summary, the evidence from these studies supports the continued need for central pathology review in prostate cancer to ensure accurate diagnosis, staging, and treatment planning, which ultimately improves patient outcomes.
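Methodological aside: the trials cited in this answer quantify pathologist agreement with the kappa statistic, which corrects raw percent agreement for agreement expected by chance. The minimal Python sketch below, using scikit-learn on invented toy labels, illustrates the computation; the grades shown are placeholders, not data from the cited studies.

```python
# Cohen's kappa for local-vs-central pathology agreement on toy labels.
from sklearn.metrics import cohen_kappa_score

local_gs   = ["3+3", "3+4", "4+3", "3+3", "4+4", "3+4", "3+3", "4+3"]
central_gs = ["3+4", "3+4", "4+3", "3+4", "4+4", "4+3", "3+3", "4+3"]

kappa = cohen_kappa_score(local_gs, central_gs)
raw = sum(a == b for a, b in zip(local_gs, central_gs)) / len(local_gs)
print(f"raw agreement = {raw:.0%}, kappa = {kappa:.2f}")
```

Reporting both numbers, as the EORTC and ARO reviews above do, matters because raw agreement can look respectable even when chance-corrected agreement is weak.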
Instruction: Does knowledge about sexually transmitted infections increase the likelihood of consistent condom use? Abstracts: abstract_id: PUBMED:15539039 Does knowledge about sexually transmitted infections increase the likelihood of consistent condom use? Background: We aimed to assess the association between knowledge about sexually transmitted infections (STIs) and consistent condom use among university students controlling for selected socio-demographic factors. Method: An anonymous questionnaire survey was carried out among 720 undergraduate students (76% women) at the University of Tirana, Albania, in October-November 2002. The questionnaire included socio-demographic data, 10 multiple-choice test questions on knowledge about STIs, sexual activity and use of condoms. Two hundred seventy-nine students reported having sexual experience and provided data on condom use as well as knowledge about STIs. Binary logistic regression was used to assess the "independent" association between knowledge about STIs and condom use. Results: In multivariable models, there was an overall association of knowledge about STIs with consistent condom use (OR = 1.74, 95% CI = 1.37-2.23, P < 0.001). There was evidence for an interaction between knowledge and parental education (P = 0.04): there was a positive association among students with highly educated parents (OR = 1.48, 95% CI = 1.11-1.99, P = 0.01), and particularly so among students with low-and-middle educated parents (OR = 4.75, 95% CI = 2.03-11.08, P < 0.01). Conclusions: Knowledge about STIs is an independent predictor of consistent condom use among university students. Augmentation of knowledge per se should be a component of school-based prevention programs along with the intensification of efforts toward improving students' sexual practices. abstract_id: PUBMED:36042428 Determinants of consistent condom use among Filipino women: Results from the 2017 Philippine National Demographic and Health Survey. Background: Consistent condom use in women, defined as the self-reported usage of male condom in every sexual encounter of the respondent with her most recent partner in the last 12 months, had been perennially low in the Philippines. This is despite consistent condom use being a tested and proven public health intervention to prevent unwanted pregnancy and sexually transmitted infections. Considering the high fertility rate, teenage pregnancy rate, and the rapidly increasing incidence of HIV in the country, we identified the determinants of consistent condom use in the Philippines. Methods: We used data from the individual recode of the 2017 Philippine National Demographic and Health Survey. We used logistic regression for survey data to identify factors associated with consistent condom use. Results: Out of 25,074 respondents, only 261 (1.13%) have used condoms consistently with their most recent partner. Reach of information and education campaigns on contraceptive use via different media ranged from 62% via television to 7% via short messaging service. After adjusting for confounders, those who were able to ask their partners to use condoms during sexual intercourse have 6.18 times (adjusted odds ratio (aOR): 6.18; 95% Confidence Interval (95% CI): 2.02, 18.94) greater odds of consistent condom use than those who were unable to ask their partners to use condoms during sexual intercourse.
Meanwhile, HIV knowledge (aOR: 1.16; 95% CI: 0.98, 1.38) and hearing about contraception on television (aOR: 1.54; 95% CI: 1.00, 2.38) have weaker associations with consistent condom use. Conclusions: The low percentage of those who use condoms consistently, together with the low reach of information and education campaigns, highlights the need to implement multi-faceted and context-specific interventions to promote sexual agency and/or consistent condom use to address the burden of unwanted pregnancies and HIV in the Philippines. abstract_id: PUBMED:31477068 Factors associated with consistent condom use: a cross-sectional survey of two Nigerian universities. Background: Consistent condom use is central to the prevention of transmission of human immunodeficiency virus (HIV) and other sexually transmitted diseases (STDs), especially among young adults. This study drew from a cross-sectional survey of two Nigerian universities to determine the level of consistent condom use and to explore the determinants of condom use consistency and reasons for inconsistent condom use. Methods: We adopted a descriptive cross-sectional design, which involved the recruitment of 800 male and female students using stratified random sampling. Adjusted and unadjusted logistic regression models were used to examine the factors associated with consistent condom use among a final sample of 498 students who engaged in sex in the last year preceding the study. Results: Only 38.6% of sexually active participants (n = 498) used condoms consistently in the previous year. High condom self-efficacy score (AOR: 2.40; 95% CI: 1.58-3.64), discussion of HIV/STIs with sexual partner (AOR: 1.91; 95%CI: 1.29-2.83), knowing partner's HIV status (AOR: 1.48; 95% CI: 1.02-2.16), being students of a university located in a high HIV prevalence area (AOR: 2.86; 95% CI: 1.92-4.28) and engaging in sex with only a steady partner (AOR: 1.74; 95% CI: 1.17-2.60) were associated with higher odds of consistent condom use. Trust, unavailability of condoms, dislike of condoms and a perception that condoms reduced sexual pleasure were the main reasons for inconsistent use of condoms. Conclusion: The study found a low level of consistent condom use among study participants. Counselling young adults in Nigeria on condom self-efficacy, providing condoms on campuses and encouraging the discussion of sexually transmitted infections with sexual partners are central to improving the level of consistent condom use among Nigerian university students. abstract_id: PUBMED:26668671 Condom Use Determinants and Practices Among People Living with HIV in Kisii County, Kenya. The male condom remains the single, most efficient and available technology to reduce sexual transmission of HIV as well as sexually transmitted infections. This study sought to establish condom use determinants and practices among people living with HIV (PLHIVs) in Kisii County, Kenya. We interviewed 340 PLHIVs and 6 health workers. Although most PLHIVs had correct knowledge and approved condoms as effective for HIV prevention, consistent use and condom use at last sex were notably low, especially among PLHIVs aged 18-24, those who depended on remittances from kin as their main source of income, as well as during sex with secondary and casual partners. This study notes that knowledge on various benefits of using condoms is associated with enhanced condom use practices.
Non-disclosure of HIV status to secondary and casual partners remains a key barrier to condom use among PLHIVs. Our observations highlight the need to further promote condom use among specific PLHIV socio-demographic groups who continue to exhibit low condom use rates. abstract_id: PUBMED:36978037 Psychosocial determinants of consistent condom use among university students in Sudan: findings from a study using the Integrated Change Model. Unprotected sex is common among university students in Sudan, thus increasing risks for sexually transmitted diseases (STDs) and human immunodeficiency virus (HIV). As little is known about the psychosocial determinants of consistent condom use among this population, this study was designed to identify them. The Integrated Change Model (ICM) was applied in a cross-sectional design to identify in 218 students (aged 18-25 years) from Khartoum which items distinguish condom users from non-condom users. Condom users differed significantly from non-condom users in having more HIV and condom use-related knowledge, higher perception of susceptibility to HIV, reporting more exposure to condom use cues, having a less negative attitude towards condom use (attitude cons), experiencing social support and norms favouring condom use and having higher condom use self-efficacy. Binary logistic regression showed that peer norms favouring condom use in addition to HIV-related knowledge, condom use cues, negative attitude and self-efficacy were the factors uniquely associated with consistent condom use among university students in Sudan. Interventions seeking to promote consistent condom use among sexually active students could benefit from increasing knowledge about HIV transmission and prevention, raising HIV-risk perception, using condom use cues, addressing perceived condom disadvantages and enhancing students' self-efficacy to avoid unprotected sex. Moreover, such interventions should raise students' perceptions of their peers' beliefs and behaviours favouring condom use and seek health care professionals' and religious scholars' support for condom use. abstract_id: PUBMED:36386556 Consistent Condom Use and Associated Factors among sexually active Military Personnel in Eastern Ethiopia: Cross-Sectional Study Design. Background: A condom is a latex-based device used to prevent pregnancy and sexually transmitted infections (STIs). Despite condom availability and promotion for use in STI prevention, consistent condom use remains too low in Ethiopia. A significant proportion of Ethiopian military personnel had multiple sexual partners, with lower rates of condom use with non-regular sexual partners. As a result, determining the pattern of condom use and the factors associated with it among military personnel is critical. Objective: The purpose of this study was to evaluate consistent condom use and associated factors among sexually active military personnel in Eastern Ethiopia from February 1 to 28, 2020. Methods: A cross-sectional study was conducted among 327 sexually active military personnel. To select the study participants, a systematic sampling technique was used. The data were entered into EpiData and exported to the Statistical Package for Social Sciences version 20 for analysis. To identify factors associated with condom use, bivariate and multivariable logistic regression analyses were used, and the strength of the association was measured using odds ratio and 95% confidence interval (CI), with P-values less than 0.05 considered statistically significant.
Results: The overall prevalence of consistent condom use over one year by the respondents was 59.9% (95% CI: 54, 65). Study participants aged 30-35 years (AOR = 3.12; 95% CI: 1.43, 11.38), aged above 35 years (AOR = 2.42; 95% CI: 1.83, 9.30), college and above in educational status (AOR = 1.52; 95% CI: 1.20, 5.43), Officer in military rank (AOR = 3.12; 95% CI: 1.14, 10.15) and other military ranks (AOR = 3.08; 95% CI: 2.33, 8.52) were factors that facilitated consistent condom use. Conclusion: In this study, two in every five military personnel use condoms inconsistently. Designing appropriate intervention and behavioral change strategies, as well as increasing the accessibility and availability of condoms around military camps, will undoubtedly have a significant impact on consistent condom utilization. abstract_id: PUBMED:25615688 Prevalence of consistent condom use with various types of sex partners and associated factors among money boys in Changsha, China. Introduction: Money boys with inconsistent condom use (less than 100% of the time) are at high risk of infection by human immunodeficiency virus (HIV) or sexually transmitted infection (STI), but relatively little research has examined their risk behaviors. Aim: We investigated the prevalence of consistent condom use (100% of the time) and associated factors among money boys. Methods: A cross-sectional study using a structured questionnaire was conducted among money boys in Changsha, China, between July 2012 and January 2013. Main Outcome Measure: Independent variables included socio-demographic data, substance abuse history, work characteristics, and self-reported HIV and STI history. Dependent variables included consistent condom use with different types of sex partners. Results: Among the participants, 82.4% used condoms consistently with male clients, 80.2% with male sex partners, and 77.1% with female sex partners in the past 3 months. A multiple stepwise logistic regression model identified four statistically significant factors associated with lower likelihoods of consistent condom use with male clients: age group, substance abuse, lack of an "employment" arrangement, and having no HIV test within the prior 6 months. In a similar model, only one factor associated significantly with lower likelihoods of consistent condom use with male sex partners was identified in multiple stepwise logistic regression analyses: having no HIV test within the prior six months. As for female sex partners, two variables were statistically significant in the multiple stepwise logistic regression analysis: having no HIV test within the prior 6 months and having STI history. Conclusions: Interventions which are linked with more realistic and acceptable HIV prevention methods are greatly warranted and should increase risk awareness and the behavior of consistent condom use in both commercial and personal relationships. abstract_id: PUBMED:28300337 Parents' knowledge, attitudes, behaviors, and barriers to promoting condom use among their adolescent sons. Human immunodeficiency virus (HIV) is increasing in Thai youth. Consequently, a school-based National Condom Strategy program was launched in 2015 to reduce the rate of HIV and sexually transmitted infections. We conducted in-depth interviews of 31 parents/adoptive guardians of high-school age Thai boys to explore parents' knowledge, attitudes, behaviors, and barriers to promoting condom use and its role in preventing HIV/sexually transmitted infections.
A descriptive phenomenological approach and thematic analysis were used. Most participants had knowledge and positive attitudes about condom use. Half disagreed with the National Condom Strategy of placing condom vending machines in schools. More than half did not initiate teaching male youth about condom use until the youth had broached the question or only intended to do so when the youths were older. Barriers included parents' lack of condom experience and knowledge, embarrassment/discomfort when talking about condoms, and belief that youth were not sexually active. Parents are a key factor for supporting condom use in Thailand. This study supports the need for education programs for parents. abstract_id: PUBMED:26277905 Predictors of consistent condom use among Portuguese women attending family planning clinics. Women account for 30% of all AIDS cases reported to the Health Ministry in Portugal and most infections are acquired through unprotected heterosexual sex with infected partners. This study analyzed socio-demographic and psychosocial predictors of consistent condom use and the role of education as a moderator variable among Portuguese women attending family planning clinics. A cross-sectional study using interviewer-administered fully structured questionnaires was conducted among 767 sexually active women (ages 18-65). Logistic regression analyses were used to explore the association between consistent condom use and the predictor variables. Overall, 78.7% of the women were inconsistent condom users. The results showed that consistent condom use was predicted by marital status (not being married), having greater perceptions of condom negotiation self-efficacy, having preparatory safer sexual behaviors, and not using condoms only when practicing abstinence. Living with a partner and lack of risk perception significantly predicted inconsistent condom use. Less educated women were less likely to use condoms even when they perceived being at risk. The full model explained 53% of the variance in consistent condom use. This study emphasizes the need for implementing effective prevention interventions in this population, showing the importance of taking education into consideration. abstract_id: PUBMED:26137977 Sexual Activity and Condom Use among Israeli Adolescents. Introduction: In Israel, as in other industrialized countries, the age of sexual debut among adolescents has declined, and the rate of sexually transmitted infections (STI) has risen, but the motivations and attitudes of Israeli adolescents toward carrying condoms have yet to be studied. Aims: The aims of this study were to establish the associations (if any) between demographic characteristics and the knowledge held by Jewish Israeli adolescents about HIV transmission, their attitudes toward condom use and sexual experience, and to explore their recommendations to increase condom use. Methods: The method used was an analysis of sexual experience and practices, attitudes toward condom carrying and condom use among a nationally representative sample of Jewish adolescents aged 15-18. Main Outcome Measures: Two dichotomized measures were assessed: (i) sexual experience (defined as having had previous consensual oral/vaginal/anal sex); and (ii) the practice of carrying a condom on a regular basis. Results: Of all 410 participants, 14.6% carried condoms, 18.3% had sexual experience, and 70.7% of those used condoms.
Those who thought condoms to be protective against HIV, and those who thought they were difficult to wear, were more likely to have sexual experience. The perception of condom use as important, and the perception that condoms are difficult to wear, were predictors of condom carrying. The participants' knowledge of the risk of HIV in vaginal intercourse was deficient. Participants did not consider school sex education to be effective in promoting condom use, and recommended the use of graphic, deterrent personal accounts told by youths to encourage wider use of condoms. Conclusions: Health educators should consider the barriers cited by adolescents and the deterrent techniques they recommend when planning interventions to encourage condom use. Answer: Yes, knowledge about sexually transmitted infections (STIs) does increase the likelihood of consistent condom use. This is supported by several studies that have found a positive association between STI knowledge and condom use. For instance, a study among university students in Albania found that knowledge about STIs was an independent predictor of consistent condom use, particularly among students with parents of low to middle education levels (PUBMED:15539039). Similarly, a study among university students in Sudan using the Integrated Change Model found that HIV and condom use-related knowledge were significantly associated with consistent condom use (PUBMED:36978037). Additionally, a study among Nigerian university students showed that a high condom self-efficacy score and discussion of HIV/STIs with a sexual partner were associated with higher odds of consistent condom use (PUBMED:31477068). However, it is important to note that while knowledge is a crucial factor, it is not the only determinant of consistent condom use. Other factors such as the ability to negotiate condom use with partners, perceptions of condom negotiation self-efficacy, and preparatory safer sexual behaviors also play significant roles (PUBMED:36042428, PUBMED:26668671, PUBMED:25615688, PUBMED:26137977). Moreover, interventions that increase risk awareness and promote consistent condom use in both commercial and personal relationships are warranted (PUBMED:25615688). Additionally, addressing barriers such as lack of condom experience, discomfort in discussing condoms, and beliefs about sexual activity can further support consistent condom use (PUBMED:28300337). In summary, knowledge about STIs is associated with an increased likelihood of consistent condom use, but it should be complemented with other supportive measures and interventions to address various psychosocial and behavioral factors.
Instruction: Is response to prophylactic lithium a familial trait? Abstracts: abstract_id: PUBMED:12416605 Is response to prophylactic lithium a familial trait? Background: Selecting a drug according to the treatment response in a relative has been widely accepted advice in the management of mood disorders. However, this recommendation has not been adequately substantiated in the literature. We tested the hypothesis that response to long-term lithium treatment is a familial trait. Method: We compared response to long-term lithium treatment in bipolar relatives of bipolar lithium responders and bipolar controls. Twenty-four relatives with bipolar disorder (as determined using the Schedule for Affective Disorders and Schizophrenia-Lifetime version [SADS-L] and Research Diagnostic Criteria [RDC]) were identified in families of 106 patients with lithium-responsive bipolar disorder. A consecutive series of 40 lithium-treated patients in a bipolar clinic (meeting RDC and DSM-IV criteria for bipolar disorder) served as a comparison group. Lithium response was evaluated on a rating scale reflecting the quality and quantity of available data. Results: The prevalence of unequivocal response among the relatives was 67%, as compared with the response rate of 35% in the comparison group (chi2 = 6.04, df = 1, p = .014). Conclusion: This highly significant difference in response between relatives and the control group supports the view that the response to lithium prophylaxis clusters in families. abstract_id: PUBMED:18922242 Predictors of prophylactic response to lithium. Introduction: Since their first utilization in psychiatry as mood stabilizers in the 1940s, lithium salts have been widely studied in the medical literature. The considerable amount of data available to date supports the use of lithium salts as first-line mood stabilizing agents, with acute antimanic and antidepressant properties and proven efficacy in the long term prevention of manic and depressive relapses. Literature Findings: Several predictors were reported by different authors in early articles and were confirmed later on by the medical literature. All the psychopathological, environmental, biological, neurophysiologic and genetic predictors known to date are reviewed here. PSYCHOLOGICAL PREDICTORS: Psychopathological predictors of a good response to lithium prophylaxis include: the initial good response to lithium during the first 6-12 months of treatment, considered to date to be the most reliable predictor of a favourable response to lithium; the classical pattern of elated manic episodes; a positive familial history of bipolar disorders, especially those known to be responsive to lithium; the absence of comorbid personality disorders; bipolar type I disorders; melancholic features during depressive episodes; MDI pattern in the illness course and early onset of lithium treatment. In contrast, the following have been confirmed as psychopathological predictors of poor prophylactic lithium response: mixed episodes, considered to be one of the most reliable predictors of poor response to lithium since Kraepelin's description; rapid cycling bipolar disorders; comorbid alcohol and/or drug abuse; mood disorders with incongruent psychotic features; early onset bipolar disorder before the age of 18; discontinuation of lithium treatment; high number of previous affective episodes in the illness course before lithium initiation and DMI pattern.
ENVIRONMENTAL PREDICTORS: Among environmental factors, being single was found to be the only predictor of a poor response to lithium treatment in prophylaxis. BIOLOGICAL PREDICTORS: Biological predictors of a good prophylactic response to lithium include a high RBC/plasma-lithium ratio, one of the most controversial predictors of a favourable response to lithium in the literature, a higher platelet serotonin-induced calcium mobilization, and a high rate of red blood cell membrane phospholipids, especially phosphatidylcholine, a phospholipid implicated in lithium intracellular transport. Among neurophysiologic predictors of a favourable response to lithium, the following have been reported: brain lithium concentrations above 0.2 mEq/L when measured by 7Li-MRS; decreased cerebral intracellular pH and white matter hyperintensity at (31)P-MRS, and a high loudness dependence of auditory-evoked potentials (LDAEP), the latter being one of the best indicators of human cerebral serotoninergic functioning. In contrast, the following have been reported as neurophysiological predictors of a poor lithium response in prophylaxis: epileptiform anomalies with diffuse theta waves on electroencephalography, a predictor of poor response to lithium known since the descriptions of Dalen in 1965, and decreased cerebral phosphocreatine levels at (31)P-MRS, the latter being an indicator of cerebral mitochondrial dysfunction. GENETIC PREDICTORS: Genetic predictors of good response to lithium in prophylaxis include a lower inositol monophosphatase (IMPase-2) mRNA expression, IMPase-2 being a key enzyme of the calcium-intracellular-signalling pathway and the IMPase-2 gene having recently been studied as a candidate gene in bipolar disorder. A higher frequency of phospholipase C isoenzyme gamma1 (PLCG1)-5 repeat allele genes has also been associated with a good response to lithium, PLCG1 being a major enzyme of the phosphatidylinositol second messenger system. Genetic predictors of negative prophylactic lithium response include the homozygotic forms of the short allele of the serotonin transporter gene (5-HTT), the presence of the A/A subtype of the tryptophan hydroxylase (TPH) gene and a high frequency of human leukocyte antigens type A3 (HLA-A3), this genotype being associated with cellular membrane anomalies implicated in alteration of lithium intracellular transport. Discussion: The search for new predictors of lithium prophylactic response is currently facing several methodological problems: lack of representativeness of the samples of bipolar patients enrolled in research studies, poor reliability of retrospective reconstructions of the course of the bipolar disorder before initiation of lithium treatment, absence of consensus on tools used to assess response to lithium prophylaxis in study designs, difficult access and high costs of most of the laboratory and neuroimaging techniques used in recent studies such as magnetic resonance spectroscopy and LDAEP measures, and problematic evaluation of the impact of treatment on a disorder whose natural intrinsic course is often irregular. abstract_id: PUBMED:2941468 Who responds to prophylactic lithium? Response to prophylactic lithium was studied in relation to clinical and psychological characteristics in a large series of patients with recurrent affective disorders. The findings were that bipolar patients with a family history of mania or depression had more favourable responses than those with no family history of affective disorders.
Unipolar patients with more endogenous illnesses and those with pure familial depressive disease had more favourable responses than those with less endogenous illnesses and those with sporadic and depression spectrum diseases. Good responders showed generally less personality disturbance on a variety of measures than fair-to-poor responders. Response to lithium over 6 months in unipolar illness and over the first year in bipolar illness was strongly associated with long-term response. abstract_id: PUBMED:9285078 Relationship between prophylactic effect of lithium therapy and family history of affective disorders. Lithium therapy response and age of onset (AOO) were studied in 98 patients with bipolar affective disorder (BPAD) who were divided into subgroups depending on type of family history of affective disorders. The highest (33.0 years) and lowest (25.5 years) age of onset were found in nonfamilial patients and in familial patients with a first-degree relative with BPAD, respectively. Nonfamilial patients showed the best response to lithium. There were 0.9 episodes/year off lithium compared to 0.3 episodes/year on lithium (an 88% decrease). A poorer response (a 71% decrease; a reduction from 1.39 episodes per year off lithium to 0.65 on lithium) was found in familial patients with a first-degree relative with BPAD. Differences in serum lithium values between the groups could not explain the observed differences. Thus, familial patients showed a more severe manifestation of the disease with an earlier AOO and a lower prophylactic effect of lithium. abstract_id: PUBMED:3325026 Lithium in depression: a review of the antidepressant and prophylactic effects of lithium. The therapeutic effects of lithium in depression are reviewed. The acute antidepressant effect of lithium alone is neither as impressive nor as predictable as its antimanic action, nor is it equivalent to that of tricyclic antidepressants. In patients who are 'refractory' to tricyclics or monoamine oxidase inhibitors, combined treatment with lithium may augment antidepressant response. Lithium is an effective prophylactic treatment in both unipolar and bipolar disorder and in the latter is the drug of choice. Aspects of monitoring, such as range of therapeutic plasma levels, dosage regimen and adverse effects, are discussed. Current evidence suggests that, in patients who fail to respond to lithium or are unable to tolerate side-effects, carbamazepine should be considered. abstract_id: PUBMED:7333789 Poor prophylactic lithium response due to antidepressants. 50 manic-depressive patients with rapid cycles received lithium for more than 1 year; during depression they received antidepressant drugs. Response was poor in 36, partial in 6, and good in 8. 21 of the poor responders were persuaded to endure depression without antidepressants; anxiolytics were allowed. 15 stabilized after the end of the untreated depression or after a few milder, shorter episodes; 4 improved partially; 2 were unchanged. 15 other rapid cycle patients started on lithium and stopped antidepressants at the same time. Response was good in 13, partial in 1, and poor in 1. Patients with a course of depression-hypomania (or mania)-free interval also responded poorly to prophylactic lithium when the depression was treated with antidepressants. They responded well when antidepressants were withdrawn. Antidepressants often cause or accentuate a switch from depression to hypomania or mania, and temporary refractoriness to lithium of the hypomania or mania.
In this way lithium fails to prevent depression. abstract_id: PUBMED:10212559 Pharmacogenetics of lithium response in bipolar disorder. Lithium is the first-line treatment for bipolar disorder. In the past, genetic studies have attempted to identify factors associated with positive treatment response or side effects. Several research groups have shown that familial factors, family history of primary bipolar disorder, and negative family history of schizophrenia in particular, correlate well with prophylactic lithium response. Conversely, studies of lithium responsive patients and their families can assist genetic research of bipolar disorder. Lithium responders appear to suffer from a form of bipolar disorder that is more genetically based and more homogeneous. In a series of family studies, the author and his colleagues have confirmed the differences in family histories of lithium responders and nonresponders and shown that the mode of inheritance in lithium responders is compatible with a major-gene model. Subsequently, they initiated an international collaborative study to map the gene(s) predisposing to the illness or treatment response, or both, using both linkage and association strategies. To date, a sample of 32 families, 138 unrelated patients and 163 control subjects has been studied. In these studies, they found support for the role of phospholipase C in lithium responsive bipolar disorder. abstract_id: PUBMED:16176433 Which clinical factors predict response to prophylactic lithium? A systematic review for bipolar disorders. Objectives: The aim of this study was to systematically integrate the available evidence on response prediction to prophylactic lithium based on clinical factors. Methods: Each clinical variable that was related to lithium response in at least one prior study was examined with respect to response prediction. If several studies were located for the same variable, results were integrated using the meta-analytic approach as suggested by DerSimonian and Laird, which was developed for substantial heterogeneity in primary studies. Results: Of 42 potential clinical predictors investigated, five variables were identified as possible response predictors of prophylactic lithium: [1] An episodic pattern of mania-depression-interval, and [2] a high age of illness onset were identified as potentially protective against a recurrence under lithium. [3] A high number of previous hospitalizations, [4] an episodic pattern of depression-mania-interval, and [5] continuous cycling were identified as potential risk factors. Six further variables were found to be significantly related to lithium response, though calculation of fail-safe numbers indicates that current evidence is not sufficient to hold these variables as reliable predictors of lithium response. All effect sizes relating clinical predictors to response were small to moderate. Conclusions: Although a few variables are quite robustly supported as response predictors in this review, a more in-depth analysis of each potential predictor is needed. As none of the potential predictors had a very strong impact on response, prediction of lithium response should be based on a multitude of variables. abstract_id: PUBMED:29904359 Analysis of the Influence of microRNAs in Lithium Response in Bipolar Disorder. Bipolar disorder (BD) is a common, highly heritable neuropsychiatric disease characterized by recurrent episodes of mania and depression.
Lithium is the best-established long-term treatment for BD, even though individual response is highly variable. Evidence suggests that some of this variability has a genetic basis. This is supported by the largest genome-wide association study (GWAS) of lithium response to date conducted by the International Consortium on Lithium Genetics (ConLiGen). Recently, we performed the first genome-wide analysis of the involvement of miRNAs in BD and identified nine BD-associated miRNAs. However, it is unknown whether these miRNAs are also associated with lithium response in BD. In the present study, we therefore tested whether common variants at these nine candidate miRNAs contribute to the variance in lithium response in BD. Furthermore, we systematically analyzed whether any other miRNA in the genome is implicated in the response to lithium. For this purpose, we performed gene-based tests for all known miRNA coding genes in the ConLiGen GWAS dataset (n = 2,563 patients) using a set-based testing approach adapted from the versatile gene-based test for GWAS (VEGAS2). In the candidate approach, miR-499a showed a nominally significant association with lithium response, providing some evidence for involvement in both development and treatment of BD. In the genome-wide miRNA analysis, 71 miRNAs showed nominally significant associations with the dichotomous phenotype and 106 with the continuous trait for treatment response. A total of 15 miRNAs revealed nominal significance in both phenotypes with miR-633 showing the strongest association with the continuous trait (p = 9.80E-04) and miR-607 with the dichotomous phenotype (p = 5.79E-04). No association between miRNAs and treatment response to lithium in BD in either of the tested conditions withstood multiple testing correction. Given the limited power of our study, the investigation of miRNAs in larger GWAS samples of BD and lithium response is warranted. abstract_id: PUBMED:6149669 RBC-choline: changes by lithium and relation to prophylactic response. Red blood cell (RBC)- and plasma-choline levels were measured in patients on lithium (n = 96), antidepressants (n = 32) and neuroleptics (n = 51), and in 25 healthy drug-free controls. Lithium patients exhibited highly increased RBC- and slightly increased plasma-choline levels compared with controls (P less than 0.001 and P less than 0.05, respectively); the choline ratio (RBC-/plasma-choline) was elevated almost to the same extent as RBC-choline (P less than 0.001). With antidepressants RBC-choline and choline ratios were slightly reduced (P less than 0.05), whereas neuroleptics showed no effect on choline levels. 79% of lithium patients were responders (reduction in hospitalizations with lithium), 21% were non-responders (no reduction or increase in hospitalizations). Choline ratio exhibited a significant relation to prophylactic lithium response, but lithium ratio did not. The percentage of non-responders was significantly higher in patients with a choline ratio exceeding 100 than in patients with a choline ratio below this cut-off (P less than 0.01). Thus, the increase of RBC-choline and choline ratios appears to be an effect specific for lithium and might be related to the outcome of lithium prophylaxis. Answer: Yes, response to prophylactic lithium appears to be a familial trait. 
A study comparing the response to long-term lithium treatment in bipolar relatives of bipolar lithium responders and bipolar controls found that the prevalence of unequivocal response among the relatives was 67%, compared with the response rate of 35% in the comparison group. This significant difference supports the view that the response to lithium prophylaxis clusters in families (PUBMED:12416605). Additionally, several studies have reported that a positive familial history of bipolar disorders, especially those known to be responsive to lithium, is a psychopathological predictor of a good response to lithium prophylaxis (PUBMED:18922242). Furthermore, bipolar patients with a family history of mania or depression had more favorable responses to prophylactic lithium than those with no family history of affective disorders (PUBMED:2941468). Genetic studies have also shown that familial factors and family history of primary bipolar disorder correlate well with prophylactic lithium response (PUBMED:10212559). These findings collectively suggest that familial traits and genetic factors play a role in the response to prophylactic lithium treatment in individuals with bipolar disorder.
Instruction: Do elderly people at more severe activity of daily living limitation stages fall more? Abstracts: abstract_id: PUBMED:22561383 Do elderly people at more severe activity of daily living limitation stages fall more? Objective: The aim of this study was to explore how activity of daily living (ADL) stages and the perception of unmet needs for home accessibility features associate with a history of falling. Design: Participants were from a nationally representative sample from the Second Longitudinal Survey of Aging conducted in 1994. The sample included 9250 community-dwelling persons 70 yrs or older. The associations of ADL stage and perception of unmet needs for home accessibility features with a history of falling within the past year (none, once, or multiple times) were explored after accounting for sociodemographic characteristics and comorbidities using a multinomial logistic regression model. Results: The adjusted relative risk of falling more than once peaked at 4.30 (95% confidence interval, 3.29-5.61) for persons with severe limitation (ADL-III) compared with those with no limitation (ADL-0), then declined for those at complete limitation (ADL-IV). The adjusted relative risks of falling once and multiple times were 1.42 (95% confidence interval, 1.07-1.87) and 1.85 (95% confidence interval, 1.44-2.36), respectively, for those lacking home accessibility features. Conclusions: Risk of falling appeared greatest for those whose homes lacked accessibility features and peaked at intermediate ADL limitation stages, presumably at a point when people have significant disabilities but sufficient function to remain partially active. abstract_id: PUBMED:28747158 Characteristics of disability in activity of daily living in elderly people associated with locomotive disorders. Background: Ageing is associated with a decline in motor function and in the ability to perform daily activities. Locomotive disorders are one of the major disorders resulting in adverse health conditions in elderly people. The concept of locomotive syndrome (LoS) was proposed to tackle these problems and prolong the healthy life expectancy of people with locomotive disorders. To develop an intervention strategy for LoS, it is mandatory to investigate the impairments and functional disabilities that people with locomotive disorders experience and to examine relationships among these parameters. For this purpose we have developed the Geriatric Locomotive Function Scale-25 (GLFS-25). Though several physical performance tests have been reported for identifying or monitoring the severity of LoS, few studies have reported on the characteristics of disability that people with locomotive disorders experience. The aim of this study was to report the characteristics of ADL disabilities in elderly people with locomotive disorders in terms of numbers and degree of activity limitations. Methods: We organized a cohort study and recruited 314 participants aged 65 years and over from five orthopedic clinics or nursing care facilities. This was a cross-sectional study using the baseline data of that cohort. ADL disabilities were assessed using the GLFS-25 scale, with the GLFS-25 scores arranged into ordinal levels using the R language. Numbers and degrees of activity limitations were determined and compared among the levels. Frequency of limitation in activities regarding social activity, housework, locomotion, mobility and self-care was compared across the disability levels. Results: The GLFS-25 score was mathematically categorized into 7 levels.
The number of activity limitations and the degrees of each activity limitation were significantly greater in high GLFS-25 levels than in low levels. Difficulties in mobility appeared at a less severe level, difficulties in domestic and social life appeared at a moderately severe level, and difficulties in self-care appeared at an advanced level. Conclusions: A high GLFS-25 score represented a high degree of disability in ADLs. A concordant increase in the number of activity limitations and in their severity may contribute to the progression of disability. Activity limitation may occur in the following order: sports activity, walking, transferring, and self-care. abstract_id: PUBMED:28910753 Fall prevalence, time trend and its related risk factors among elderly people in China. Objectives: To study the fall prevalence, time trends and related risk factors among elderly people in the Chinese mainland from 2011 to 2013. Methods: Our data were from the China Health and Retirement Longitudinal Study in 2011 and 2013. The population sample included people aged 60 years and over. Whether the person had experienced a fall accident in the last two years was used to measure fall incidence. The time trend and age groups were investigated through the chi-square test. The related risk factors were examined based on a binary logistic regression model. Results: In 2011, 19.64% (95% CI, 18.66%, 20.67%) of elderly people experienced fall incidents and in 2013, 19.28% (95% CI, 18.46%, 20.13%) of elderly people experienced fall incidents. However, no significant difference was seen in the fall prevalence between 2011 and 2013. The fall prevalence among elderly people aged 66-70 declined significantly while that among people aged over 80 showed an increasing time trend. The fall prevalence was affected significantly by factors including age (66-70), gender, marital status, self-rated health, quantity of chronic diseases, quantity of disability items, activities of daily living and physical functioning. Conclusions: The results revealed that the fall prevalence did not increase from 2011 to 2013 but remained at a high level. More efforts should be made to reduce the fall prevalence, and special attention should be paid to elderly people aged over 80. abstract_id: PUBMED:35045809 Prevalence and determinants of activity of daily living and instrumental activity of daily living among elderly in India. Background: The increase in life expectancy has increased the number of elderly people and subsequently increased the prevalence of disability among the elderly. This study assesses the prevalence of Activity of Daily Living (ADL) and Instrumental Activity of Daily Living (IADL) disability and analyzes determinants of ADL and IADL among elderly aged 60 and over living in India. Methods: The study utilized the Longitudinal Ageing Study in India (LASI, 2017-18) data, and information was sought from 31,464 elderly aged 60 years and above. An index of ADL and IADL was created on a scale of three levels, exhibiting no, moderate, or severe levels of ADL/IADL disability. Multinomial logistic regression was used to determine the effect of socio-demographic parameters on ADL and IADL disability among the elderly. Results: Around 3% of the elderly reported severe ADL disability, and 6% of the elderly reported severe IADL disability. Elderly who were not involved in any physical activity were more likely than their counterparts to report severe ADL (RRR = 2.68, C.I. = 1.66-4.32) and severe IADL (RRR = 2.70, C.I.
= 1.98-3.67) than no ADL and no IADL, respectively. Conclusion: In light of these findings, the study emphasizes the importance of setting up geriatric care centers in rural and urban areas. It would be feasible to provide geriatric care under the umbrella of already functioning government health facilities in different parts of the country. Community interventions earmarking the elderly with a focus on physical activity, specifically based on group physical exercise and implemented through existing networks, are rewarding for the elderly. abstract_id: PUBMED:27664405 Activity Limitation Stages Are Associated With Risk of Hospitalization Among Medicare Beneficiaries. Background: Activity limitation stages based on activities of daily living (ADLs) and instrumental activities of daily living (IADLs) are associated with 3-year mortality in elderly Medicare beneficiaries, yet their associations with hospitalization risk in this population have not been studied. Objective: To examine the independent association of activity limitation stages with risk of hospitalization within a year among Medicare beneficiaries aged 65 years and older. Design: Cohort study. Setting: Community. Participants: A total of 9447 community-dwelling elderly Medicare beneficiaries from the Medicare Current Beneficiary Survey for years 2005-2009. Methods: Stages were derived for ADLs and IADLs separately. Associations of stages with time to first hospitalization and time to recurrent hospitalizations within a year were assessed with Cox proportional hazards models, with which we accounted for baseline sociodemographics, smoking status, comorbidities, and the year of survey entry. Main Outcomes: Time to first hospitalization and time to recurrent hospitalizations within 1 year. Principal Findings: The adjusted risk of first hospitalization increased with greater activity limitation stages (except stage III). The hazard ratios (95% confidence intervals) for ADL stages I-IV compared with stage 0 (no limitations) were 1.49 (1.36-1.63), 1.61 (1.44-1.80), 1.54 (1.35-1.76), and 2.06 (1.61-2.63), respectively. The pattern for IADL stages was similar. For recurrent hospitalizations, activity limitation stages were associated with the risk of the first hospitalization but not with subsequent hospitalizations. Conclusion: Activity limitation stages are associated with the risk of first hospitalization in the subsequent year among elderly Medicare beneficiaries. Stages capture clinically interpretable profiles of ADL and IADL functionality and describe preserved functions and activity limitation in an aggregated measure. Stage can inform interventions to ameliorate disability and thus reduce the risk of a subsequent hospitalization in this population. Level Of Evidence: IV. abstract_id: PUBMED:24798263 Activity Limitation Stages empirically derived for Activities of Daily Living (ADL) and Instrumental ADL in the U.S. Adult community-dwelling Medicare population. Background: Stages quantify severity like conventional measures but further specify the activities that people are still able to perform without difficulty. Objective: To develop Activity Limitation Stages for defining and monitoring groups of adult community-dwelling Medicare beneficiaries. Design: Cross-sectional. Setting: Community. Participants: There were 14,670 respondents to the 2006 Medicare Current Beneficiary Survey.
Methods: Stages were empirically derived for the Activities of Daily Living (ADLs) and the Instrumental Activities of Daily Living (IADLs) by profiling the distribution of performance difficulties as reported by beneficiaries or their proxies. Stage prevalence estimates were determined, and associations with demographic and health variables were examined for all community-dwelling Medicare beneficiaries. Main Outcome Measurements: ADL and IADL stage prevalence. Results: Stages (0-IV) define 5 groups across the separate ADL and IADL domains according to hierarchically organized profiles of retained abilities and difficulties. For example, at ADL-I, people are guaranteed to be able to eat, toilet, dress, and bathe/shower without difficulty, whereas they experience limitations getting in and out of bed or chairs and/or difficulties walking. In 2006, an estimated 6.0, 2.9, 2.2, and 0.5 million beneficiaries had mild (ADL-I), moderate (ADL-II), severe (ADL-III), and complete (ADL-IV) difficulties, respectively, with estimates for IADL stages even higher. ADL and IADL stages showed expected associations with age and health-related concepts, supporting construct validity. Stages showed the strongest associations with conditions that impair cognition. Conclusions: Stages as aggregate measures reveal the ADLs and IADLs that people are still able to do without difficulty, along with those activities in which they report having difficulty, consequently emphasizing how groups of people with difficulties can still participate in their own lives. Over the coming decades, stages applied to populations served by vertically integrated clinical practices could facilitate large-scale planning, with the goal of maximizing personal autonomy among groups of community-dwelling people with disabilities. abstract_id: PUBMED:31204521 Activities of daily living among elderly persons with severe aortic stenosis. Purpose: To determine the level of limitations in activities of daily living (ADL) and instrumental ADL, and identify factors related to instrumental ADL among elderly persons with severe aortic stenosis (AS). Materials And Methods: We prospectively enrolled 112 consecutive elderly persons (mean age, 84.6 ± 4.4 y) with severe AS. We assessed ADL and instrumental ADL using the Barthel index (BI) and the Frenchay activities index (FAI), respectively. Cardiac, physical, cognitive and kidney functions, nutritional status, medical histories, and comorbidities were assessed. A multiple logistic analysis was developed to explore factors related to the FAI. Results: The medians (ranges) of BI and FAI were 100 (15-100) and 21 (0-38), respectively. The multiple logistic analysis revealed that a history of heart failure (sβ = -0.189), a history of cerebrovascular disease (sβ = -0.233), the short physical performance battery score ≤ 8 (sβ = -0.272), and the mini-mental state examination score ≤ 23 (sβ = -0.168) were significantly related to the FAI (p < 0.05). Conclusions: Physical frailty, a history of heart failure, cerebrovascular disease, and cognitive decline were independently related to a decline in instrumental ADL. IMPLICATIONS FOR REHABILITATION Activities of daily living were well preserved among elderly persons with severe aortic stenosis. Instrumental activities of daily living varied among patients with aortic stenosis and were sometimes impaired.
Physical frailty, a history of heart failure, and a history of cerebrovascular disease were independently associated with instrumental activities of daily living in elderly persons with severe aortic stenosis. abstract_id: PUBMED:28847053 Association between the change of daily living activities and symptoms of depression in Chinese middle-aged and elderly people. Objective: To explore the association between the change of daily living activities and the symptoms of depression in Chinese middle-aged and elderly people. Methods: People without depression symptoms, emotional or psychiatric diseases, or memory-related diseases at the baseline survey (2011) who were aged ≥45 years in the China Health and Retirement Longitudinal Study (CHARLS), with two waves from 2011 to 2013, were invited to participate in this study. Information on the demographic characteristics and health status of the subjects was collected through household interviews. Depression symptoms were assessed with the Center for Epidemiological Studies of Depression (CES-D10) scale. Activities of daily living function was assessed by both the Basic Activities of Daily Living (BADL) and Instrumental Activities of Daily Living (IADL) scales. Symptoms of depression were compared, and the differences among the BADL/IADL change groups were analyzed with the log-rank test. The association between the changes of daily living activities and depression symptoms was analyzed by a Cox regression model. Results: Results from the log-rank test showed that the differences in depression symptom hazard ratios among the BADL/IADL change groups were statistically significant. Results from the Cox regression analysis showed that BADL/IADL deterioration was associated with a higher hazard ratio of depression symptoms (P<0.01), with HR values (95%CI) for the BADL and IADL deteriorating groups of 1.45 (1.20-1.76) and 1.64 (1.36-1.98), respectively, when compared with the 'stabilized' group. This association showed an urban-rural difference: statistical significance for IADL was found in both urban and rural areas, while for BADL it was found only in rural areas. Conclusion: There was an association between changes in daily living activities and depression symptoms in Chinese middle-aged and elderly people. A difference between IADL and BADL was also noticed between urban and rural areas. abstract_id: PUBMED:37545989 Prevalence of activity limitation and its associated predictor among the elderly in Tamil Nadu, India: A community-based cross-sectional study. Background: Functional disability develops in a dynamic and progressive manner. As people age, their health problems worsen and they lose the ability to perform activities of daily living (ADLs) such as dressing, using the toilet, bathing, and eating. With a better understanding of the underlying risk factors and the related mechanisms of the disablement process, it is possible to make more effective and more efficient programs to prevent or delay the onset of disability in older people. Materials And Methods: A community-based, descriptive, cross-sectional study was carried out among 453 elderly from Tamil Nadu, India. A door-to-door interview was used to administer the Katz Index of Independence in ADL. Simple random selection was employed to select the study individuals. To determine the association for particular risk factors, the Chi-squared test and binary logistic regression were used.
Results: First, our study findings show the prevalence of activity limitation among the elderly using the Katz Index of Independence. This study shows that the prevalence of activity limitation was 23% among the elderly. Seventy-seven percent had no significant limitations: they were independent. There is a statistically significant association between activity limitation and gender, visual impairment, depression, urinary incontinence, unintentional injury in the past year, and alcohol consumption. Conclusion: The study found that the self-reported prevalence of activity limitation was 23% among the elderly and 77% had no significant limitations. The Katz Index of Independence in activities of daily living may be used as a handy tool to identify activity limitation in community-based checkups. abstract_id: PUBMED:22248806 Staging activity limitation and participation restriction in elderly community-dwelling persons according to difficulties in self-care and domestic life functioning. Objective: This study aimed to describe the conceptual foundation and development of an activity limitation and participation restriction staging system for community-dwelling people 70 yrs or older according to the severity and types of self-care (activities of daily living [ADLs]) and domestic life (instrumental ADLs [IADLs]) limitations experienced. Design: Data from the second Longitudinal Study of Aging (N = 9447) were used to develop IADL stages through the analyses of self- and proxy-reported difficulties in performing IADLs. An analysis of activity limitation profiles identified hierarchical thresholds of difficulty that defined each stage. IADL stages are combined with ADL stages to profile status for independent living. Results: IADL stages define five ordered thresholds of increasing activity limitations and a "not relevant" stage for those who normally have someone else do those activities. Approximately 42% of the population experience IADL limitations. To achieve a stage, a person must meet or exceed stage-specific thresholds of retained functioning defined for each activity. Combined ADL and IADL stages define 29 patterns of activity limitations expressing the individual's potential for participating in life situations pertinent to self-care and independent community life. Conclusions: ADL and IADL stages can serve to distinguish between groups of people according to both severity and the types of limitations experienced during home or outpatient assessments, in population surveillance, and in research. Answer: Yes, elderly people at more severe activity of daily living (ADL) limitation stages do tend to fall more.
Instruction: Is antiarrhythmia device implantation safe under dual antiplatelet therapy? Abstracts: abstract_id: PUBMED:20025706 Is antiarrhythmia device implantation safe under dual antiplatelet therapy? Background: Device implantations in patients on dual antiplatelet therapy (DA-therapy) continue to rise. The aim of our study was to compile and analyze data on complications of antiarrhythmia device implantation under DA-therapy. Methods: We prospectively collected data on all device implantations in our department from January 2008 until February 2009. The control group comprised patients on acetylsalicylic acid alone or no antiplatelet medication at all (318 subjects). The DA-therapy group consisted of 109 patients, of whom 71 were analyzed retrospectively (implantations from 2002 to 2007). Results: Procedure times were significantly longer in DA-therapy patients receiving a pacemaker for the first time. In contrast, procedure times did not differ significantly between the two study groups for implantable cardioverter defibrillator (ICD) implantations and for pacemaker replacements. Fluid losses via drainage systems and drainage times were significantly increased in the DA-therapy group as compared with the control group after pacemaker but not after ICD implantations. Importantly, there were no significant differences in complication rates, particularly the hematoma rate, between the DA-therapy and the control group. Conclusions: When drainage systems are used, antiarrhythmia device implantation is safe and can be performed without significantly increased risk of clinically relevant hematoma in patients on continued DA-therapy. (PACE 2010; 394-399). abstract_id: PUBMED:38021921 Single-Center Outcomes of WATCHMAN™ Implantation with Comparison to Oral Anticoagulant and Dual Antiplatelet Therapy. Background The WATCHMAN™ device is a Food and Drug Administration (FDA)-approved device that reduces the risk of stroke from atrial fibrillation (AF) in those who have a contraindication to taking oral anticoagulation. A key aspect of this device implantation is the choice of medical therapy in the months after device implantation, with vitamin K antagonist oral anticoagulants (OAC) being the mainstay of therapy, but dual antiplatelet therapy (DAPT) poses a potential alternative for patients who have a contraindication to OAC use. Methods Our single-center study retrospectively followed 150 patients post-WATCHMAN™ implantation and evaluated outcomes at 12 months post-implantation in two cohorts, those treated with OAC or DAPT. Our results were obtained via chart review of a single-center electronic medical records system. Results In our study, 67.33% of study patients were male, and 49.33% were on OAC compared to 36.00% who were on DAPT. Ten patients were not able to undergo device implantation. With this analysis, we found similarly low rates of complications such as stroke and device-associated thrombosis (DAT) in both groups. Our DAPT cohort did have a higher number of gastrointestinal (GI) bleeding events, but this was not significant in our analysis. Discussion Our study is consistent with larger trials that show similar outcomes between OAC and DAPT post-implantation of the WATCHMAN™ device. The increased number of GI bleeding events in our DAPT cohort could be the result of the underlying advanced age and comorbidity of that patient cohort. Conclusion Our results suggest that DAPT is a safe alternative to OAC for patients undergoing WATCHMAN™ implantation.
abstract_id: PUBMED:26347068 Dual antiplatelet therapy increases pocket hematoma complications in Chinese patients with pacemaker implantation. Objective: To assess the prevalence of bleeding complications in pacemaker-implanted patients receiving different antiplatelet regimens, and the influence of each regimen on hospital stays after device implantation. Methods: We prospectively enrolled 364 patients receiving cardiac rhythm device implantations at Fuwai Hospital from July 2012 to December 2013. Bleeding complications including pocket hematoma, hemothorax, cardiac tamponade and blood transfusion requirement were measured as endpoints. Postoperative hospital stay was also included in the endpoints. Results: Bleeding complications were detected in 15 patients (14 with hematoma, one with hemothorax) out of all 364 patients (4.12%). Dual antiplatelet therapy (DAT) significantly increased hematoma (19.3%) compared with aspirin treatment (ASA) (3.2%, P = 0.001) and no antiplatelet therapy (1.9%, P < 0.001). There was no significant difference in the incidence of pocket hematoma between the ASA group and the control group (P = 0.45). The post-procedure hospital stay was longer in the DAT group (5.45 ± 2.01 days) compared to the ASA group (3.65 ± 1.37 days, P < 0.05) or the control group (3.99 ± 2.27 days, P < 0.05). Pocket hematoma was considered an independent predictor of hospital stay prolongation (OR: 5.26; 95% CI: 1.56-16.64; P = 0.007). Conclusions: Among the Chinese patients undergoing device implantation in this study, the use of dual antiplatelet agents significantly increased the risk of pocket hematoma complications and led to a longer hospital stay. Use of aspirin alone did not increase the risk. abstract_id: PUBMED:29588706 Dual Antiplatelet Therapy After Drug-eluting Stent Implantation. Current guidelines for percutaneous coronary intervention recommend dual antiplatelet therapy (DAPT) with aspirin and a P2Y12 inhibitor after drug-eluting stent (DES) implantation. The optimal duration of DAPT is, however, an area of debate. Recent clinical trials and meta-analyses suggest that the choice of DAPT duration should be tailored individually, based on the balance between ischemic and bleeding risk carried by the patient. abstract_id: PUBMED:32804330 Dual Antiplatelet Therapy in Coronary Artery Disease. Acute coronary syndrome (ACS) is principally driven by platelet aggregation. Dual antiplatelet therapy (DAPT) has demonstrated a reduction in recurrent ischemic events. The newer antiplatelets ticagrelor and prasugrel have demonstrated superiority over clopidogrel. While prasugrel demonstrated benefit in patients scheduled for percutaneous intervention (PCI), benefits of ticagrelor were seen irrespective of the treatment strategy. Current guidelines recommend the use of DAPT for 1 year in all patients with ACS. Ticagrelor 60 mg is recommended for up to 3 years in high-risk patients. The DAPT and PRECISE-DAPT (Predicting Bleeding Complications in Patients Undergoing Stent Implantation and Subsequent Dual Antiplatelet Therapy) scores are tools that support decision-making on the duration of dual antiplatelet therapy. abstract_id: PUBMED:38213748 Shortened Duration of Dual Antiplatelet Therapy Following Percutaneous Coronary Intervention: A Contemporary Clinical Review. Percutaneous coronary intervention with stent implantation is an integral aspect of minimally interventional cardiac procedures.
The technology and techniques behind stent design and implantation have evolved rapidly over several decades. However, continued discourse remains around optimal peri- and post-interventional management with dual antiplatelet therapy to minimise both major cardiovascular or cerebrovascular events and iatrogenic bleeding risk. Standard guidelines around dual antiplatelet therapy historically recommended long-term dual antiplatelet therapy for 12 months (with consideration for >12 months in certain patients); however, emerging data and generational improvements in the safety of drug-eluting stents have ushered in a new era of short-term therapy to reduce the incidence of major bleeding events. This review will provide an overview of the current state of guidelines around duration of dual antiplatelet therapy and examine recent updates and continued gaps in existing research. abstract_id: PUBMED:37511955 Dual Antiplatelet Therapy: A Concise Review for Clinicians. Dual antiplatelet therapy (DAPT) combines two antiplatelet agents to decrease the risk of thrombotic complications associated with atherosclerotic cardiovascular diseases. Emerging data about the duration of DAPT are being published continuously. New approaches are trying to balance the time, benefits, and risks for patients taking DAPT for established cardiovascular diseases. Short-term DAPT of 3-6 months, or even 1 month in high-bleeding-risk patients, is equivalent in terms of efficacy and effectiveness compared to long-term DAPT for patients who experienced percutaneous coronary intervention in an acute coronary syndrome setting. Prolonged DAPT beyond 12 months reduces stent thrombosis, major adverse cardiovascular events, and myocardial infarction rates but increases bleeding risk. Extended DAPT does not significantly benefit stable coronary artery disease patients in reducing stroke, myocardial infarction, or cardiovascular death. Ticagrelor and aspirin reduce cardiovascular events in stable coronary artery disease with diabetes but carry a higher bleeding risk. Antiplatelet therapy duration in atrial fibrillation patients after percutaneous coronary intervention depends on individual characteristics and bleeding risk. Antiplatelet therapy is crucial for post-coronary artery bypass graft and transcatheter aortic valve implantation; aspirin (ASA) monotherapy is preferred. Antiplatelet therapy duration in peripheral artery disease depends on the scenario. Adding vorapaxar and cilostazol may benefit secondary prevention and claudication, respectively. Carotid artery disease patients with transient ischemic attack or stroke benefit from antiplatelet therapy, and combining ASA and clopidogrel is more effective than ASA alone. The optimal duration of DAPT after carotid artery stenting is uncertain. Resistance to ASA and clopidogrel poses an incremental risk of deleterious cardiovascular events and stroke. The selection and duration of antiplatelet therapy in patients with cardiovascular disease requires careful consideration of both efficacy and safety outcomes. The use of combination therapies may provide added benefits but should be weighed against the risk of bleeding. Further research and clinical trials are needed to optimize antiplatelet treatment in different patient populations and clinical scenarios.
abstract_id: PUBMED:29466171 Duration of Dual Antiplatelet Therapy in Patients With Ischemic Heart Disease After Implantation of Endovascular Stents Optimal duration of dual antiplatelet therapy (DAPT) after stent implantation is uncertain. Some patients have an extended risk of thrombotic complications, including that of very late stent thrombosis, after cessation of the recommended course of DAPT (6-12 months). On the other hand, there is a real risk of bleeding on DAPT. In this review, we present and discuss results of clinical trials of long-term DAPT and data from their meta-analyses. The review also contains consideration of some aspects of the new AHA/ACC recommendations (2016) on the duration of DAPT. abstract_id: PUBMED:34926613 Duration of Dual Antiplatelet Therapy After Implantation of Drug-Coated Balloon. The drug-coated balloon (DCB) is an emerging percutaneous coronary intervention (PCI) device with theoretical advantages and promising results. Recent clinical observations have demonstrated that DCB tends to have both good efficacy and a good safety profile in the treatment of in-stent restenosis (ISR) for both bare-metal and drug-eluting stents (DES), de novo coronary artery disease (CAD), and other situations, such as high bleeding risk, chronic total occlusion, and acute coronary syndrome (ACS). Dual antiplatelet therapy (DAPT) has become an essential medication in daily clinical practice, but the optimal duration of DAPT after the implantation of a DCB remains unknown. At the time of the first in vivo implantation of paclitaxel-DCB for the treatment of ISR in 2006, the protocol-defined DAPT duration was only 1 month. Subsequently, DAPT duration ranging from 1 to 12 months has been recommended by various trials. However, there have been no randomized controlled trials (RCTs) on the optimal duration of DAPT after DCB angioplasty. Current clinical guidelines normally recommend the duration of DAPT after DCB-only angioplasty based on data from RCTs on the optimal duration of DAPT after stenting. In this review, we summarized current clinical trials on DCB-only angioplasty for different types of CADs and their stipulated durations of DAPT, and compared their clinical results such as restenosis, target lesion revascularization (TLR) and stent thrombosis events. We hope this review can assist clinicians in making reasonable decisions about the duration of DAPT after DCB implantation. abstract_id: PUBMED:27646646 Short versus prolonged dual antiplatelet therapy duration after bare-metal stent implantation: 2-month landmark analysis from the CREDO-Kyoto registry cohort-2. One-month duration of dual antiplatelet therapy (DAPT) has widely been adopted after bare-metal stent (BMS) implantation in real clinical practice. However, it has not been adequately addressed yet whether DAPT for only 1 month could provide sufficient protection from ischemic events beyond 1 month after BMS implantation. We assessed the effects of short DAPT relative to prolonged DAPT on clinical outcomes with a landmark analysis at 2 months after BMS implantation. Among 13,058 consecutive patients enrolled in the CREDO-Kyoto registry cohort-2, this study population consisted of 4905 patients treated with BMS only in whom the information on the status of antiplatelet therapy was available at 2 months after stent implantation [single-antiplatelet therapy (SAPT) group: N = 2575 (acute myocardial infarction (AMI): N = 1257, and non-AMI: N = 1318), and DAPT group: N = 2330 (AMI: N = 1304, and non-AMI: N = 1026)].
Cumulative 3-year incidence of the primary outcome measure (a composite of cardiovascular death, myocardial infarction, stroke, definite stent thrombosis, and GUSTO moderate/severe bleeding) was not significantly different between the SAPT and DAPT groups (9.8 versus 10.6%, P = 0.34). After adjusting for confounders, the risk of SAPT relative to DAPT for the primary outcome measure remained insignificant in the entire cohort (HR 0.97, 95% CI 0.79-1.19, P = 0.77), and in both AMI and non-AMI strata without any significant interaction between clinical presentation (AMI versus non-AMI) and the effect of SAPT relative to DAPT (P interaction = 0.56). In conclusion, short DAPT (<2 months) after BMS implantation was as safe as prolonged DAPT (≥2 months) in both AMI and non-AMI patients. Answer: Antiarrhythmia device implantation under dual antiplatelet therapy (DA-therapy) has been evaluated in several studies with varying results. According to a study that prospectively collected data on all device implantations in a department from January 2008 until February 2009, antiarrhythmia device implantation is considered safe and can be performed without significantly increased risk of clinically relevant hematoma in patients on continued DA-therapy when drainage systems are used (PUBMED:20025706). This study found no significant differences in complication rates, particularly the hematoma rate, between the DA-therapy group and the control group. However, another study focusing on Chinese patients with pacemaker implantation reported that dual antiplatelet therapy (DAT) significantly increased the risk of pocket hematoma complications and led to a longer hospital stay compared to patients on aspirin alone or no antiplatelet therapy. This suggests that the use of dual antiplatelet agents in this patient population may not be as safe, indicating a higher risk of bleeding complications (PUBMED:26347068). The safety of antiarrhythmia device implantation under dual antiplatelet therapy may also depend on the specific patient population and the type of device being implanted. For instance, a study on WATCHMAN™ device implantation found that dual antiplatelet therapy (DAPT) was a safe alternative to oral anticoagulant therapy for patients undergoing the procedure, with similarly low rates of complications such as stroke and device-associated thrombosis (DAT) in both groups (PUBMED:38021921). In summary, while some studies suggest that antiarrhythmia device implantation can be safe under dual antiplatelet therapy, there is evidence that the risk of bleeding complications, such as pocket hematoma, may be increased in certain populations. Therefore, the decision to proceed with device implantation under DA-therapy should be individualized, taking into account the patient's specific risk factors and the type of device being implanted.
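A note on the odds ratio in this record: the pocket-hematoma study cited in the answer (PUBMED:26347068) summarises its multivariable finding as OR 5.26 with a 95% CI of 1.56-16.64. As a minimal sketch of how such an interval is obtained for a single 2x2 table, the snippet below uses the standard Woolf (log-OR) method. The 14 hematoma patients are taken from the abstract, but how they and the remaining patients split by prolonged hospital stay is invented for the example, and the published CI is covariate-adjusted, so the numbers will not match exactly.

import math

# Illustrative 2x2 table (not study data): the 14 hematoma patients are
# reported in PUBMED:26347068; the split by prolonged stay is invented.
a, c = 9, 89      # prolonged stay: with hematoma / without hematoma
b, d = 5, 261     # normal stay:    with hematoma / without hematoma

odds_ratio = (a * d) / (b * c)                 # (9*261)/(5*89) ~ 5.28
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of ln(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI: {ci_low:.2f}-{ci_high:.2f}")

The wide interval mainly reflects the small number of hematoma events: with only 14 exposed patients, the 1/a and 1/b terms dominate the standard error.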
Instruction: Can cord formation in BACTEC MGIT 960 medium be used as a presumptive method for identification of M. tuberculosis complex? Abstracts: abstract_id: PUBMED:21114173 Can cord formation in BACTEC MGIT 960 medium be used as a presumptive method for identification of M. tuberculosis complex? Background: Serpentine cord formation in BACTEC MGIT 960 medium was evaluated as a rapid method for the presumptive identification of M. tuberculosis complex (MTBC). Material & Methods: A total of 2527 samples were processed for AFB culture using the MGIT 960 TB system over a period of three months. AFB smears were prepared from 1000 MGIT tubes flagged positive by the MGIT instrument and stained by the ZN method to examine the presence or absence of serpentine cording. Cord formation was compared with the PNBA [p-nitrobenzoic acid] test on the MGIT system, and all controversial cases were further evaluated by the NAP [p-nitro-α-acetylamino-β-hydroxypropiophenone] test on the BACTEC 460 TB system. Results & Discussion: Of the 1000 culture positives, 904 (90.4%) were identified as mycobacteria, of which 869 (96%) showed cording by smear microscopy. One (0.1%) was identified as nocardia. In the remaining 95 (9.5%) cases, the primary smear made from the MGIT vial was negative. Of the 869 cultures showing serpentine cord formation, 842 were confirmed as MTBC and 27 as NTM by the PNBA assay on the MGIT 960 TB system. The sensitivity, specificity, positive and negative predictive values were found to be 99.6%, 54%, 96% and 91%, respectively. The average detection time for the PNBA assay was eight days, whereas cording results were available on the same day of culture positivity. Conclusion: Though highly sensitive, cord formation is not very specific and hence cannot be the only test for presumptive diagnosis of MTBC. abstract_id: PUBMED:12710097 Evaluation of Bactec MGIT 960 fluorescent method in diagnosis of tuberculosis We evaluated the fluorescent Bactec MGIT 960, a system new to Poland, which is a fully automated, non-invasive system for the growth and detection of Mycobacterium, with the capacity to incubate and continuously monitor 960 7-ml culture tubes. This system is equipped with a special oxygen-quenching fluorescent sensor, which permits continuous monitoring of microbial growth. Processed specimens were inoculated into Bactec MGIT 960, Bactec 460 Tb and MB/BacT as well as onto Lowenstein-Jensen slants. The greatest number of isolates of M. tuberculosis complex was recovered using the Bactec MGIT 960 system (49/19.5%). The other systems detected M. tuberculosis complex as follows: Bactec 460 Tb (47/18.7%), MB/BacT (43/17.1%), L-J (38/15.1%). Mean detection times of mycobacterial growth in smear-positive specimens were 15.4 days for Bactec MGIT 960, 16.2 days for Bactec 460 Tb, 15.1 days for MB/BacT and 28.2 days for L-J medium. The contamination rates for each system were: 3.8% for Bactec MGIT 960 and Bactec 460 Tb, 3.6% for MB/BacT and 2.9% for L-J. In conclusion, the Bactec MGIT 960 system is a valuable alternative to the radiometric, semi-automated Bactec 460 Tb system. abstract_id: PUBMED:26786125 Evaluation of the effectiveness of BACTEC MGIT 960 for the detection of mycobacteria in Bangladesh. Objective: Tuberculosis (TB) caused by Mycobacterium tuberculosis has been identified as a re-emerging infectious disease with public health importance globally.
Exploitation of new laboratory techniques for precise identification of mycobacteria in clinical specimens is of great importance to improve diagnosis as part of global TB control efforts. Methods: The current study was conducted to evaluate the BACTEC MGIT 960 method in comparison with Lowenstein-Jensen (LJ) culture and light-emitting diode (LED) fluorescence microscopy for the isolation of mycobacteria among TB suspects from Bangladesh. A total of 421 specimens were tested with these methods. Results: Among the tested samples, 3.6% (n=15) were LED fluorescence microscopy positive, while 18 (4.2%) and 45 (10.6%) were recovered from LJ and MGIT 960 culture, respectively. The relative positivity found through the MGIT 960 system was 60% and 66.7% higher than that of LJ culture and LED fluorescence microscopy, respectively. The recovery rates of Mycobacterium tuberculosis complex ([MTC], 21 by MGIT and 16 by LJ culture) and non-tubercular mycobacteria ([NTM], 24 by MGIT and 2 by LJ culture) by MGIT 960 were 24% and 96% greater, respectively, than by LJ culture. Moreover, MGIT 960 was found to be highly sensitive (100%), specific (93.3%), accurate (93.6%) and a more rapid method in detecting mycobacteria when compared with LJ culture. Conclusion: The extended recovery of NTM and MTC through MGIT 960 argues for frequent application of this method to detect mycobacteria more effectively and rapidly. abstract_id: PUBMED:22664442 Presumptive identification of Mycobacterium tuberculosis complex based on cord formation in BACTEC MGIT 960 medium. We considered samples received for culture of mycobacteria using the BACTEC MGIT 960 system over a period of 1 year. Tubes flagged positive by MGIT were evaluated for the presence of serpentine cording. Cord formation was compared with isolates identified as Mycobacterium tuberculosis complex (MTC) based on the p-nitrobenzoic acid (PNB) test. Cords were found in 591 isolates, of which 584 (98.8%) were confirmed as MTC. The sensitivity and specificity of cord formation were found to be 99.7% and 89.9%, respectively. abstract_id: PUBMED:31594129 Evaluation of GeneXpert MTB/RIF and BACTEC-MGIT 960 for the detection of tuberculosis among pneumoconiosis-associated tuberculosis patients Objective: To evaluate the performance of GeneXpert MTB/RIF and BACTEC-MGIT 960 in detecting Mycobacterium tuberculosis and rifampicin resistance in pneumoconiosis-associated tuberculosis patients. Methods: From 133 recruited hospitalized cases of suspected active pneumoconiosis-associated tuberculosis, morning sputum samples were collected for modified L-J culture, conventional proportion-method drug susceptibility testing, GeneXpert MTB/RIF and BACTEC-MGIT 960. Sensitivity and specificity were analyzed for the 133 sputum samples. The positive rates for tuberculosis with the GeneXpert MTB/RIF test, BACTEC-MGIT 960 and modified L-J culture were 37.59%, 34.59% and 30.08%, respectively. There was no significant difference among the three tests (P > 0.05). Taking modified L-J culture as the reference, the sensitivities of GeneXpert MTB/RIF and BACTEC-MGIT 960 in detecting tuberculosis were 92.5% and 95.0%, respectively, and their specificities for rifampicin resistance were 86.0% and 91.4%, respectively. There was no significant difference between GeneXpert MTB/RIF and BACTEC-MGIT 960 (P > 0.05). Taking the conventional proportion-method drug susceptibility test as the reference, the sensitivities of GeneXpert MTB/RIF and BACTEC-MGIT 960 in detecting rifampicin resistance were 90.0% and 100%, and the specificities were 92.6% and 96.4%.
There was no significant difference between GeneXpert MTB/RIF and BACTEC-MGIT 960 (P > 0.05). Conclusion: GeneXpert MTB/RIF has good performance in detecting tuberculosis and rifampicin resistance. It has good application value among pneumoconiosis-associated tuberculosis patients. abstract_id: PUBMED:24709416 Determination of the activity of standard anti-tuberculosis drugs against intramacrophage Mycobacterium tuberculosis, in vitro: MGIT 960 as a viable alternative for BACTEC 460. BACTEC 460 has now been phased out, so the search for an alternative is imperative. We have determined the activity of standard anti-tuberculosis drugs against intramacrophage Mycobacterium tuberculosis, in vitro, by using the BACTEC 460 and MGIT 960 methods. The minimum inhibitory concentrations of isoniazid, rifampicin, ethambutol and streptomycin against intracellular M. tuberculosis H37Rv were found to be 0.2, 0.8, 8.0, and 5.0 μg/mL, respectively, by both methods. These results show a significant (p < 0.001) concordance between the minimum inhibitory concentrations obtained by these two different methods. The MGIT 960 system uses a robust fluorescence-quenching-based oxygen sensor, requires no radioisotope, is safe, and is relatively easy to operate. Apparently, this is the first report wherein MGIT 960 has been validated for anti-tubercular susceptibility testing against intracellular M. tuberculosis H37Rv. Our preliminary data thus clearly demonstrate that the MGIT 960 method can be considered a promising alternative to the BACTEC 460 method. abstract_id: PUBMED:24091105 Multicenter research on the BACTEC MGIT 960 system for the second-line drugs susceptibility testing of Mycobacterium tuberculosis in China. The reliability of the BACTEC MGIT 960 system for second-line drug (capreomycin [CPM], kanamycin [KAN], ofloxacin [OFX] and ethionamide [ETH]) susceptibility testing (DST) of Mycobacterium tuberculosis (M. tuberculosis) was compared to that of the traditional Lowenstein-Jensen (L-J) proportion method (PM) across four different sites in China. After resolution of discrepant results by retesting the strains using both methods in the National Reference Laboratory of tuberculosis, the overall concordance values between the 2 systems were 99.7% (kappa value: 0.97) for CPM, 99.7% (kappa value: 0.97) for KAN, 100.0% (kappa value: 1.00) for OFX, and 98.6% (kappa value: 0.95) for ETH. The average turnaround time with the BACTEC MGIT 960 system across the four sites was 8.9 ± 1.7 days, significantly shorter than 28 days with the traditional L-J PM. Therefore, the BACTEC MGIT 960 system is a reliable and rapid method for the second-line drug susceptibility testing of tuberculosis in China. Notably, a stricter quality control program should be routinely carried out when clinical laboratories perform second-line DST with the BACTEC MGIT 960 system. abstract_id: PUBMED:28601653 Evaluation of the BACTEC MGIT 960 system and the resazurin microtiter assay for susceptibility testing of Mycobacterium tuberculosis to second-line drugs. Drug resistance in tuberculosis is a major threat to public health and control of the disease worldwide. Given the need for rapid and accurate detection of Mycobacterium tuberculosis resistance to second-line drugs, this study evaluated the performance of the BACTEC MGIT 960 for second-line drug susceptibility testing in comparison with the resazurin microtiter assay (REMA), in order to implement the automated methodology in the diagnostic routine of a reference laboratory.
Drug susceptibility testing (DST) for second-line drugs of 151 MDR M. tuberculosis clinical isolates was performed by both BACTEC MGIT 960 and REMA, and a panel of 26 M. tuberculosis reference isolates from a proficiency test was tested by the BACTEC MGIT 960. DST for second-line drugs by the BACTEC MGIT 960 system was more rapid, highly reproducible and showed 100% proficiency. Based on these results, this methodology was successfully implemented in our diagnostic routine for all MDR-TB patients. abstract_id: PUBMED:2116448 Cord formation in BACTEC 7H12 medium for rapid, presumptive identification of Mycobacterium tuberculosis complex. We evaluated cord formation in BACTEC 7H12 medium as a criterion for rapid identification of Mycobacterium tuberculosis complex. Kinyoun-stained smears, prepared from 270 radiometrically positive BACTEC 7H12 bottles, were examined independently by three observers. Smears from 93.2, 88.6, and 83.0% of the M. tuberculosis complex cultures were read as cord positive, and smears from 97.3, 97.8, and 99.5% of the mycobacteria other than M. tuberculosis cultures were read as cord negative by the three observers, respectively. There was 93.3% agreement between the observers. The presence of cords in BACTEC 7H12 medium can be a reliable criterion for rapid, presumptive identification of M. tuberculosis complex. abstract_id: PUBMED:9705435 Cord formation in BACTEC medium is a reliable, rapid method for presumptive identification of Mycobacterium tuberculosis complex. Serpentine cord formation in BACTEC 12B medium was evaluated as a rapid method for the presumptive identification of M. tuberculosis complex. Kinyoun acid-fast stained smears were prepared from 666 positive BACTEC 12B bottles and examined for the presence or absence of serpentine cording. Cord formation had a sensitivity, specificity, positive predictive value, and negative predictive value of 89.2, 99.2, 98.5, and 94.2%, respectively. The evaluation of the presence of cord formation in BACTEC 12B medium is reliable and permits the rapid presumptive reporting of M. tuberculosis. Answer: Yes, cord formation in BACTEC MGIT 960 medium can be used as a presumptive method for the identification of M. tuberculosis complex (MTBC). Studies have shown that serpentine cord formation observed in BACTEC MGIT 960 medium is characteristic of MTBC. One study found that of 1000 culture positives, 904 (90.4%) were identified as mycobacteria, of which 869 (96%) showed cording by smear microscopy; 842 of these were confirmed as MTBC by the p-nitrobenzoic acid (PNBA) assay on the MGIT 960 TB system. The sensitivity, specificity, positive and negative predictive values of cord formation for identifying MTBC were found to be 99.6%, 54%, 96%, and 91%, respectively (PUBMED:21114173). Another study reported that out of 591 isolates showing cord formation, 584 (98.8%) were confirmed as MTBC, with the sensitivity and specificity of cord formation being 99.7% and 89.9%, respectively (PUBMED:22664442). However, while cord formation is highly sensitive, it is not very specific and hence cannot be the only test for the presumptive diagnosis of MTBC. This is because non-tuberculous mycobacteria (NTM) can occasionally form cords, which may lead to false-positive results. Therefore, while cord formation can be a useful initial indicator, it should be supplemented with additional confirmatory tests such as the PNBA assay or molecular methods for accurate identification of MTBC.
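A note on the diagnostic-accuracy figures in this record: all four numbers quoted for PUBMED:21114173 (sensitivity 99.6%, specificity 54%, PPV 96%, NPV 91%) follow from a single 2x2 table of cording versus confirmed identity. As a minimal sketch, the table below is a back-calculated approximation: the 842 true positives and 27 false positives are stated in the abstract, while the non-cording cells (3 MTBC, 32 NTM) are inferred so that the published percentages are recovered; they are not figures taken from the paper.

# Diagnostic accuracy of cord formation as a presumptive MTBC test.
# tp and fp are reported in PUBMED:21114173; fn and tn are inferred
# approximations that reproduce the published percentages.
tp = 842  # cording, confirmed M. tuberculosis complex
fp = 27   # cording, non-tuberculous mycobacteria (NTM)
fn = 3    # no cording, MTBC (inferred)
tn = 32   # no cording, NTM (inferred)

sensitivity = tp / (tp + fn)  # 842/845 ~ 0.996
specificity = tn / (tn + fp)  # 32/59  ~ 0.542
ppv = tp / (tp + fp)          # 842/869 ~ 0.969
npv = tn / (tn + fn)          # 32/35  ~ 0.914

print(f"sensitivity = {sensitivity:.1%}")
print(f"specificity = {specificity:.1%}")
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")

The asymmetry is the whole point of the answer: virtually no MTBC isolate fails to form cords (high sensitivity), but roughly half of the NTM isolates in this series formed cords too (low specificity), so a positive cording result still needs confirmation by PNBA or a molecular method before MTBC is reported.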
Instruction: Does the psychological request of the patient justify the surgery for hepatic hemangioma? Abstracts: abstract_id: PUBMED:15966176 Does the psychological request of the patient justify the surgery for hepatic hemangioma? Background/aims: The incidental finding of hemangiomas has increased, but the problem of the correct surgical indications of this tumor has yet to be solved. The aim of this work is to establish whether the psychological request for surgery from patients known to have a benign tumor of the liver must be avoided or not. Methodology: Age, sex, symptoms, estroprogestinic oral therapy, methods of diagnosis, surgical procedures, morbidity, mortality, postoperative hospital stay and follow-up of the patients affected by hepatic hemangioma, observed from 1992 to 2002 in our institution, have been considered. Results: Seventeen patients, with a mean age of 44 years (range 26-72), were hospitalized for hepatic hemangioma; 8 (47%) of them were operated on and 9 (53%) were managed by observation. The operated patients presented various symptoms. One patient was operated on for traumatic rupture of the hemangioma. Non-operated patients were asymptomatic or had slight dyspeptic symptoms not related to the tumor. The first diagnostic radiological examination was ultrasonography (US) in all cases. All lesions were larger than 4 cm. The types of surgical procedures were 5 enucleations and 3 hepatic resections. All operated patients resolved their clinical symptomatology, except two patients who had requested surgery for psychological reasons. These patients presented their symptoms again after 2 and 3 years of follow-up, respectively. Conclusions: Our results suggest that liver hemangiomas should be operated on for symptoms well related to the tumor or for bleeding. Psychological requests from the patients should be avoided every time. abstract_id: PUBMED:8714238 Indications for surgery in the treatment of hepatic hemangioma. Background/aims: Hepatic hemangiomas are controversial tumors, especially as far as treatment is concerned. This paper analyses a series of 26 cavernous hemangiomas of the liver diagnosed between 1982 and 1993. Materials And Methods: A descriptive study is made of cases, their treatment and subsequent follow-up averaging 5 years. Results: A preoperative diagnosis was made in 18 patients. Fifteen of the 26 patients underwent operation, and no deaths occurred. One patient with Kasabach-Merritt syndrome (3.8%) had complications due to rupture of the hemangioma. Postoperative complications were limited to two abscesses (13%). There were no recurrences in the follow-up (average 4.4 years). Three of the 15 surgery patients revealed residual hemangiomas. There were no complications in the 11 non-surgery patients (five of whom had giant hemangiomas) during a follow-up averaging 5.9 years. One patient initially without pain and with a hemangioma of 7.4 cm became symptomatic, and the hemangioma grew to 9.6 cm; another patient with pain became asymptomatic, with no change in hemangioma size. Conclusion: Most hemangiomas can be managed conservatively. Indication for surgery should be assessed in units with experience in liver surgery, where low morbidity and mortality rates can justify the intervention. abstract_id: PUBMED:27112250 Hepatic Hemangioma with Kasabach-Merritt Syndrome in an Adult Patient. Hemangiomas are the most common benign tumors of the liver.
They are generally asymptomatic, but giant hemangiomas can lead to abdominal discomfort, bleeding, or obstructive symptoms. Kasabach-Merritt syndrome is a rare but life-threatening complication of hemangioma, characterized by consumptive coagulopathy with large vascular tumors. More than 80% of Kasabach-Merritt syndrome cases occur within the first year of life. However, there are few reports of Kasabach-Merritt syndrome with giant hepatic hemangioma in adults and, as far as we know, no reports of Kasabach-Merritt syndrome with hepatic hemangioma treated with first-line medical treatment only. The most important treatment for this syndrome is removal of the large vascular tumor. However, surgical treatment entails risk of bleeding, and the patient's condition can militate against surgery. We herein present a case of unresectable giant hepatic hemangioma with disseminated intravascular coagulopathy. The patient was a 60-year-old woman who complained of hematochezia, ecchymosis, and abdominal distension. She refused all surgical management and was therefore treated with systemic glucocorticoids and beta-blockers. After two weeks of steroid therapy, she responded partially to the treatment. Her laboratory findings and hematochezia improved. She was discharged on hospital day 33 and observed without signs of bleeding for three months. abstract_id: PUBMED:24278075 Hereditary haemorrhagic telangiectasia in a patient taking anticoagulant drugs who has sustained facial trauma. The authors present the case of a 41-year-old patient with hereditary hemorrhagic telangiectasia (HHT), who previously underwent aortic valve replacement surgery, currently takes anticoagulant drugs and sustained extensive trauma to the nose as a result of a dog bite. HHT is diagnosed based on the presence of at least three out of four symptoms or signs: spontaneous epistaxis, vascular lesions in the internal organs, skin telangiectasias and a family history of the disease. The presented patient showed a hepatic angioma, a history of recurrent bleeding from the tongue and spontaneous epistaxis, as well as numerous skin telangiectasias. In his case, HHT coincided with chronic anticoagulant treatment implemented after implantation of the artificial aortic valve, which substantially modified the clinical picture and course of treatment. abstract_id: PUBMED:18277578 Anesthetic management of a patient with Osler-Weber-Rendu disease undergoing mitral valvuloplasty Osler-Weber-Rendu disease (OWRd) is an autosomal dominant disease with an incidence of 1-2 per 100,000 population. The triad of OWRd is fibrovascular dysplasia characterized by telangiectases, arteriovenous malformations and recurrent bleeding. We experienced anesthetic management of a patient with OWRd undergoing mitral valvuloplasty. The patient was a 79-year-old woman admitted to our hospital with a diagnosis of heart failure, mitral regurgitation, and chronic atrial fibrillation. Massive nasal bleeding before admission caused severe anemia (Hb 4.8 g/dl), and she received a transfusion. Preoperative examination revealed moderate mitral regurgitation with a prolapsed anterior leaflet, pulmonary hypertension, high cardiac output (6.85 l/min), and multiple hepatic angiomas. After induction of anesthesia, a pulmonary catheter was inserted. High cardiac output and high mixed-venous oxygen saturation were found until the end of surgery. Blood gas analyses of samples from several sites suggested a left-to-right shunt below the inferior vena cava.
Mitral valvuloplasty and tricuspid annuloplasty were completed under mild hypothermic cardiopulmonary bypass. Despite our attention to abnormal bleeding, the perioperative course was uneventful. For anesthetic management of OWRd, we must recognize the abnormal bleeding and the systemic vascular malformations preoperatively, and prepare for the possible complications. abstract_id: PUBMED:15952499 Fever as a sign of inflammatory syndrome in a female patient with hepatic haemangioma A 41-year-old patient presented with fever, night sweats, general malaise, abdominal pain, and substantial weight loss. Laboratory analysis suggested an inflammatory process. Diagnostic imaging revealed a hepatic haemangioma with a diameter of 20 cm. Because such giant haemangiomas of the liver can lead to an inflammatory syndrome, the tumour was surgically removed. Pathological analysis confirmed the clinical diagnosis, and evidence of extensive thrombosis and other vascular defects was found. Following treatment, the symptoms resolved without further complications. In patients with a giant haemangioma in the liver who present with an inflammatory syndrome, the haemangioma should be considered as the causal factor. For these patients, resection is the treatment of choice. abstract_id: PUBMED:33206631 Giant Cavernous Hemangioma of the Liver in a Patient with Autosomal Dominant Polycystic Kidney Disease. BACKGROUND Autosomal dominant polycystic kidney disease (ADPKD) is frequently associated with liver cysts, but an association with giant cavernous liver hemangioma is not mentioned in the literature. CASE REPORT We report the case of a 41-year-old man with ADPKD, secondary arterial hypertension, and stage 4 chronic kidney disease who presented with a 2-week history of persistent pain at the base of the right hemithorax and in the right hypochondrium. An ultrasound examination and a contrast-enhanced computed tomography scan revealed a giant cavernous liver hemangioma. Surgery was initially considered (though twice delayed because of the COVID-19 pandemic) but later refused because it would have left the patient with dangerously little liver parenchyma. CONCLUSIONS To our knowledge, this is the first reported case of ADPKD associated with cavernous liver hemangioma. Vascular endothelial growth factor could be the pathophysiological link between the 2 conditions. Further research may unravel the molecular biology that underlies this possible association, pointing to new therapeutic avenues for ADPKD. abstract_id: PUBMED:26318403 Ambulatory laparoscopic minor hepatic surgery: Retrospective observational study. Introduction: Over the last decade, laparoscopic hepatic surgery (LHS) has been increasingly performed throughout the world. Meanwhile, ambulatory surgery has been developed and implemented with the aims of improving patient satisfaction and reducing health care costs. The objective of this study was to report our preliminary experience with ambulatory minimally invasive LHS. Methods: Between 1999 and 2014, 172 patients underwent LHS at our institution, including 151 liver resections and 21 fenestrations of hepatic cysts. The consecutive series of highly selected patients who underwent ambulatory LHS were included in this study. Results: Twenty patients underwent ambulatory LHS. Indications were liver cysts in 10 cases, liver angioma in 3 cases, focal nodular hyperplasia in 3 cases, and colorectal hepatic metastasis in 4 cases. The median operative time was 92 minutes (range: 50-240 minutes).
The median blood loss was 35 mL (range: 20-150 mL). There were no postoperative complications or re-hospitalizations. All patients were hospitalized after surgery in our ambulatory surgery unit, and were discharged 5-7 hours after surgery. The median postoperative pain score at the time of discharge was 3 (visual analogue scale: 0-10; range: 0-4). The median quality-of-life score at the first postoperative visit was 8 (range: 6-10) and the median cosmetic satisfaction score was 8 (range: 7-10). Conclusion: This series shows that, in selected patients, ambulatory LHS is feasible and safe for minor hepatic procedures. abstract_id: PUBMED:34124204 Case Report: Giant Biatrial Myxoma Mimicking Malignant Cardiac Tumor in a Patient With a Hepatic Angiomatous Mass. Cardiac myxomas, primarily originating from the left atrium, are the most prevalent types of benign cardiac tumors; however, biatrial myxomas are extremely rare. Herein, we present a rare case of a 55-year-old male with exertional dyspnea and intermittent chest discomfort due to a giant biatrial mass with concomitant atrial fibrillation and hepatic hemangioma. The giant tumor, with its peduncle at the interatrial septum, involved both atria and bulged through the tricuspid valve into the right ventricle during systole. Hence, excision of the giant cardiac tumor (which grossly comprised three parts: stiff, fleshy, and soft) and a Cox-Maze IV procedure were performed, with the resected specimen measuring 100 × 80 × 40 mm. The patient, who was in a stable condition, was discharged home on the 12th postoperative day. Thus, given the excellent postoperative results achieved, surgical treatment of large multi-cavitary benign cardiac tumors is feasible and should be considered a potentially curative therapy. abstract_id: PUBMED:16483185 Unique association of non-functioning pheochromocytoma, ganglioneuroma, adrenal cortical adenoma, hepatic and vertebral hemangiomas in a patient with a new intronic variant in the VHL gene. We analyzed the clinical, hormonal, immunohistochemical and genetic features in a 69-yr-old Caucasian woman with a very rare "composite and mixed pheochromocytoma". This was characterized by right adrenal pheochromocytoma associated with homolateral ganglioneuroma and contralateral adrenal cortical adenoma. The three tumors, incidentally discovered, proved to be non-functioning (normal secretion of catecholamines and of other neuroendocrine peptides, glucocorticoids, mineralocorticoids and androgens). Accordingly, the patient showed no sign or symptom of endocrine disease. Computed tomography (CT) and magnetic resonance (MR) demonstrated a typical adenomatous lesion on the left adrenal gland with early uptake of the radiotracer on radioiodine (131I)-norcholesterol adrenal scintigraphy, while the contralateral gland showed hyperdensity on CT, hyperintensity on MR and no uptake at adrenal scintigraphy. In addition, CT and MR revealed a vertebral and two hepatic hemangiomas. The right adrenal gland was surgically removed and, microscopically, pheochromocytoma and ganglioneuroma areas appeared intermixed without a predominant component. The former showed strong immunoreactivity for chromogranin, synaptophysin, vascular endothelial growth factor (VEGF) and CD34, while the latter appeared positive for neuron-specific enolase (NSE) and S-100. Peripheral blood genomic DNA analysis revealed a new intronic variant (5557A > G) in the von Hippel-Lindau gene (VHL) not observed in our control population.
Answer: The psychological request of the patient alone does not justify surgery for hepatic hemangioma. According to the study by PUBMED:15966176, while all operated patients resolved their clinical symptomatology, two patients who had requested surgery for psychological reasons presented their symptoms again after 2 and 3 years of follow-up, respectively. The conclusion drawn from this study suggests that liver hemangiomas should be operated on for symptoms well related to the tumor or for bleeding, and psychological requests from the patients should be avoided. This indicates that the psychological distress of knowing they have a benign liver tumor is not considered a sufficient reason for surgery, especially when considering the potential for symptom recurrence and the lack of physical symptoms directly attributable to the hemangioma.
Instruction: Fitness in the fit: does physical conditioning affect cardiovascular risk factors in middle-aged marathon runners? Abstracts: abstract_id: PUBMED:8732372 Fitness in the fit: does physical conditioning affect cardiovascular risk factors in middle-aged marathon runners? Objective: The study was designed to assess cardiovascular risk factors in marathon runners with different degrees of fitness. Design: A total of 30 male middle-aged marathon runners were divided according to their marathon running time into fit (265 +/- 8 min), fitter (222 +/- 5 min) and fittest (178 +/- 12 min). The three groups of 10 runners each were comparable in age, weight, and body surface area. Cardiovascular risk factors were assessed by measuring arterial pressure before and during exercise (150 watts) and determination of plasma lipoproteins, uric acid, glucose and white blood cell count before and after a marathon run. Results: All measured laboratory values such as high-density lipoprotein cholesterol (P < 0.05), low-density lipoprotein cholesterol (P < 0.05), total cholesterol (non-significant), triglycerides (non-significant), blood sugar (non-significant), uric acid (P < 0.01) and white blood cell count (P < 0.05) indicated a lower cardiovascular risk in the fastest when compared with the slowest runners. Resting blood pressure was similar in the three groups but consistently lower at all levels of exercise in the fittest when compared with the less fit runners. The fittest runners also showed greater increases in high-density lipoprotein cholesterol after the marathon run (14% vs 8% in the slowest runners, P < 0.005). Conclusions: We conclude that even at the extreme end of a continuum such as represented by well-conditioned, middle-aged marathon runners, cardiovascular risk factors are related to the degree of fitness, as measured by the marathon running time. abstract_id: PUBMED:24408890 Marathon run: cardiovascular adaptation and cardiovascular risk. The first marathon run as an athletic event took place in the context of the Olympic Games in 1896 in Athens, Greece. Today, participation in a 'marathon run' has become a global phenomenon attracting young professional athletes as well as millions of mainly middle-aged amateur athletes worldwide each year. One of the main motives for these amateur marathon runners is the expectation that endurance exercise (EE) delivers profound beneficial health effects. However, with respect to the cardiovascular system, a controversial debate has emerged whether the marathon run itself is healthy or potentially harmful to the cardiovascular system, especially in middle-aged non-elite male amateur runners. In this cohort, exercise-induced increases in the cardiac biomarkers troponin and brain natriuretic peptide, together with acute functional cardiac alterations, have been observed and interpreted as potential cardiac damage. Furthermore, in the cohort of 40- to 65-year-old males engaged in intensive EE, a significant risk for the development of atrial fibrillation has been identified. Fortunately, recent studies demonstrated a normalization of the cardiac biomarkers and the functional alterations within a short time frame. Therefore, these alterations may be perceived as physiological myocardial reactions to the strenuous exercise, and the term 'cardiac fatigue' has been coined.
This interpretation is supported by a recent analysis of 10.9 million marathon runners demonstrating that there was no significantly increased overall risk of cardiac arrest during long-distance running races. In conclusion, intensive and long-lasting EE, e.g. running a full-distance marathon, results in high cardiovascular strain whose clinical relevance, especially for middle-aged and older athletes, is unclear and remains a matter of controversy. Furthermore, there is a need for evidence-based recommendations with respect to medical screening and training strategies, especially in male amateur runners over the age of 35 years engaged in regular and intensive EE. abstract_id: PUBMED:36431252 Exercise-Associated Hyponatremia in Marathon Runners. Exercise-associated hyponatremia (EAH) was first described as water intoxication by Noakes et al. in 1985 and has become an important topic linked to several pathological conditions. However, despite progressive research, neurological disorders and even deaths due to hyponatremic encephalopathy continue to occur. Therefore, and given the growing attention to exercise-associated hyponatremia, this topic is of great importance for marathon runners and all professionals involved in runners' training (e.g., coaches, medical staff, nutritionists, and trainers). The present narrative review sought to evaluate the prevalence of EAH among marathon runners and to identify associated etiological and risk factors. Furthermore, the aim was to derive preventive and therapeutic action plans for marathon runners based on current evidence. The search was conducted on PubMed, Scopus and Google Scholar using a predefined search algorithm by aggregating multiple terms (marathon run; exercise; sport; EAH; electrolyte disorder; fluid balance; dehydration; sodium concentration; hyponatremia). By this criterion, 135 articles were considered for the present study. Our results revealed that a complex interaction of different factors could cause EAH, which can be differentiated into event-related (high temperatures) and person-related (female sex) risk factors. There is variation in the reported prevalence of EAH, and two major studies indicated an incidence ranging from 7 to 15% for symptomatic and asymptomatic EAH. Athletes and coaches must be aware of EAH and its related problems and take appropriate measures for both training and competition. Coaches need to educate their athletes about the early symptoms of EAH to intervene at the earliest possible stage. In addition, individual hydration strategies need to be developed for the daily training routine, ideally in regard to sweat rate and salt losses via sweat. Future studies need to investigate the correlation between the risk factors of EAH and specific subgroups of marathon runners. abstract_id: PUBMED:37249222 Cardiovascular Disease Risk Factor Profile of Experienced Male Amateur Marathon Runners: A Systematic Review. Context: Male amateur marathon runners represent a unique subset of the population who may be at increased risk of cardiovascular disease (CVD) due to their underlying risk factors and their involvement in vigorous exercise such as marathon running. Objective: To assess the modifiable risk factors (MRFs) of CVD in experienced male amateur marathon runners and health interventions on CVD risk factors. Data Sources: CINAHL, Cochrane Library, Embase, Medline, and SPORTDiscus. Study Selection: Studies selected according to the inclusion criteria. Study Design: Systematic review. Level Of Evidence: Level 3.
Data Extraction: The publication dates included were from June 1, 2008 to February 29, 2020. Published primary epidemiological, observational, randomized controlled trial (RCT) and/or non-RCT studies assessing the MRFs of CVD and health interventions on CVD risk factors in male amateur marathon runners aged ≥18 years and written in the English language were included in the review. Results: Five studies met the inclusion criteria for analysis. These included male amateur marathon runners (n = 862), aged 42 to 77 years. Hypertension, hyperlipidemia, smoking, and alcohol use were MRFs positively associated with an increased risk of coronary atherosclerosis found in a subset of male marathon runners. None of the included studies examined health interventions on CVD risk factors. All 5 studies were of good quality according to the National Heart, Lung, and Blood Institute quality assessment tools used. The risk of bias was low to moderate. Conclusion: There is a paucity of observational studies evaluating the CVD MRFs. Negative lifestyle behaviors exist within this population despite their engagement in physical exercise through marathon running. Marathon running does not negate the long-term effects caused by past negative lifestyle behaviors. This systematic review identifies that this population may not be aware of their possible risk of atherosclerosis and, consequently, CVD. abstract_id: PUBMED:30744458 Cardiovascular stress biomarker assessment of middle-aged non-athlete marathon runners. Background: Cardiovascular safety of marathon running in recreational runners remains unclear. We set up this study to comprehensively evaluate the effects of a marathon run on the profile of cardiovascular stress biomarkers in connection with transthoracic echocardiography. Design And Methods: Thirty-three healthy male amateur runners, aged ≥50 years (mean age 57 ± 7), were enrolled. Venous blood samples were obtained before the marathon, just after the race, 2-4 and seven days after the marathon. Using novel single molecule counting technology, we measured plasma concentrations of high-sensitivity cardiac troponin I (hs-cTnI) and endothelin-1. N-terminal pro B-type natriuretic peptide was measured using electrochemiluminescence. Each participant had transthoracic echocardiography before and immediately after the race. Results: We observed a sharp rise in the levels of all biomarkers after the race (all p < 0.01), which subsequently normalized after 2-4 days and stayed within the normal range after seven days. Runners with intensive training programmes (>169 km/month, the median for the studied group) had a lower hs-cTnI leak after the race (median 15.11 ng/ml and 31.2 ng/ml, respectively; p < 0.05). Neither transthoracic echocardiography measures of ventricles nor strain rates changed after the run. We observed a decrease in the left atrial volume index and an increase in the maximal right atrial volumes (all p < 0.05). Changes in Doppler indices of diastolic function suggestive of alteration in left ventricular relaxation were observed. Conclusions: A marathon run is associated with sharp and significant rises in the biomarkers of cardiovascular stress. The profile of these changes, however, along with echocardiographic parameters, does not suggest irreversible myocardial damage. abstract_id: PUBMED:36398372 Inflammatory activation biomarker profile after marathon running and its impact on cardiovascular stress in amateur middle-aged male runners.
Background: Cardiovascular safety of marathon running in middle-aged amateurs remains unclear. We previously hypothesized that transient release of cardiac troponin I (cTnI) and N-terminal pro-B-type natriuretic peptide (NT-proBNP), in addition to an acute inflammatory response to exercise, may be the cause. Objectives: To evaluate the effects of running a marathon on inflammatory biomarkers, and their impact on cardiovascular function. Material And Methods: Thirty-three healthy male amateur runners aged ≥50 (mean age: 57 ± 7 years) were enrolled in the study. Venous blood samples were obtained before the marathon, just after the race, and 2-4 days and 7 days after the marathon. Using novel single molecule counting (SMC) technology, we measured plasma concentrations of interleukin-6 (IL-6) and tumor necrosis factor alpha (TNF-α). White blood cell (WBC) count was measured using a certified hematology analyzer. The results were related to previous analyses on cardiovascular stress and endothelial function biomarkers. Transthoracic echocardiography (TTE) and cardiac magnetic resonance (CMR) were used to determine myocardial function. Results: We observed a sharp rise of all studied biomarkers after the race, which subsequently normalized after 2-4 days and stayed within the normal range 7 days after the race. We found no correlation between inflammatory and cardiovascular stress biomarkers. Transthoracic echocardiography and CMR did not show ischemic or inflammatory myocardial damage. Conclusions: Marathon running is associated with a sharp and significant rise in inflammatory and cardiovascular stress biomarkers. We found no connection between immune activation and cardiac biomarker release. Cardiovascular imaging showed no myocardial damage due to ischemia or inflammation. abstract_id: PUBMED:32256389 Cardiovascular Remodeling Experienced by Real-World, Unsupervised, Young Novice Marathon Runners. Aims: Marathon running is a popular ambition in modern societies, inclusive of non-athletes. Previous studies have highlighted concerning transient myocardial dysfunction and biomarker release immediately after the race. Whether this method of increasing physical activity is beneficial or harmful remains a matter of debate. We examine in detail the real-world cardiovascular remodeling response following competition in a first marathon. Methods: Sixty-eight novice marathon runners (36 men and 32 women) aged 30 ± 3 years were investigated 6 months before and 2 weeks after the 2016 London Marathon race in a prospective observational study. Evaluation included electrocardiography, cardiopulmonary exercise testing, echocardiography, and cardiovascular magnetic resonance imaging. Results: After 17 weeks of unsupervised marathon training, runners revealed a symmetrical, eccentric remodeling response with 3-5% increases in left and right ventricular cavity sizes, respectively. Blood pressure (BP) fell by 4/2 mmHg (P < 0.01) with a reduction in arterial stiffness, despite only 11% demonstrating a clinically meaningful improvement in peak oxygen consumption, with an overall non-significant 0.4 ml/min/kg increase in peak oxygen consumption (P = 0.14). Conclusion: In the absence of supervised training, exercise-induced cardiovascular remodeling in real-world novice marathon runners is more modest than previously described and occurs even without improvement in cardiorespiratory fitness.
The responses were similar in men and women, who experienced a beneficial BP reduction with no evidence of myocardial fibrosis or persistent edema when achieving average finishing times. abstract_id: PUBMED:26159054 Urinary incontinence in non-professional female marathon runners. Objective: To determine the prevalence of urinary incontinence within a population of female recreational runners during a marathon. Methods: Observational study carried out during a marathon, based on the analysis of a questionnaire handed to the participants before the beginning of the race. The questionnaire was handed to 800 participants, of whom 517 (64.6%) agreed to complete it. Among the participants, 268 (52.4%) were marathon runners and 243 (47.5%) were relay runners. Results: Mean age of the runners was 41.1 (±9.7) years, 479 (93.7%) were Caucasian, mean body mass index was 20.7 (±1.9) kg/m², and 173 (34%) were nulliparous. Among responders, 157 (30.7%) runners reported urinary incontinence symptoms (under any circumstances). Of the 157 runners who reported urinary incontinence, 83 (52.9%) experienced urinary leaks while running; in half of these cases, the leaks usually arose at the end of the race. Urinary incontinence during coughing, sneezing or laughing was reported by 96/517 (18.5%) women. The prevalence of urge urinary incontinence was 63/517 (12%). Concerning the frequency of urinary incontinence, 39/517 (7.5%) women reported leakage at least once weekly. For urinary incontinence bother, the mean score on the Visual Analogue Scale (VAS 0 to 100) was 1.6 (±1.7). Conclusion: In this series, the prevalence of urinary incontinence was 30.7% within a population of female recreational runners. This rate seems to be similar to the current prevalence in the general population. abstract_id: PUBMED:36910804 Impact of marathon performance on muscles stiffness in runners over 50 years old. Introduction: The research examines the relationship between marathon performance and muscle stiffness changes from pre- to post-marathon in recreational runners aged 50+ years. Methods: Thirty-one male long-distance runners aged 50-73 years participated in the experiment. The muscle stiffness of the quadriceps and calves was measured in two independent sessions, the day before the marathon and 30 min after the completed marathon run, using a Myoton device. Results And Discussion: The 42.195-km run was completed in 4:30:05 h ± 35.12 min, which indicates an intensity of 79.3% ± 7.1% of HRmax. The long-term, low-intensity running exercise (marathon) in older recreational runners, at this low level of HRmax and VO2max, produced no statistically significant changes in muscle stiffness (quadriceps and calves). Reduced muscle stiffness (p = 0.016) was found only in the triceps of the calf in the dominant (left) leg. Moreover, to optimally evaluate the marathon and adequately prepare for the performance training program, we need to consider the direct and indirect analyses of running economy, running technique, and the HRmax and VO2max variables. These variables significantly affect marathon exercise. abstract_id: PUBMED:25831468 Prevalence of Subclinical Coronary Artery Disease in Middle-Aged, Male Marathon Runners Detected by Cardiac CT. Purpose: To evaluate the prevalence of coronary artery disease (CAD) in middle-aged, male marathon runners using coronary dual source CT angiography (DSCTA).
Materials And Methods: 50 male marathon runners older than 45 years (mean age: 52.7, standard deviation: 5.9 years, range: 45 to 67 years) received DSCTA including calcium scoring (CS) in addition to standard pre-participation screening. Based on standard risk factors, the risk for coronary events was calculated using the PROCAM score. Coronary status was defined using the following system: 1. absence of CAD (CS zero, no coronary plaques); 2. mild coronary atherosclerosis (CS > 0, coronary plaques with luminal narrowing < 50%); 3. moderate coronary atherosclerosis (CS > 0, luminal narrowing > 50%); 4. significant CAD (CS > 0, luminal narrowing > 75%). Results: The mean PROCAM score was 1.85% (standard deviation = 1.56, range 0.39 to 8.47%). 26/50 marathon runners had no atherosclerosis. Of the remaining 24 participants, 1 had significant CAD, 3 had moderate coronary atherosclerosis, and 20 had mild coronary atherosclerosis. Treadmill exercise testing was unremarkable in terms of myocardial ischemia in all participants. Age, systolic blood pressure, personal minimum time, family history of cardiovascular disease, and PROCAM score were factors associated with an increased risk for coronary atherosclerosis. Conclusion: Coronary atherosclerosis can be detected in almost 50% of male marathon runners older than 45 years. In 24% of the participants, plaques were located in the proximal coronary system. However, only a minority of these persons have obstructive CAD. As expected, treadmill exercise testing failed to detect these persons, who possibly have a higher risk for coronary events. Key Points: • Coronary atherosclerosis can be detected in ~50% of male marathon runners > 45 years. • Only a minority of these persons have obstructive CAD. • Treadmill exercise testing failed to detect these persons. • Cardiac CT might help to identify athletes with elevated risk for coronary events, especially in persons with a family history of coronary artery disease. Answer: Yes, physical conditioning does affect cardiovascular risk factors in middle-aged marathon runners. A study designed to assess cardiovascular risk factors in marathon runners with different degrees of fitness found that laboratory values such as high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, uric acid, and white blood cell count indicated a lower cardiovascular risk in the fastest runners compared to the slowest runners. Additionally, the fittest runners showed greater increases in high-density lipoprotein cholesterol after the marathon run, and resting blood pressure was similar in all groups but consistently lower at all levels of exercise in the fittest compared to the less fit runners (PUBMED:8732372). This suggests that even among well-conditioned, middle-aged marathon runners, cardiovascular risk factors are related to the degree of fitness as measured by marathon running time.
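The four-level coronary grading scheme reported in PUBMED:25831468 amounts to a small decision rule over the calcium score and the maximal luminal narrowing. A minimal sketch follows; the function and argument names are hypothetical, and the plaque criterion is simplified into the two numeric inputs.

```python
# Hedged sketch of the grading scheme in PUBMED:25831468 (names invented).
def classify_coronary_status(calcium_score: float, max_narrowing_pct: float) -> str:
    """Grade coronary status from a calcium score (CS) and the maximal
    luminal narrowing in percent, per the study's four-level system."""
    if calcium_score == 0 and max_narrowing_pct == 0:
        return "1: absence of CAD"
    if max_narrowing_pct > 75:
        return "4: significant CAD"
    if max_narrowing_pct > 50:
        return "3: moderate coronary atherosclerosis"
    return "2: mild coronary atherosclerosis"

# Example: a CS-positive runner with a 60% stenosis falls into grade 3.
print(classify_coronary_status(calcium_score=120, max_narrowing_pct=60))
```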
Instruction: Disease management for diabetes among family physicians and general internists: opportunism or planned care? Abstracts: abstract_id: PUBMED:11573720 Disease management for diabetes among family physicians and general internists: opportunism or planned care? Background And Objectives: Diabetes requires substantial ongoing medical management and use of monitoring tests. However, physicians' performance of these tests is often suboptimal. This study explored primary care physicians' management of diabetes in the context of both planned diabetes visits and acute visits for conditions unrelated to diabetes. Methods: Semi-structured depth interviews were conducted with 12 primary care physicians in 9 family practice and internal medicine practices distributed throughout the state of South Carolina. All interviews were tape recorded and transcribed. Themes, divergences, and trends were identified and discussed by the investigators. Results: Although all participants reported a preference toward planned diabetes management, because most patients fail to adhere to scheduled care, opportunistic disease management tended to be the default mode of diabetes care. Participants reported performing appropriate tests during scheduled visits but acknowledged that when confined to acute visits, diabetes care was difficult to perform. Reasons included time constraints and patient agenda. Participants reported that inadequate tracking of completion of diabetes standards of care influenced their adherence to guidelines. Conclusions: The current system of delivering diabetes care opportunistically in the context of non-diabetes acute visits may need to be more closely examined in an effort to improve the delivery of services. abstract_id: PUBMED:34931522 The contribution of family physicians to chronic disease management through continuity of care in Ghana. Chronic non-communicable diseases contribute significantly to Ghana's disease burden. Ghana's ability to achieve universal health coverage is threatened by the rising burden of chronic non-communicable diseases. There is a high unmet need for cardiovascular diseases care, with primary health care for cardiovascular diseases not being readily available, equitable, or sensitive to the requirements of target populations. The contribution of family physicians in the management of the chronic disease burden through care continuity cannot be overemphasised. This is a short report of the implementation of a chronic care clinic by a family physician in Manna Mission Hospital, which is located in the Greater Accra region of Ghana. Before the implementation, there was no such clinic in the hospital and patients with chronic conditions who visited the facility were sometimes lost to follow-up. The clinic which commenced in January 2019 has provided care for patients with chronic non-communicable diseases to date. The most common chronic diseases managed at the clinic include hypertension and heart failure, diabetes, stroke, asthma, sickle cell disease, and joint disorders. This report gives an account of the contribution of family physicians to chronic disease burden management through continuity of care in a low-resource setting like Ghana. abstract_id: PUBMED:30947595 The Provision of Health Care by Family Physicians in Taiwan as Illustrated With Population Pyramids. Family physicians serve as personal doctors for individuals and their families and also act as gatekeepers of the health care system. 
If no special status is accorded to family physicians, however, then the rates at which health care recipients utilize their services might be affected. In the present cross-sectional study, representative claims data sets for 2010 from Taiwan's National Health Insurance program, a health care system in which beneficiaries are not required to register with a family physician, were used to investigate the provision of health care to the population by family physicians. Among 919,206 beneficiaries with a total of 13,713,199 ambulatory visits, 49.1% had visited family physicians, 34.1% had visited internists, 24.3% had visited pediatricians, and 38.9% had visited otolaryngologists. Women (χ²(1) = 538, P < .001) and patients aged 65 and above (χ²(1) = 16,000, P < .001) were more likely to visit family physicians rather than other specialties. The onion-shaped population pyramid for family medicine visits mirrored that of the general population, and the proportion visiting family physicians increased with increasing age. Among 112,289 patients with essential hypertension, 63,379 patients with diabetes mellitus, and 80,090 patients with hyperlipidemia, only 35.3%, 32.0%, and 31.1%, respectively, had visited family physicians. The age and sex distributions of these patients were illustrated with population pyramids for data visualization and direct comparisons. Taken together, the results of this study indicate that the utilization of family physicians in Taiwan and the effectiveness of their associated role in chronic disease management still have room for improvement. abstract_id: PUBMED:17626549 Better for ourselves and better for our patients: chronic disease management in primary care networks. Capital Health in Edmonton, Alberta, implemented a system-wide chronic disease management model to support people with chronic disease and their primary care physicians. Groups of family physicians, in partnership with the health region, developed primary care networks to provide services that are customized to meet the priorities of the local community. Management of chronic disease is a cornerstone service, and diabetes management is the most fully developed program. Key to its success are standardized protocols, consistent follow-up and patient education by trained primary care nurses. This model will be used as a template for the management of other chronic diseases. abstract_id: PUBMED:34666754 The impact that family members' health care experiences have on patients' trust in physicians. Background: A family member's negative experiences with medical care have long-term effects on a patient's attitudes and emotions. However, the impact of family members' experiences on patients' trust in their own physicians and in physicians generally is poorly understood. This study aims to quantify these associations. Methods: A cross-sectional online survey involving adults with non-communicable diseases (cardiac disease, diabetes, cancer, depression, and rheumatic disease) was conducted in Japan during April 2020. The main exposure variable was dissatisfaction with the medical care that family members had received. The main outcomes were patients' (N = 661) own trust in their personal physicians and in physicians generally. The study adopted the Japanese version of the Abbreviated Wake Forest Physician Trust Scales. Both 5-item scales (general and individual physician trust) were translated and validated for the study.
The total scores were transformed into a scale of 0-100 points. A series of linear mixed-effects models accounting for clustering by prefecture was fitted. Results: The results showed a lower rating for trust in physicians generally as compared to trust in the respondent's personal physician (mean 57.0 vs. 66.4 points; p < 0.001). Furthermore, dissatisfaction with a family member's medical care was associated with lower trust in physicians generally (mean difference -9.58, 95% CI -12.4 to -6.76). Interestingly, dissatisfaction with a family member's care was also associated with lower trust in the respondent's personal physician (mean difference -3.19, 95% CI -6.02 to -0.36), but the magnitude of this association was weaker. The lower trust in personal physicians may be mediated by reduced trust in physicians generally. Conclusions: We suggest that physicians enquire about patients' past negative experiences, including dissatisfaction with family members' medical care, to repair hidden loss of trust when they sense that patients doubt them or physicians generally. abstract_id: PUBMED:10351265 Seamless health care for chronic diseases in a dual health care system: managed care and the role of family physicians. Neither private nor state-run health care systems are perfect. Although there is increasing evidence that Health Maintenance Organizations (HMOs) provide comparable care at lower cost, HMOs tend to select healthy patients. The dual health care system in Hong Kong spends about 3.9 per cent of GDP, with health indices among the best in the world. Hong Kong still faces the problem of escalating health care expenditure. One should take advantage of the dual health care system to evolve a new paradigm for a primary-led seamless health care service. The Diabetes Centre of a university teaching hospital, together with the University Department of Community and Family Medicine, has started a structured shared care programme in diabetes mellitus, involving general practitioners in both the private and public sectors and integrating primary with secondary care. This programme is starting to develop an infrastructure for providing quality care at an affordable cost for a large pool of patients with chronic disease. Unlike other "managed care schemes", this one is not run by profit-oriented companies, but by health professionals with an interest in providing the best possible care at an affordable cost. The "disease management" approach needs a care delivery system without traditional boundaries, and a continuous improvement process which develops and refines the knowledge base, guidelines and delivery system. abstract_id: PUBMED:29290201 A Definition of Family Medicine and General Practice. Definitions of general practice and family medicine have tended to be difficult to understand. We propose a new definition that is brief and clear. General practice / family medicine is defined as the medical specialty that manages common and long-term illnesses in children and adults, focusing on overall health and well-being. The definition should be followed by local examples of diseases within the scope of primary care, such as asthma and diabetes, as well as end-of-life care. This definition is based on the concept of common illnesses, reflecting the reality that family physicians manage most health conditions in a community. Unlike 'comprehensive care', 'common illnesses' is a scope of practice that is familiar and credible to laypersons.
The new definition focuses on a unique and enduring attribute of the specialty: holistic care. This core characteristic of general practice encompasses patient-centred care and continuity across the family lifecycle. By being sharply focused, the definition can guide the nature of clinical care in general practice as well as the content of undergraduate and postgraduate training in family medicine. abstract_id: PUBMED:38196437 Physicians' Knowledge, Attitudes and Barriers Regarding Chronic Kidney Disease Diagnosis and Management in Saudi Arabia. Background: Chronic kidney disease (CKD) is defined as abnormalities of kidney structure and/or function persisting for a minimum of three months. Methods: An online cross-sectional study was conducted in the Kingdom of Saudi Arabia (KSA) between June and August 2022 to assess the knowledge, attitudes, practices, and barriers among family, internal medicine, and general physicians related to CKD screening, diagnosis, and management. Results: A total of 427 physicians were included in the study. The majority exhibited a strong grasp of the accurate definition of CKD (83%) and recognized common risk factors, such as diabetes (99%), drugs (95%), and hypertension (98%). Two-thirds of physicians were aware of the five stages of CKD and identified estimated glomerular filtration rate (eGFR) and creatinine clearance as the most suitable markers for kidney function. Physicians also displayed knowledge of CKD-related complications. However, a noticeable gap between knowledge and practice was evident. Only one-third of participants reported screening their patients every year, primarily using serum creatinine (92.5%) and eGFR (97%) for diagnosis, while only 16% recognized that stage 4 CKD is the appropriate time to refer patients to nephrologists. In terms of barriers, the majority of physicians reported encountering low barriers to CKD management, but general practitioners working in primary healthcare centers experienced significantly higher levels of barriers. Conclusion: Most participants in our study possess a good level of knowledge and positive attitudes towards CKD diagnosis and management. Nevertheless, a discrepancy between knowledge and practical application, particularly in terms of screening frequency and timely referral, highlights the need for educational efforts to improve physician practice in KSA. These findings underscore the importance of addressing this gap to promote effective CKD management. abstract_id: PUBMED:30871513 Interprofessional collaboration in diabetes care: perceptions of family physicians practicing in or not in a primary health care team. Background: In Canada, most patients with type 2 diabetes mellitus (T2DM) are cared for in the primary care setting in the practices of family physicians. This care is delivered through a variety of practice models ranging from a single practitioner to interprofessional team models of care. This study examined the extent to which family physicians collaborate with other health professionals in the care of patients with T2DM, comparing those who are part of an interprofessional health care team called a Primary Care Network (PCN) to those who are not part of a PCN.
Methods: Family physicians in Alberta, Canada were surveyed to ascertain: which health professionals they refer to or have collaborative arrangements with when caring for T2DM patients; satisfaction and confidence with other professionals' involvement in diabetes care; and perceived effects of having other professionals involved in diabetes care. Chi-squared and Fisher's exact tests were used to test for differences between PCN and non-PCN physicians. Results: 170 (34%) family physicians responded to the survey, of whom 127 were PCN physicians and 41 were non-PCN physicians (2 not recorded). A significantly greater proportion of PCN physicians vs non-PCN physicians referred patients to pharmacists (23.6% vs 2.6%) or had collaborative working arrangements with diabetes educators (55.3% vs 18.4%), dietitians (54.5% vs 21.1%), or pharmacists (43.1% vs 21.1%), respectively. Regardless of PCN status, family physicians expressed greater satisfaction and confidence in specialists than in other family physicians or health professionals in the medication management of patients with T2DM. Physicians who were affiliated with a PCN perceived that interprofessional collaboration enabled them to delegate diabetes education and monitoring and/or adjustment of medications to other health professionals and resulted in improved patient care. Conclusions: This study sheds new light on the influence that being part of a primary care team has on physicians' practice. Specifically, supporting physicians' access to other health professionals in the primary care setting is perceived to facilitate interprofessional collaboration in the care of patients with T2DM and improve patient care. abstract_id: PUBMED:35801256 The impact of family doctor system on patients' utilisation of general practitioner in primary care facilities-Evidence from Hangzhou, China. Objective: This study aims to examine whether participating in the contracted family doctor system increases patients' utilisation of primary care general practitioners for multiple disease outcomes in China. Methods: Binary logistic regression models were estimated using data collected from 372 community residents in nine selected districts of Hangzhou, China. Results: Findings revealed that (1) for patients with influenza, diabetes, upper respiratory infection, and gingivitis, those who participated in the contracted family doctor system were approximately 4.3 times, 98.4%, 92.5%, and 52.8% more likely, respectively, to choose primary care general practitioners (GPs) for their initial diagnosis, as compared with their counterparts who did not have contracted family doctors; (2) for patients with stroke or cerebrovascular disease and cholecystitis or cholelithiasis, those who had contracted family doctors were 1.111 times and 80.6% more likely, respectively, to choose primary care GPs for their subsequent disease maintenance, as compared to their counterparts without contracted family doctors. Conclusion: Our findings indicate that the contracted family doctor system not only increases the utilisation of primary care GPs for patients with many chronic conditions but also promotes the overall completion of China's hierarchical medical system in the long run. Policy implications were provided to help policymakers actively construct and develop the contracted family doctor system to promote the hierarchical medical system in China. Answer: The management of diabetes among family physicians and general internists appears to be a mix of opportunism and planned care.
While physicians report a preference for planned diabetes management, the reality is that many patients do not adhere to scheduled care, leading to opportunistic disease management as the default mode of care (PUBMED:11573720). During scheduled visits, physicians perform appropriate tests, but acute visits for conditions unrelated to diabetes make it difficult to perform diabetes care due to time constraints and patient agendas. Inadequate tracking of the completion of diabetes standards of care also influences adherence to guidelines (PUBMED:11573720). In Ghana, the contribution of family physicians to chronic disease management, including diabetes, through continuity of care has been highlighted. The establishment of a chronic care clinic by a family physician in Manna Mission Hospital has provided care for patients with chronic non-communicable diseases and has emphasized the importance of family physicians in managing the chronic disease burden (PUBMED:34931522). In Taiwan, the utilization of family physicians and their role in chronic disease management, such as diabetes, has room for improvement. Despite a significant portion of the population visiting family physicians, only about one-third of patients with essential hypertension, diabetes mellitus, and hyperlipidemia had visited family physicians for their care (PUBMED:30947595). In Edmonton, Alberta, a system-wide chronic disease management model was implemented to support people with chronic diseases and their primary care physicians. This model, which includes standardized protocols and consistent follow-up, has been successful in diabetes management and serves as a template for managing other chronic diseases (PUBMED:17626549). Overall, the management of diabetes by family physicians and general internists involves both opportunistic and planned care approaches, with efforts being made to improve the delivery of services and continuity of care for better chronic disease management.
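The Hangzhou findings above rest on binary logistic regression, and the "98.4% more likely" phrasing reads naturally as an odds ratio of about 1.984. A minimal sketch of that kind of analysis follows; the data frame and column names are invented for illustration, since the study's data (PUBMED:35801256) are not published.

```python
# Hedged sketch of a binary logistic regression as in PUBMED:35801256.
# All data and column names below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "chose_primary_gp":      [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "has_contracted_doctor": [1, 0, 1, 1, 0, 1, 1, 0, 0, 1],
})

X = sm.add_constant(df[["has_contracted_doctor"]])
fit = sm.Logit(df["chose_primary_gp"], X).fit(disp=0)

# exp(beta) converts a logit coefficient into an odds ratio; an odds
# ratio of 1.984 is the scale on which a "98.4% more likely" statement
# can be read, assuming odds ratios were the quantity being reported.
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs on the odds scale
```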
Instruction: Evaluation of the self-reported SDQ in a clinical setting: do self-reports tell us more than ratings by adult informants? Abstracts: abstract_id: PUBMED:15243782 Evaluation of the self-reported SDQ in a clinical setting: do self-reports tell us more than ratings by adult informants? Objectives: The aim of this study was to evaluate the German self-reported Strengths and Difficulties Questionnaire (SDQ) in a clinical setting. We also investigated whether this additional information gathered directly from older children and adolescents improves the prediction of clinical status when external ratings from their parents and/or teachers are already available. Methods: SDQ self-reports were collected from 214 in- and outpatients (81 girls and 133 boys) aged 11 to 17 years who were seen at the department of child and adolescent psychiatry of the University of Göttingen. Results obtained with the self-rated questionnaire were compared with the parent and teacher SDQs, corresponding CBCL/YSR scores, and the clinical diagnostic classification. Finally, the additional diagnostic benefits of the self-reports were examined. Results: The scales of the SDQ self-report proved to be sufficiently homogeneous, and acceptable correlations were found with the equivalent parent and teacher ratings. The self-rated version of the SDQ demonstrated good validity with respect to the differentiation between clinically defined cases and non-cases and in detecting various subcategories of psychiatric disorders within the clinic sample. SDQ self-reports significantly contributed to the prediction of diagnostic status, specifically if only parent or teacher ratings were available. Conclusions: The self-rated version of the SDQ was shown to be a reliable and valid method for the assessment of behavioural problems in children and adolescents. In the absence of adult informant reports from parents and teachers, the diagnostic value of self-ratings was also demonstrated. abstract_id: PUBMED:9501762 Self-report ratings and informants' ratings of personalities of depressed outpatients. Objective: This study sought to determine whether personality traits of depressed patients could be assessed similarly by informants and self-reports of the patients themselves. Method: Forty-six depressed outpatients completed the self-report (first-person) version of the Revised NEO Personality Inventory and nominated informants who knew them well to complete the third-person version of that instrument. Results: Agreement between the self-ratings and informants' ratings on the five factors of the inventory (neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness) was high. The only significant difference between the self-ratings and informants' ratings was on the extraversion scale, where the patients rated themselves as significantly more introverted than did the informants. Conclusions: Informants' ratings of personality are similar to self-report ratings of depressed patients. Depressed mood may not influence the self-report of personality traits. abstract_id: PUBMED:26334871 Do We Need Multiple Informants When Assessing Autistic Traits? The Degree of Report Bias on Offspring, Self, and Spouse Ratings. This study focused on the degree of report bias in assessing autistic traits. Both parents of 124 preschoolers completed the Social Communication Questionnaire and the Autism-spectrum Quotient.
Acceptable agreement existed between mother and father reports of children's mean scores of autistic traits, but interrater reliability for rank-order correlations was only fair. No evidence was found for report bias regarding parent-offspring autistic traits. However, adult autistic ratings were strongly biased: spouse-ratings were higher than self-ratings, correlations were only fair when both parents reported about the same person, and resemblance was higher for reports from the same person than for spouses' separate self-reports. It is advisable to involve multiple informants when assessing autistic traits, and to use procedural and/or statistical remedies to control for report bias. abstract_id: PUBMED:31114966 Depression-related distortions in maternal reports of child behaviour problems. Associations between maternal depression and child behaviour problems may be biased due to depression-related distortions in the maternal reports. In this study, we compared the agreement between maternal, teacher and child ratings of hyperactivity/inattention (H/I), conduct (CD) and emotional (EM) problems measured with the Strengths and Difficulties Questionnaire in a population with depressed and non-depressed mothers. The sample of 12,961 11-year-old children was drawn from the Danish National Birth Cohort. We also examined the association between maternal depression before and during the child's life and child H/I, CD and EM problems when reported by the different informants. Agreement between the informants was assessed with the Bland-Altman limits of agreement (LOA), and associations were analyzed using multiple linear regression models. We found large discrepancies between maternal and teacher ratings of child behavioural problems measured with the SDQ and less discrepancy between maternal and child self-ratings. Better agreement between informants was found in the population of depressed mothers compared to non-depressed mothers for H/I and CD problems, but not for EM symptoms. The LOA for all three subscales were wide in both populations, with the largest intervals ranging from -6 (lower LOA) to 3 (upper LOA) for H/I. Statistically significant associations were found between maternal depression and child H/I and CD problems when reported by the mother, but not the teacher, and by the child only when maternal depression had been present during the child's life. Our findings emphasize the importance of considering each informant's ratings when obtaining ratings from multiple sources. abstract_id: PUBMED:35222002 Divergence Between Informant and Self-Ratings of Activities of Daily Living Impairments in Parkinson's Disease. Objective: To examine the agreement between self- and informant-reported activities of daily living (ADL) deficits in Parkinson's Disease (PD) patients, and to examine factors influencing ADL ratings. Background: In PD, the loss of functional independence is an important outcome of disease progression. The valid assessment of ADL function in PD is essential, but it is unclear to what extent informants' and patients' perceptions of their daily functions concur, and how other factors may influence both ratings. Methods: Data from 150 PD patients who underwent cognitive and motor testing, as well as from their informants, were analyzed. The 10-item Functional Activities Questionnaire (FAQ), completed separately by patients (FAQ-S) and their informants (FAQ-I), assessed ADL function.
Weighted κ statistics summarized the level of agreement, and a discrepancy score (FAQ-I - FAQ-S) quantified the direction and size of disagreement. Correlation analyses between FAQ total scores, patient and informant characteristics, and cognitive scores were conducted, with post hoc regressions to determine the associations between both FAQ scores and cognition, independent of patient characteristics. Results: The sample included 87 patients with normal cognition, 50 with mild cognitive impairment, and 13 with dementia. Overall, there was fair to moderate agreement between patients and informants on individual FAQ items (0.27 ≤ κ ≤ 0.61, p < 0.004), with greater discrepancies with increasing cognitive impairment. Patients' age, motor severity, non-motor burden, and depression also affected both ratings (0.27 ≤ r ≤ 0.50, p < 0.001), with motor severity showing the greatest influence on both ratings. Both the FAQ-I and FAQ-S were correlated with almost all cognitive domains. Post hoc regression analyses controlling for patient characteristics showed that the attention domain was a significant predictor of both the FAQ-S and FAQ-I scores, and memory was also a significant predictor of the FAQ-I score. Only 29.3% of patients agreed perfectly with informants on the FAQ total score, with informants most commonly rating ADL impairments as more severe than patients. Conclusions: Patient and informant ratings of ADL function using FAQ items showed moderate agreement, with only a few items reaching substantial agreement. Ratings of both were associated with patient cognitive status, but also with other characteristics. In addition to patient and informant reports, objective measures are needed to accurately classify ADL deficits in PD. abstract_id: PUBMED:28915470 Emotional and behavioural problems in children with Developmental Coordination Disorder: Exploring parent and teacher reports. Background: Although characterised by motor impairments, children with Developmental Coordination Disorder (DCD) also show high rates of psychopathology (anxiety, depression, low self-esteem). Such findings have led to calls for the screening of mental health problems in this group. Aims: To investigate patterns and profiles of emotional and behavioural problems in children with and without DCD, using the Strengths and Difficulties Questionnaire (SDQ). Methods And Procedures: Teachers and parents completed SDQs for 30 children with DCD (7-10 years). Teacher ratings on the SDQ were also obtained from two typically-developing (TD) groups: 35 children matched for chronological age, and 29 younger children (4-7 years) matched by motor ability. Outcomes And Results: Group and individual analyses compared parent and teacher SDQ scores for children with DCD. Teacher reports showed that children with DCD displayed higher rates of emotional and behavioural problems (overall, and on each subscale of the SDQ) relative to their TD peers. No differences were observed between the two TD groups. Inspection of individual data points highlighted variability in the SDQ scores of the DCD group (across both teacher and parent ratings), with suggestions of elevated hyperactivity but comparably lower levels of conduct problems across this sample. Modest agreement was found between teacher and parent ratings of children with DCD on the SDQ. Conclusions And Implications: There is a need to monitor levels of emotional and behavioural problems in children with DCD, from multiple informants.
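The weighted κ and discrepancy-score approach used for the FAQ in PUBMED:35222002 is straightforward to reproduce. A minimal sketch follows, with invented item ratings; scikit-learn's cohen_kappa_score supplies the linear-weighted κ.

```python
# Hedged sketch of weighted kappa and discrepancy scoring; the 0-3 item
# ratings below are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

faq_s = np.array([0, 1, 2, 3, 1, 0, 2, 1])  # self-ratings for one FAQ item
faq_i = np.array([1, 1, 3, 3, 2, 0, 2, 2])  # informant ratings, same patients

# Linear weighting credits near-misses on the ordinal 0-3 scale.
kappa = cohen_kappa_score(faq_s, faq_i, weights="linear")

# Positive discrepancy = informant rates the impairment as more severe.
discrepancy = faq_i - faq_s
print(f"weighted kappa = {kappa:.2f}, mean discrepancy = {discrepancy.mean():.2f}")
```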
abstract_id: PUBMED:21928910 Agreement of informants on emotional and behavioral problems from childhood to adulthood. Agreement among informants' ratings of children's and adults' internalizing and externalizing problems is moderate. Each informant contributes unique information about an individual's problems. Thus, it has been advocated to obtain ratings from multiple sources in child psychiatry, whereas adult psychiatry relies mostly on self-reports. Longitudinal studies repeatedly assessing children's psychiatric problems from childhood into adulthood and including reports from multiple informants could serve as benchmarks for studies including only selected time points or informants. We examined the development of agreement among informants' ratings of internalizing and externalizing problems using self-, parent, teacher, and partner reports in a longitudinal study with 7 assessment waves spanning an interval of 24 years and covering an age range of 4 to 40 years. In total, 12,059 informant pairs rated 1,875 individuals. The results revealed, first, that correlations among informant ratings of internalizing and externalizing problems depend more on the informant pair than on problem type or age group. Second, differences among informants rating internalizing problems typically become larger as individuals get older. Third, when rating themselves, individuals typically report higher scores than do parents, teachers, or partners. These results were consistent for internalizing and externalizing problems and across age groups. The findings indicate that, as in child psychiatry, assessment in adult psychiatry may benefit from a shift to multiple informant reports, as different informants' ratings may contain more information than if informants completely agreed. abstract_id: PUBMED:32680325 Measurement Issues: Review of four patient reported outcome measures: SDQ, RCADS, C/ORS and GBO - their strengths and limitations for clinical use and service evaluation. Background: There is an international drive for routine use of Patient Reported Outcome Measures (PROMs) across all health services, including in relation to Child and Adolescent Mental Health Services (CAMHS). A number of reviews have summarized the validity and reliability of well-being and mental health measures for children, but there are fewer attempts to consider utility for routine use. Method: This review considers four child self-report measures: the Strengths and Difficulties Questionnaire (SDQ), the Revised Child Anxiety and Depression Scale (RCADS), the (Child) Outcomes Rating Scale (C/ORS) and Goals Based Outcomes (GBOs). It explores the strengths and limitations of each and considers how they can be used to support both clinical practice and service evaluation. Results: There is evidence for the clinical utility of the RCADS, C/ORS and GBOs, although the utility of the SDQ as a feedback measure remains unclear. For service evaluation, the SDQ has the greatest evidence for norms, making it useful for comparison, and there is evidence that the RCADS may be the most sensitive to change of the measures reviewed; the C/ORS has issues around ceiling effects, data error and data manipulation. More research is required around GBOs before their use for service evaluation can be determined. Conclusions: In summary, these different measures may be viewed as complementary tools, and determining the best way to make use of them severally and individually in clinical and community settings is a current focus for child mental health practitioners.
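The Bland-Altman limits of agreement used for informant comparisons in PUBMED:31114966 are just the mean paired difference plus or minus 1.96 standard deviations of that difference. A minimal sketch, with invented maternal and teacher SDQ subscale scores:

```python
# Hedged sketch of Bland-Altman limits of agreement; data are invented.
import numpy as np

mother = np.array([4, 6, 2, 5, 7, 3, 4, 6])   # maternal H/I ratings
teacher = np.array([2, 5, 3, 3, 4, 3, 2, 5])  # teacher H/I ratings

d = mother - teacher
mean_diff = d.mean()        # systematic bias between informants
sd_diff = d.std(ddof=1)

lower_loa = mean_diff - 1.96 * sd_diff
upper_loa = mean_diff + 1.96 * sd_diff
print(f"bias = {mean_diff:.2f}, 95% LOA = [{lower_loa:.2f}, {upper_loa:.2f}]")
```

Wide limits, as reported in the cohort, mean two informants can disagree by several points on the same child even when the average bias is small.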
abstract_id: PUBMED:31216327 The validity of the Strengths and Difficulties Questionnaire (SDQ) for children with ADHD symptoms. Background: The Strengths and Difficulties Questionnaire (SDQ) is widely used to assess child and adolescent mental health problems. However, the factor structure of the SDQ is subject to debate, and there is limited evidence investigating measurement equivalence/invariance (ME/I) between treatment groups, informants, and across time. Method: A randomised controlled trial (RCT) recruited 250 participants (6-17 years) who had been referred for an attention deficit hyperactivity disorder (ADHD) assessment. Participants and their clinician either received or did not receive a QbTest report (computer task measuring attention, impulsivity and activity). Parents and teachers completed the SDQ at baseline and 6 months later. This study aimed to understand the factor structure of the SDQ in a clinic-referred ADHD sample, and to validate the scale as a screening/diagnostic aid and as a measure of treatment outcome in both clinical and research settings. Exploratory Structural Equation Modelling (ESEM) was performed to examine the factor structure, and ME/I was assessed between treatment groups, informants, and time points. The criterion validity of the SDQ predictive algorithm for ADHD was compared with clinician and research diagnoses using logistic regression and tests of diagnostic accuracy. Results: A 5-factor structure provided the best fit, with strong factorial invariance between treatment groups and across time points, but not across informants (parent and teacher ratings). SDQ ratings of 'probable' hyperactivity disorder were good predictors of clinical (OR = 10.20, 95% CI 2.18-48.71, p = 0.003) and research diagnoses of ADHD (OR = 6.82, 95% CI 1.95-23.84, p = 0.003), and of research diagnoses of Hyperkinetic disorder (OR = 4.02, 95% CI 1.13-14.25, p = 0.031). Further examination of the SDQ hyperactivity 'probable' rating showed good specificity (84.5%-74.5%) but poor sensitivity (45.0%-42.5%) for ADHD. Conclusion: The findings indicate the SDQ is a valid outcome measure for use in RCTs and clinical settings. However, care should be taken when using the SDQ predictive algorithm to screen for ADHD in clinically referred samples. abstract_id: PUBMED:37787879 Systematic Integration of Multi-Informant Externalizing Ratings in Clinical Settings. Best practice clinical assessment of externalizing problems often necessitates collection of information from parents, youth themselves, and teachers. The present study tested the predictive validity of a psychometrically-driven scoring procedure to integrate multi-informant, dimensional ratings of externalizing problems. Participants were 2264 clinic-referred youth ages 6-18. Parents, teachers, and youth completed questionnaire ratings of externalizing problems (hyperactivity-inattention, conduct problems, and oppositionality-defiance) prior to an initial clinical appointment. The predictive validity of simple (highest informant rating; and all informant ratings separately) and more complex (latent S-1 bifactor model with specific informant factors; and moderated nonlinear factor analysis accounting for child age and sex) methods of informant integration was tested in predicting impairment, comorbidity, and number of clinical encounters.
A simple model, in which all informant ratings were included, showed the best predictive validity across outcomes, performing as well as or better than the use of the highest informant ratings or more complex latent variable models. The addition of child age and sex as moderators in the factor model did not improve predictive validity. Each informant (parent, teacher, and youth) contributes important information to the prediction of clinically relevant outcomes. There is insufficient evidence at present to suggest that complex latent variable models should be favored over simpler models that preserve each informant's ratings. Answer: The evaluation of the self-reported Strengths and Difficulties Questionnaire (SDQ) in a clinical setting indicates that self-reports can indeed provide valuable information that may not be captured by ratings from adult informants. The German clinical evaluation of the self-rated SDQ (PUBMED:15243782) found that it was reliable and valid for assessing behavioral problems in children and adolescents. It demonstrated good validity in differentiating between clinically defined cases and non-cases and in detecting various subcategories of psychiatric disorders within the clinic sample. Importantly, the self-reports significantly contributed to the prediction of diagnostic status, especially when only parent or teacher ratings were available. This suggests that self-reports can offer additional diagnostic benefits and should be considered in clinical assessments, particularly when adult informant reports are not available. Other studies have also highlighted the importance of considering self-reports alongside informant ratings. For example, the study on depressed outpatients (PUBMED:9501762) found high agreement between self-ratings and informants' ratings on personality traits, suggesting that self-reports can be as reliable as informant reports in certain contexts. However, the study on autistic traits (PUBMED:26334871) showed that adult autistic ratings were strongly biased, with spouse-ratings being higher than self-ratings, indicating the need for multiple informants and methods to control for report bias. In the context of maternal depression (PUBMED:31114966), discrepancies were found between maternal and teacher ratings of child behavioral problems, with less discrepancy between maternal and child self-ratings. This underscores the importance of considering each informant's perspective. Similarly, in Parkinson's Disease (PUBMED:35222002), there was only fair to moderate agreement between self- and informant-reported activities of daily living, with informants often rating impairments as more severe than patients. Overall, these studies suggest that self-reports can provide unique and valuable information that may not be fully captured by adult informants. Therefore, in a clinical setting, it is beneficial to include self-reports in the assessment process to gain a more comprehensive understanding of the individual's difficulties (PUBMED:15243782).
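The diagnostic-accuracy arithmetic behind the SDQ screening results in PUBMED:31216327 is a 2x2 table exercise. A minimal sketch, with invented counts chosen to echo the reported pattern of good specificity but poor sensitivity:

```python
# Hedged sketch of screening accuracy from a 2x2 table; counts invented.
def screen_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and diagnostic odds ratio for a screen
    (e.g. SDQ 'probable' rating) against a reference diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),   # share of true cases the screen catches
        "specificity": tn / (tn + fp),   # share of non-cases it correctly clears
        "odds_ratio": (tp * tn) / (fp * fn),
    }

# Roughly 45% sensitivity and 84% specificity, echoing the reported pattern.
print(screen_accuracy(tp=45, fp=16, fn=55, tn=84))
```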
Instruction: Ionizing radiation exposure in patients with inflammatory bowel disease: are we overexposing our patients? Abstracts: abstract_id: PUBMED:31560037 Utilization of Diagnostic Imaging and Ionizing Radiation Exposure-Has the Tide Already Turned? Inflammatory bowel diseases require repetitive abdominal imaging for diagnosis, follow-up, or management of complications. This editorial comments on the large incident-based case-control study on utilization of diagnostic imaging and ionizing radiation exposure by Nguyen and colleagues published in this issue. abstract_id: PUBMED:31560042 Utilization of Diagnostic Imaging and Ionizing Radiation Exposure Among an Inflammatory Bowel Disease Inception Cohort. Background: Abdominal imaging is important in managing inflammatory bowel disease (IBD). We characterized utilization of imaging and exposure to ionizing radiation. Methods: We enumerated abdominal diagnostic imaging in a population-based cohort of IBD patients between 1994 and 2016. Trends in utilization of abdominal computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound were characterized. Cumulative doses of ionizing radiation were compared between IBD patients and non-IBD controls and between Crohn's disease (CD) and ulcerative colitis (UC) patients. Regression models were constructed to assess predictors of high ionizing radiation exposure. Results: There were 72,933 incident cases of IBD. During the first 5 years after diagnosis, IBD patients received nearly 6-fold higher cumulative ionizing radiation exposure attributable to abdominal imaging compared with non-IBD controls (18.6 mSv vs 2.9 mSv). Cumulative ionizing radiation exposure was higher in CD than UC (26.0 mSv vs 12.0 mSv; P < 0.001). Crohn's disease patients were more than twice as likely as UC patients to exceed 50 mSv (15.6% vs 6.2%; P < 0.001) and 100 mSv (5.0% vs 2.1%; P < 0.001). There was geographic variation in ionizing radiation exposure, and individuals of lower income were more likely to have high exposure. Utilization of abdominal MRI has increased substantially, peaking between 2007 and 2012 and increasing annually at 34%, which coincided with an annual 2% decline in the use of abdominal CT. Conclusions: Crohn's disease patients are at highest risk for high exposure to ionizing radiation, with a subgroup receiving potentially harmful levels. Increasing utilization of and access to abdominal MRI may alleviate exposure. abstract_id: PUBMED:18564113 Diagnostic ionizing radiation exposure in a population-based cohort of patients with inflammatory bowel disease. Objective: For diagnosis, assessing disease activity, complications and extraintestinal manifestations, and monitoring response to therapy, patients with inflammatory bowel disease undergo many radiological studies employing ionizing radiation. However, the extent of radiation exposure in these patients is unknown. Methods: A population-based inception cohort of 215 patients with inflammatory bowel disease from Olmsted County, Minnesota, diagnosed between 1990 and 2001, was identified. The total effective dose of diagnostic ionizing radiation was estimated for each patient. Linear regression was used to assess the median total effective dose since symptom onset. Results: The number of patients with Crohn's disease and ulcerative colitis was 103 and 112, with a mean age at diagnosis of 38.6 and 39.4 yr, respectively. Mean follow-up was 8.9 yr for Crohn's disease and 9.0 yr for ulcerative colitis.
Median total effective dose for Crohn's disease was 26.6 millisieverts (mSv) (range, 0-279) versus 10.5 mSv (range, 0-251) for ulcerative colitis (P < 0.001). Computed tomography accounted for 51% and 40% of the total effective dose, respectively. Patients with Crohn's disease had a 2.46 times higher total effective dose than ulcerative colitis patients (P = 0.001), adjusting for duration of disease. Conclusions: Annualizing our data, the radiation exposure in the inflammatory bowel disease population was equivalent to the average annual background radiation dose from naturally occurring sources in the U.S. (3.0 mSv). However, a subset of patients had substantially higher doses. The development of imaging management guidelines to minimize radiation dose, dose-reduction techniques in computed tomography, and faster, more robust magnetic resonance techniques are warranted. abstract_id: PUBMED:20679907 Patients with IBD are exposed to high levels of ionizing radiation through CT scan diagnostic imaging: a five-year study. Goals: The objective of this study was to assess the total effective dose of ionizing radiation from abdominal diagnostic imaging in patients with inflammatory bowel disease (IBD) over a 5-year period. Background: Radiation exposure from diagnostic imaging is becoming increasingly common in IBD patients, in part due to the availability of computed tomography (CT). Increased risk of malignancy has been associated with radiation exposure. Study: This is a retrospective chart review. A university-based gastroenterology database was searched for patients with a diagnosis of Crohn's disease (CD) or ulcerative colitis (UC) seen between 2003 and 2008. The cumulative ionizing radiation exposure, expressed in millisieverts (mSv), was then calculated from standard tables and by counting the number of abdominal imaging studies. Results: Patients with CD had higher cumulative radiation exposure from diagnostic imaging than patients with UC (14.3 ± 1.45 mSv/5-y period vs. 5.9 ± 0.81 mSv/5-y period, P = 0.00003). Three-quarters of the radiation exposure in both CD and UC was from CT scans. Thirty-four percent (127 of 373) of CD patients had CT scans, compared with just 20% (37 of 182) of UC patients. Importantly, 7% of CD patients were exposed to high levels of radiation (>50 mSv/5 y), in contrast to none of the UC patients. Conclusions: Patients with IBD, and especially CD patients, undergo frequent diagnostic imaging and thus significant exposure to ionizing radiation. This radiation exposure reaches high levels in 7% of CD patients, mainly from CT scanning. Efforts should be made to minimize the radiation exposure from diagnostic imaging by reducing either the number of studies or the radiation dose in modalities with ionizing radiation. abstract_id: PUBMED:25420751 Ionizing radiation exposure in patients with inflammatory bowel disease: are we overexposing our patients? Objective: Imaging techniques are accurate and reliable in diagnosing inflammatory bowel disease (IBD). However, the main disadvantage of computed tomography (CT) compared with magnetic resonance imaging (MRI) is radiation exposure and the potential risk of cancer, especially since IBD patients are at increased risk of malignancies. This study aims to quantify and characterize the effective radiation exposure of IBD patients. Methods: A cohort of IBD patients was retrospectively enrolled in the Clínica las Condes IBD registry between 2011 and 2013. A high cumulative effective dose (CED) was defined as ≥50 mSv.
Results: A total of 325 IBD patients were enrolled in our registry, including 243 (74.8%) with UC and 82 (25.2%) with CD. Patients with CD were more likely to reach a high CED than those with UC (19.5% vs 2.5%). Higher exposure to radiation was associated with longer duration of disease, ileal involvement, stricturing behavior, treatment with steroids and biological agents, and CD-related hospitalization or surgery. Abdominopelvic CT and CT enteroclysis accounted for 93.6% of the total CED. Conclusions: A high percentage of IBD patients are exposed to a high CED. Radiation-free cross-sectional examinations, such as MRI, should be used, especially in young patients, those who have undergone prior surgery, and those with severe IBD. abstract_id: PUBMED:27832710 Radiation exposure in patients with inflammatory bowel disease and irritable bowel syndrome in the years 2001-2011. Objectives: To compare cumulative ionizing radiation in patients with inflammatory bowel disease (IBD) and irritable bowel syndrome (IBS) for the years 2001-2011, to study how radiation exposure changed over time in patients with newly diagnosed IBD, and to identify factors associated with radiation exposure. Material And Methods: All radiological investigations performed between 1 January 2001 and 31 December 2011 were retrospectively recorded in patients with Crohn's disease (CD) (n = 103), ulcerative colitis (UC) (n = 304) and IBS (n = 149). Analyses were done with the Mann-Whitney and Chi-square tests. Results: The median total cumulative radiation exposure in mSv for CD (20.0, interquartile range (IQR) 34.8), UC (7.01, IQR 23.8) and IBS (2.71, IQR 9.15), and the proportion of patients who had been exposed to more than 50 mSv during the study period (CD 19%, UC 11%, IBS 3%), were significantly higher in the patients with CD compared to patients with UC (p < .001) and IBS (p < .001), respectively. In turn, patients with UC had significantly higher doses than patients with IBS (p = .005). Risk factors for radiation exposure were female gender (CD), early onset (UC), ileocolonic location (CD), previous surgery (CD and UC), depression (IBS) and widespread pain (IBS). In newly diagnosed CD, there was a significant decline in the median cumulative radiation dose in mSv (17.2 vs. 12.0; p = .048) during the study period. Conclusions: Patients with CD are at greatest risk for high cumulative radiation exposure, but there was a decline in exposure during the late 2000s. Non-colectomized patients with UC and patients with IBS have a relatively low risk of cumulative radiation exposure. abstract_id: PUBMED:23702806 Quantifying exposure to diagnostic radiation and factors associated with exposure to high levels of radiation in Korean patients with inflammatory bowel disease. Background: Several recent studies have revealed that diagnostic imaging can result in exposure to potentially harmful levels of ionizing radiation in patients with inflammatory bowel disease (IBD). However, these studies have been conducted exclusively in Western countries, and no data are available in Asian populations. The aim of this study was to estimate the diagnostic radiation exposure in Korean patients with IBD and to determine the factors associated with high radiation exposure. Methods: Patients with an established diagnosis of IBD between July 1987 and January 2012 were investigated in 13 university hospitals in Korea. The cumulative effective dose (CED) was calculated retrospectively from standard tables.
Results: A total of 777 patients with Crohn's disease (CD) and 1422 patients with ulcerative colitis (UC) were included in the study. The mean CED was 53.6 mSv for CD and 16.4 mSv for UC (P < 0.001). CT accounted for 81.6% and 71.2% of the total effective dose in CD and UC, respectively. Importantly, 34.7% of patients with CD and 8.4% of patients with UC were exposed to high levels of radiation (CED > 50 mSv) (P < 0.001). High radiation exposure was associated with long disease duration, ileocolonic disease, upper gastrointestinal tract involvement, surgical intervention, hospitalization, and the requirement for oral steroids in CD, and with surgical intervention, hospitalization, and the requirement for infliximab in UC. Conclusions: A substantial proportion of patients with IBD, especially patients with CD, were exposed to significantly harmful amounts of diagnostic radiation, mainly as a result of CT examination. Given that IBD is a life-long illness, strategies to reduce radiation exposure from diagnostic imaging need to be considered. abstract_id: PUBMED:22827741 Radiation exposure in patients with inflammatory bowel disease - primum non nocere? Background: Patients with Inflammatory Bowel disease (IBD) are frequently exposed to diagnostic medical radiation for the diagnosis and evaluation of their disease. Despite increasing awareness of the potentially serious downstream effects, few studies have quantified radiation exposure in IBD patients. Methods: We conducted a retrospective review of 280 consecutive patients attending IBD clinics at our hospital. All incidences of diagnostic medical radiation from initial diagnosis until 30 June 2010 inclusive were recorded. The radiation dose for each procedure was obtained from standardized tables, and the effective dose for each procedure was calculated by multiplying this value by the number of procedures during the study period. The sum of doses received was the cumulative effective dose. Results: The mean and median cumulative effective radiation doses were 10.17 mSv and 4.12 mSv, respectively. Crohn's disease patients had significantly higher cumulative effective radiation exposure than UC patients (p < 0.001), with exposure exceeding 10 mSv in 58.3%, 25 mSv in 18.1% and 50 mSv in 6.3%, respectively. Smoking status, disease duration, and previous surgery were significant predictors of increased radiation exposure even after adjusting for other predictors. 47 small bowel magnetic resonance (MR) studies were undertaken as an alternative to ionising radiation in the last 3 years. Conclusions: Patients with IBD, particularly those with Crohn's disease, are exposed to significant amounts of diagnostic medical radiation in their lifetime. Clinicians must remain vigilant to the risk of cumulative radiation when evaluating these patients and consider non-ionizing alternatives such as intestinal ultrasound and MR imaging where clinically appropriate. abstract_id: PUBMED:26814500 Risk factors for radiation exposure in newly diagnosed IBD patients. Purpose: Patients with inflammatory bowel disease (IBD) may be exposed to high doses of diagnostic radiation. The purpose of this study is to identify subsets of this population at risk for significant radiation exposure. Methods: This HIPAA-compliant, IRB-approved study consists of 336 patients (237 adult and 99 pediatric) within the Ocean State Crohn's & Colitis Area Registry (OSCCAR). All were newly diagnosed with IBD and prospectively enrolled between 1/2008 and 12/2012.
Comprehensive chart review was performed. Results: 207 (61.6%) patients were diagnosed with Crohn's disease (CD), 120 (35.7%) with ulcerative colitis (UC), and 9 (2.7%) with inflammatory bowel disease, type unspecified (IBDU). 192 (57.1%) patients were exposed to GI-specific radiation. The average GI-specific radiation dose for adult IBD patients was 14.1 mSv and was significantly greater among adult CD than adult UC patients (p = 0.01). Pediatric patients underwent fewer CT scans (p < 0.0001). Risk factors for increased radiation exposure include: GI surgery (p = 0.003), biologic therapy (p = 0.01), pain-predominant symptoms (as compared to diarrhea-predominant symptoms; p < 0.05), and isolated ileal disease (p = 0.02). Patients with stricturing or penetrating disease received higher radiation doses than patients with non-stricturing, non-penetrating disease (p < 0.0001). Conclusions: A variety of risk factors are associated with increased exposure to ionizing radiation after diagnosis of IBD. Knowledge of these risk factors can help physicians prospectively identify patients at risk for elevated radiation exposure and consider low-dose or radiation-free imaging. abstract_id: PUBMED:26122246 Use of Imaging Techniques in Inflammatory Bowel Diseases That Minimize Radiation Exposure. The use of imaging in the management of inflammatory bowel disease (IBD) has grown exponentially in the last few years. This has raised concerns about the risk of a high cumulative level of radiation exposure from medical imaging in IBD patients during their lifetime, especially when the disease begins at a pediatric age. Physicians caring for IBD children should be aware of the malignant potential of ionizing radiation and of the availability of alternative radiation-free techniques such as magnetic resonance imaging (MRI) and ultrasonography (US), in order to use them whenever possible. This review will focus on the value of US and MRI in pediatric IBD. Answer: The collection of abstracts provided indicates that patients with inflammatory bowel disease (IBD), particularly those with Crohn's disease (CD), are indeed exposed to high levels of ionizing radiation, primarily due to diagnostic imaging such as computed tomography (CT) scans. Studies have shown that IBD patients, especially those with CD, are exposed to significantly higher cumulative ionizing radiation doses compared to non-IBD controls and those with ulcerative colitis (UC) (PUBMED:31560042, PUBMED:18564113, PUBMED:20679907, PUBMED:25420751, PUBMED:27832710, PUBMED:23702806, PUBMED:22827741, PUBMED:26814500). For instance, during the first five years after diagnosis, IBD patients received nearly six-fold higher cumulative ionizing radiation attributable to abdominal imaging compared with non-IBD controls (18.6 mSv vs 2.9 mSv), with CD patients having higher exposure than UC patients (26.0 mSv vs 12.0 mSv) (PUBMED:31560042). Another study found that the median total effective dose for CD was 26.6 millisieverts (mSv) versus 10.5 mSv for UC (PUBMED:18564113). Additionally, a significant proportion of CD patients were exposed to high levels of radiation (>50 mSv/5 y) (PUBMED:20679907). The high exposure to ionizing radiation is concerning due to the potential risk of malignancy associated with radiation exposure. As a result, there is a call for the development of imaging management guidelines to minimize radiation dose, dose-reduction techniques in CT, and the use of faster, more robust magnetic resonance techniques (PUBMED:18564113).
The increasing utilization and access to abdominal MRI, which does not involve ionizing radiation, may help alleviate exposure (PUBMED:31560042). Moreover, radiation-free cross-sectional examinations, such as MRI, are recommended, especially for young patients, those who have undergone prior surgery, and those with severe IBD (PUBMED:25420751). In summary, the evidence suggests that patients with IBD are at risk of overexposure to ionizing radiation due to frequent diagnostic imaging, and efforts should be made to minimize this exposure by considering alternative imaging modalities that do not involve ionizing radiation.
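The cumulative effective dose bookkeeping described in several of these abstracts is straightforward arithmetic: a per-procedure effective dose taken from standardized tables, multiplied by the number of procedures and summed. A minimal sketch of that calculation follows; the procedure names and dose values are illustrative placeholders, not clinical reference values:

```python
# Illustrative cumulative effective dose (CED) calculation, following the
# bookkeeping described above: per-procedure effective doses come from
# standardized tables, and the CED is the dose-weighted sum of procedure
# counts. All dose values below are placeholders, not clinical references.

EFFECTIVE_DOSE_MSV = {            # hypothetical lookup table (mSv per procedure)
    "abdominal_xray": 0.7,
    "abdominopelvic_ct": 10.0,
    "small_bowel_series": 3.0,
    "mri_enterography": 0.0,      # MRI involves no ionizing radiation
}

def cumulative_effective_dose(procedure_counts):
    """Sum effective dose over all recorded procedures for one patient."""
    return sum(EFFECTIVE_DOSE_MSV[name] * count
               for name, count in procedure_counts.items())

patient = {"abdominopelvic_ct": 3, "abdominal_xray": 5, "mri_enterography": 2}
ced = cumulative_effective_dose(patient)
print(f"CED = {ced:.1f} mSv; high exposure (>50 mSv): {ced > 50}")
```

The >50 mSv threshold that several of the studies above use to define high exposure then reduces to a single comparison on the summed dose.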
Instruction: Is there a place for medical students as teachers in the education of junior residents? Abstracts: abstract_id: PUBMED:34886136 Effect of Residents-as-Teachers in Rural Community-Based Medical Education on the Learning of Medical Students and Residents: A Thematic Analysis. Residents-as-teachers (RaT) is a theoretical framework emphasizing the significance of the similar learning background of teachers and learners. In Japan, community-based medical education (CBME) is a practical approach to teaching family medicine. This study aimed to investigate the impact and challenges of RaT on the learning of medical students and residents in CBME at a rural community hospital in Japan. Over the course of a year, the researchers conducted one-on-one interviews with three residents and ten medical students participating in family medicine training at the hospital. The interviews were recorded and transcribed verbatim. Grounded theory was used in the data analysis to clarify the findings. Three key themes emerged from the research: lack of educational experience with RaT, effectiveness of RaT, and challenges of RaT. Although participants were prejudiced against RaT, they felt its implementation could facilitate the establishment of beneficial relationships between learners and teachers. They were also able to participate in medical teams effectively. The findings suggest that the increased participation of senior doctors in RaT could strengthen its learning effects. RaT in rural CBME should be applied in various contexts, and its effectiveness should be further investigated both qualitatively and quantitatively. abstract_id: PUBMED:24468027 Is there a place for medical students as teachers in the education of junior residents? Background: The aim of this study was to investigate a novel resident education model that turns the traditional surgical hierarchy upside down, termed a "reverse" peer-assisted learning curriculum. Methods: Thirty surgical topics were randomized between medical students and chief residents on each clinical team, with 1 topic being presented briefly during morning rounds. An exam evaluating junior residents' knowledge of these topics was administered before and after 1 month of presentations. A questionnaire was distributed to evaluate the junior residents' perceptions of this teaching model. Results: Thirty-four residents participated. There was a significant improvement in the mean examination score (54 vs 74, P < .05). No significant difference was noted in the mean score differentials of topics presented by either the medical students or the chief resident (21 vs 18, P = .22). More than 80% of the residents responded positively about the effectiveness of this exercise and agreed that they would like to see this model used on other services. Conclusions: This study confirms the hypothesis that medical students can teach surgical topics to junior residents at least as effectively as their chief residents. abstract_id: PUBMED:33290976 Radiology residents from learners to teachers: A multi-centric study. Purpose: The teaching role of radiology residents has seldom been evaluated, and little is known about how teaching skills of radiology residents evolve throughout their training in the absence of formal teaching guidance. Our objective is to identify residents' characteristics correlating with better teaching and compare teaching characteristics of junior and senior residents.
Method: All seven medical schools in Lebanon were involved in this multi-centric study. A self-assessment questionnaire was sent to sixty-nine radiology residents and completed anonymously. Fifty-seven (83%) responses were received and represent the study population. Data analysis was based on factors correlating with overall teaching effectiveness. A comparison of senior and junior residents' responses was also performed. Results: Overall teaching effectiveness correlated with better knowledge, technical skills, clinical judgment, communication skills, identification and correction of learning problems, and, importantly, providing and receiving feedback. Senior residents rated themselves significantly better than juniors in "knowledge related to radiology" and in "technical skills" but not in "overall teaching effectiveness". Seniors did not agree with the statement "medical students make my clinical responsibilities easier". Although not reaching statistical significance, seniors showed a trend towards improving teaching skills. Conclusions: Overall teaching effectiveness is correlated with the adoption of proper skills and techniques. Despite the lack of formal teaching guidance, senior radiology residents are improving their techniques as teachers but still face difficulties in several domains. abstract_id: PUBMED:33848841 Effect of "Residents as Teachers" Workshop on Learner Perception of Trainee Teaching Skill. Background: Surgical residencies use variable structures for formal training in education. We hypothesized that a one-day workshop intervention would improve resident teaching ability measured by self-assessment and learner evaluation. Materials And Methods: Faculty educators delivered a Residents as Teachers (RAT) workshop to general surgery residents on setting expectations, positive learning environment, difficult feedback and the 1-min preceptor model. For three months before and after the workshop, junior residents and medical students evaluated their supervising residents' teaching skill monthly using a Likert scale questionnaire. Pre- and post-workshop surveys were administered to resident participants to assess their knowledge of the material and teaching confidence. Results were analyzed using Wilcoxon rank sum tests. This study was conducted at a tertiary academic center with a large surgical residency program. Results: Thirty-nine PGY 1-5 residents participated in the Residents as Teachers workshop and were included in the study. Pre- and post-workshop survey results demonstrated significant improvements in participants' knowledge and teaching confidence. On monthly assessments of seniors by junior residents, significant improvements were noted in three domains. Medical student ratings did not reflect significant improvements in resident teaching skill. Conclusions: This is the first study using learner evaluation of a comprehensive surgical RAT program. Despite a significant increase in surgery residents' self-assessment following participation in an education workshop, no improvement was seen in resident teaching skill as perceived by medical students.
We evaluated the impact of resident-led residents-as-teachers (RAT) workshops (RATW) and assessed adaptability from in-person to virtual delivery. We hypothesized these courses would improve trainees' confidence in their roles as resident-teachers. Methods: Pre-COVID-19, an in-person workshop for residents (PGY1-5) was conducted over two days. During the COVID-19 pandemic, a virtual RATW for incoming interns (PGY1) was conducted during intern boot camp. Topic fidelity was preserved between the two RATWs. Resident-educators were responsible for content and delivery; the program director and associate program directors served as facilitators only. Surveys were used to evaluate residents' confidence in four core topics. A Wilcoxon test was used to compare quantitative data. Results: There was significant improvement in confidence in all areas following RATW attendance, except for "Teaching in the OR". In sub-analysis, there was a significant improvement in this category among incoming interns post-RATW (P < 0.001). The majority of interns agreed that the RATW helped them transition into their new teaching role and agreed that the resident-led RATW was effective. Conclusions: A resident-designed and resident-led RAT curriculum in general surgery effectively improves residents' confidence in teaching and is well received by residents. We recommend the implementation of a RAT curriculum in general surgery residency and intern boot camp. The RATW was well adapted to a distance-learning format. abstract_id: PUBMED:35527309 Residents as teachers in Neurology: a Germany-wide survey on the involvement of neurological residents in clinical teaching. Background: Residents play an important role in the clinical training of medical students, spending up to 25% of their daily work teaching. In the US medical curriculum, didactic courses for residents already exist and their role as teachers is firmly anchored. In Germany, there are no fixed regulations or residents-as-teachers programs. The aim of this study was to evaluate the activities of neurological residents in clinical teaching. Methods: We conducted a prospective cross-sectional online survey among neurological residents in Germany. The evaluation was carried out descriptively and by means of text analysis. Results: 138 residents from 39 German neurological university hospitals answered the survey. Nearly half of them needed the teaching activity as part of their career planning. The residents are mostly involved in practical courses. More than 80% stated that they enjoy teaching. 64% stated that there were no preparatory courses for teaching at their hospital/university. 78.4% of the respondents received no or merely insufficient feedback for their own teaching and 62.5% had only little or even no knowledge about the university curriculum. Conclusions: By teaching medical students, residents play an outstanding role in recruiting students for neurology and, simultaneously, teaching leads to an improvement in the residents' own learning. To encourage young neurologists as teachers and, at the same time, as learners, clinic directors and universities should promote residents-as-teachers programs in neurology and reward the residents' teaching activities. abstract_id: PUBMED:38188414 Teaching Future Teachers: Development of a Students-as-Teachers Advanced Elective for Post-Clerkship Medical Students.
Although teaching is an essential part of Graduate Medical Education and the practice of medicine, it is not often formally taught in Undergraduate Medical Education. Vanderbilt University School of Medicine (VUSM) has addressed this gap by creating a year-long Students-as-Teachers elective for post-clerkship medical students. The elective utilizes diverse activities to emphasize core tenets of medical education theory while simultaneously providing authentic teaching opportunities. The success of the elective is primarily attributable to its structure and design, the collaborative and supportive medical education environment, and student initiative and engagement. This manuscript describes the implementation and outcomes of the Students-as-Teachers elective. abstract_id: PUBMED:29637035 Curated Collections for Educators: Five Key Papers about Residents as Teachers Curriculum Development. The Accreditation Council for Graduate Medical Education (ACGME) requires residency programs to prepare residents to teach and assess medical students and other learners. In order to achieve this, many programs develop formal residents as teachers (RAT) curricula. Medical educators may seek the guidance of previously published literature during the development of RAT programs at their institutions. The authors sought to identify key articles published on the subject of RAT programs over the last 10 years. The authors utilized a formal literature search with the help of a medical librarian and identified additional articles from virtual discussions among the author group and an open call for articles on Twitter using the hashtag #MedEd. Virtual discussions occurred within an online community of practice, the Academic Life in Emergency Medicine (ALiEM) Faculty Incubator. The lead author conducted a four-round modified Delphi process among the author group in order to narrow the broad article list to five key articles on RAT programs. The authors summarize each article and provide considerations for junior faculty as well as faculty developers. Curriculum development and program evaluation should utilize established frameworks and evidence-based approaches. The papers identified by this Delphi process will help faculty use best practices when creating or revising a RAT curriculum. In addition, faculty tasked with guiding junior faculty in this process or creating faculty development programs around curriculum development will find these articles to be a great resource for building content. abstract_id: PUBMED:33886424 Twelve tips for fostering the next generation of medical teachers. Medical professionals with a special interest in and focus on education are essential to provide good quality education. Despite high numbers of students expressing an interest in teaching, concerns are rising regarding the supply of medical teachers, with few junior educators on the career ladder. To date, only some medical schools offer in-depth courses to students wanting to explore or aspire to a career as a specialised medical teacher. We propose twelve tips for an elective course to foster the next generation of medical teachers. This course aims to enhance theoretical foundations and educational practices to cultivate the next generation of medical teachers. abstract_id: PUBMED:26826798 Literature review of teaching skills programs for junior medical officers. Objective: The aim of this review was to assess the current evidence regarding the efficacy of teaching skills programs for junior medical officers.
We aimed to compare and contrast these results with findings from previous literature reviews, the last of which were published in 2009. Methods: In order to capture studies since the last published literature reviews, five databases and grey literature were searched for publications from January 2008 to January 2015. A search for literature reviews without using the timeframe limitation was also performed. Results: The search from January 2008 to January 2015 resulted in the inclusion of 12 studies. Five systematic reviews of the topic were found which included 39 individual studies that were also analysed. Nearly all studies reported positive effects. Twenty-nine studies reported change in attitudes, 28 reported modification in knowledge, 28 reported change in behaviour, 6 reported change in the organisation and 2 reported change in program participants' students. There were substantial threats of bias present. Conclusions: The literature reviewed demonstrated many positive effects of teaching skills programs, which supports their utilization. However, high-level outcomes need to be evaluated over longer periods of time to establish their true impact. An organisation-specific approach to these programs needs to occur using sound course design principles, and they need to be reported in evaluation trials that are designed with robust methodology. Answer: Yes, there is a place for medical students as teachers in the education of junior residents. A study investigating a "reverse" peer-assisted learning curriculum, where medical students taught surgical topics to junior residents, found that medical students could teach these topics at least as effectively as their chief residents. The junior residents showed significant improvement in their examination scores after the presentations by medical students, and more than 80% of the residents responded positively about the effectiveness of this teaching model, expressing interest in its use in other services (PUBMED:24468027). This suggests that medical students can play a valuable role in the education of junior residents, contributing to their learning and potentially enhancing the educational environment.
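Several of the studies in this set analyze paired pre/post scores with Wilcoxon tests (e.g., the workshop studies above). As a rough sketch of that analytic step, with invented numbers rather than data from any cited study:

```python
# Hypothetical pre/post comparison of the kind reported above, using SciPy's
# Wilcoxon signed-rank test on paired scores (e.g., exam scores or Likert
# confidence ratings before and after a teaching intervention).
from scipy import stats

pre = [52, 48, 60, 55, 49, 58, 51, 62, 47, 56]    # invented baseline scores
post = [71, 69, 80, 74, 70, 77, 73, 82, 68, 75]   # invented follow-up scores

stat, p = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p:.4f}")

# For two independent groups (e.g., workshop participants vs non-participants),
# stats.mannwhitneyu(group_a, group_b) is the analogous rank-sum test.
```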
Instruction: Expectations of iPad use in an internal medicine residency program: is it worth the "hype"? Abstracts: abstract_id: PUBMED:23656727 Expectations of iPad use in an internal medicine residency program: is it worth the "hype"? Background: While early reports highlight the benefits of tablet computing in hospitals, introducing any new technology can result in inflated expectations. Objective: The aim of the study is to compare anticipated expectations of Apple iPad use and perceptions after deployment among residents. Methods: 115 internal medicine residents received Apple iPads in October 2010. Residents completed matched surveys on anticipated usage and perceptions after distribution 1 month prior and 4 months after deployment. Results: In total, 99% (114/115) of residents responded. Prior to deployment, most residents believed that the iPad would improve patient care and efficiency on the wards; however, fewer residents "strongly agreed" after deployment (34% vs 15% for patient care, P<.001; 41% vs 24% for efficiency, P=.005). Residents with higher expectations were more likely to report using the iPad for placing orders post call and during admission (71% vs 44% post call, P=.01, and 16% vs 0% admission, P=.04). Previous Apple iOS product owners were also more likely to use the iPad in key areas. Overall, 84% of residents thought the iPad was a good investment for the residency program, and over half of residents (58%) reported that patients commented on the iPad in a positive way. Conclusions: While the use of tablets such as the iPad by residents is generally well received, high initial expectations highlight the danger of implementing new technologies. Education on the realistic expectations of iPad benefits may be warranted. abstract_id: PUBMED:38249177 Perceived Preparedness of Internal Medicine Interns for Residency and the Value of Transition to Residency Courses. The popularity of Transition to Residency (TTR) courses has been increasing in U.S. medical schools. Yet there are limited data on interns' perceived preparation for residency and the value of nonsurgical TTR courses and their common components. Research has shown that TTR courses increase medical knowledge, clinical skills, and confidence in participants, but incoming residents still do not feel prepared for the start of residency. Currently, there are only a few single-institution studies researching interns' perceived value of TTR courses. This quantitative study surveyed internal medicine interns at five residency programs to assess their preparation for residency, their preparedness in TTR competencies, and the perceived value of common TTR components. Data show that IM interns still feel unprepared for residency. The data also indicate that individuals who engage in TTR courses feel more prepared in TTR competencies compared to those who do not participate in TTR courses. Finally, internal medicine interns found the common TTR components valuable. abstract_id: PUBMED:7653490 International health and internal medicine residency training: the Duke University experience. Objective: To evaluate the impact of the Duke University Medicine Residency International Health Program (IHP) on program participants and to evaluate the relationship of the IHP to the residency program. Subjects And Methods: The Duke University Medicine Residency Program classes of 1988 to 1996 participated in a questionnaire-based survey.
The survey was sent to all program participants (n = 59), a group of nonparticipants (n = 138), and residents who had not yet had an opportunity to participate (preparticipants; n = 106). Results: The overall response rate to the questionnaire was 93%. Participation exceeded expectations and had a strongly positive impact on the personal and professional lives of the majority of the participants. Participants reported a significant positive impact on their training in internal medicine and their knowledge of tropical medicine. A minority of nonparticipants identified a positive effect in these areas due to conferences and interactions with their participating colleagues. Participants who changed career plans during residency tended to move toward areas of general internal medicine or public health, in contrast to nonparticipants who tended to change areas of subspecialty or chose private practice. The IHP was identified as a significant factor for selection of the Duke Medicine Residency by 42% of the preparticipant group. Nearly all of the respondents (99%) indicated that the IHP should be continued. Conclusion: The IHP has a measurable positive impact on the participants, as well as on the Medicine Residency Program. abstract_id: PUBMED:20230607 Internal medicine residency training for unhealthy alcohol and other drug use: recommendations for curriculum design. Background: Unhealthy substance use is the spectrum from use that risks harm, to use associated with problems, to the diagnosable conditions of substance abuse and dependence, often referred to as substance abuse disorders. Despite the prevalence and impact of unhealthy substance use, medical education in this area remains lacking, not providing physicians with the necessary expertise to effectively address one of the most common and costly health conditions. Medical educators have begun to address the need for physician training in unhealthy substance use, and formal curricula have been developed and evaluated, though broad integration into busy residency curricula remains a challenge. Discussion: We review the development of unhealthy substance use-related competencies, and describe a curriculum in unhealthy substance use that integrates these competencies into internal medicine resident physician training. We outline strategies to facilitate adoption of such curricula by the residency programs. This paper provides an outline for the actual implementation of the curriculum within the structure of a training program, with examples using common teaching venues. We describe and link the content to the core competencies mandated by the Accreditation Council for Graduate Medical Education, the formal accrediting body for residency training programs in the United States. Specific topics are recommended, with suggestions on how to integrate such teaching into existing internal medicine residency training program curricula. Summary: Given the burden of disease and effective interventions available that can be delivered by internal medicine physicians, teaching about unhealthy substance use must be incorporated into internal medicine residency training, and can be done within existing teaching venues.
It is unknown if programs are adhering to these recommendations. Objective: To describe evaluation methods used by our nation's internal medicine residency programs and assess adherence to ACGME methodological recommendations for evaluation. Design: Nationwide survey. Participants: All internal medicine programs registered with the Association of Program Directors of Internal Medicine (APDIM). Measurements: Descriptive statistics of programs and tools used to evaluate competence; compliance with ACGME-recommended evaluative methods. Results: The response rate was 70%. Programs were using an average of 4.2-6.0 tools to evaluate their trainees with heavy reliance on rating forms. Direct observation and practice and data-based tools were used much less frequently. Most programs were using at least 1 of the Accreditation Council for Graduate Medical Education (ACGME)'s "most desirable" methods of evaluation for all 6 measures of trainee competence. These programs had higher support staff-to-resident ratios than programs using less desirable evaluative methods. Conclusions: Residency programs are using a large number and variety of tools for evaluating the competence of their trainees. Most are complying with ACGME-recommended methods of evaluation, especially if the support staff-to-resident ratio is high. abstract_id: PUBMED:20338034 Support for and aspects of use of educational games in family medicine and internal medicine residency programs in the US: a survey. Background: The evidence supporting the effectiveness of educational games in graduate medical education is limited. Anecdotal reports suggest their popularity in that setting. The objective of this study was to explore the support for and the different aspects of use of educational games in family medicine and internal medicine residency programs in the United States. Methods: We conducted a survey of family medicine and internal medicine residency program directors in the United States. The questionnaire asked the program directors whether they supported the use of educational games, their actual use of games, and the type of games being used and the purpose of that use. Results: Of 434 responding program directors (52% response rate), 92% were in support of the use of games as an educational strategy, and 80% reported already using them in their programs. Jeopardy-like games were the most frequently used games (78%). The use of games was equally popular in family medicine and internal medicine residency programs and popularity was inversely associated with more than 75% of residents in the program being International Medical Graduates. The percentage of program directors who reported using educational games as teaching tools, review tools, and evaluation tools were 62%, 47%, and 4% respectively. Conclusions: Given the widespread use of educational games in the training of medical residents, in spite of limited evidence for efficacy, further evaluation of the best approaches to educational games should be explored.
The guide situates general internal medicine as the primary care profession that focuses on preventive, short-term, and long-term care of adult patients. It assumes that a single pathway is sufficient for educating general internists and subspecialty-bound trainees. It identifies the learning experiences that should be part of general internal medicine residency training, lists the clinical competencies that are important for primary care practice, and describes the role of the integrative disciplines that should inform the care of every patient. It also describes a process that program directors and local program committees can use to develop competency-based curricula. abstract_id: PUBMED:1416560 The downsizing of internal medicine residency programs. A variety of forces are converging to reduce the number of internal medicine residency positions offered in this country. This reduction, referred to as downsizing, has been proposed as the solution to several of the problems facing internal medicine. We examine the forces that underlie the current enthusiasm for downsizing; we consider the alternative strategies by which downsizing might be implemented; and we consider the implications of these alternatives on different groups of stakeholders. Although downsizing may represent a legitimate approach to real problems, any mechanism to reduce the number of training positions in internal medicine will have broad implications for medical education and patient care well into the next century. Special efforts must be taken to ensure that downsizing will not exacerbate the existing problem of overspecialization and limited access to care. abstract_id: PUBMED:35398829 Identification and Validation of Entrustable Professional Activities in Indonesian Internal Medicine Residency Programs. Background: Entrustable professional activities (EPAs) are tasks entrusted to students who assist supervisors in determining their competencies. However, the competencies required and the end-educational stage in which each EPA item is assigned have yet to be determined by the stakeholders of internal medicine residency programs in Indonesia. This study aimed to identify and determine the activities in internal medicine residency programs which could be defined as EPAs in the competency-based curriculum of Indonesian internal medicine residency programs. Methods: A literature review was conducted to identify activities which could be examined as EPA items in Indonesian internal medicine residency programs, which were then validated by 10 educational experts. Two rounds of the Delphi method were conducted with participants consisting of the Indonesian Board of Internal Medicine professionals, residency program directors, internal medicine specialists, and internal medicine residents to evaluate the importance of the identified EPA items. The EPA items were rated on a Likert scale ranging from 1 to 5, and their variances were analyzed. The participants also rated the end-educational stage appropriate for each EPA item. The effect size was calculated between groups as (1) small, <0.3; (2) moderate, approximately 0.5; and (3) large, >0.8. Results: The literature review identified 29 modified items from the Royal College of Physicians and Surgeons (RCPS) and three items from other academically developed EPA designs. The expert discussion resulted in the validation of 28 EPA items (out of the 32 items in the initial EPA draft).
All 28 items were accepted after two rounds of the Delphi method, and a decrease in their variances was found. Conclusion: This study formulated 28 EPA items for Indonesian internal medicine residency programs. Further collaboration between the Board of Internal Medicine and residency program directors will be needed for the application of these EPA items at each residency year. abstract_id: PUBMED:2366588 Altering residency curriculum in response to a changing practice environment: use of the Mayo internal medicine residency alumni survey. To elicit the opinions of practicing internists who had graduated from a single internal medicine residency program about the adequacy of their training and its relevance to their medical practice, we mailed a survey to 1,342 physicians who had spent at least 1 year in the Mayo internal medicine residency training program. Of this group, 703 alumni (52%) responded to the survey, 532 of whom were currently practicing internal medicine. Our detailed analysis was based on responses from these 532 and, for some aspects of evaluation, on the 121 general internists who had completed residency training after 1970. Of the respondents, 42% spent more than 80% of their time in general medicine, and 53% had at least some subspecialty practice; 55% were involved in teaching, 20% in some research, and 37% in various administrative duties. In 27%, all patient-care activities involved primary care, an increase from 18% in a 1979 survey and 9% in 1972. Of those who were subspecialists, 67% spent more than half their time in subspecialty practice. Of those who were trained after 1970, 90% were board certified. Most respondents thought that their training in the internal medicine subspecialties was adequate, that additional procedure training was needed in joint aspiration, line placement, and flexible sigmoidoscopy, and that many allied medical areas were important to their practice and necessitated additional training. Although virtually all respondents assessed their inpatient training as adequate, only 42% were fully satisfied with their outpatient training. Alumni surveys can be useful in restructuring a residency program to meet the needs of the trainees. Answer: The study on the expectations of iPad use in an internal medicine residency program aimed to compare the anticipated expectations and perceptions after deployment among residents. Before the deployment of iPads, most residents believed that the iPad would improve patient care and efficiency on the wards. However, after deployment, the number of residents who "strongly agreed" with this sentiment decreased significantly (34% vs 15% for patient care, P<.001; 41% vs 24% for efficiency, P=.005). Residents with higher expectations were more likely to report using the iPad for placing orders post-call and during admission. Previous Apple iOS product owners were also more likely to use the iPad in key areas. Despite the decrease in strong agreement regarding the iPad's impact on patient care and efficiency, 84% of residents thought the iPad was a good investment for the residency program, and over half of the residents (58%) reported that patients commented on the iPad in a positive way. The study concluded that while the use of tablets such as the iPad by residents is generally well received, the high initial expectations highlight the danger of implementing new technologies without managing expectations. Therefore, education on the realistic expectations of iPad benefits may be warranted (PUBMED:23656727).
In summary, while iPads were well-received and considered a good investment by a majority of residents, the initial high expectations were not fully met, particularly in the areas of patient care and efficiency. This suggests that while iPads have a place in residency programs, it is important to set realistic expectations for their use to avoid the "hype" leading to potential disappointment.
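The iPad study rested on matched pre/post surveys of the same residents. The abstract does not say which statistical test was applied; for paired binary outcomes (e.g., "strongly agree" before vs after deployment), McNemar's test is one standard choice, sketched here with an invented contingency table:

```python
# Sketch of McNemar's test for matched pre/post binary responses. The 2x2
# table counts respondents by their (pre, post) answers; all counts invented.
from statsmodels.stats.contingency_tables import mcnemar

#        post: agree   post: not agree
table = [[15, 24],   # pre: agree
         [2, 74]]    # pre: not agree

result = mcnemar(table, exact=True)
print(f"McNemar: statistic={result.statistic}, p-value={result.pvalue:.4f}")
```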
Instruction: Is principal components analysis necessary to characterise dietary behaviour in studies of diet and disease? Abstracts: abstract_id: PUBMED:11527514 Is principal components analysis necessary to characterise dietary behaviour in studies of diet and disease? Objective: To assess the relative ability of principal components analysis (PCA)-derived dietary patterns to correctly identify cases and controls compared with other methods of characterising food intake. Subjects: Participants in this study were 232 endometrial cancer cases and 639 controls from the Western New York Diet Study, 1986-1991, frequency-matched to cases on age and county of residence. Design: Usual intake in the year preceding interview of 190 foods and beverages was collected during a personal interview using a detailed food-frequency questionnaire. Principal components analysis identified two major dietary patterns which we labelled 'healthy' and 'high fat'. Classification on disease status was assessed with separate discriminant analyses (DAs) for four different characterisation schemes: stepwise DA of 168 food items to identify the subset of foods that best discriminated between cases and controls; foods associated with each PCA-derived dietary pattern; fruits and vegetables (47 items); and stepwise DA of USDA-defined food groups (fresh fruit, canned/frozen fruit, raw vegetables, cooked vegetables, red meat, poultry, fish and seafood, processed meats, snacks and sweets, grain products, dairy, and fats). Results: In general, classification of disease status was somewhat better among cases (54.7% to 67.7%) than controls (54.0% to 63.1%). Correct classification was highest for fruits and vegetables (67.7% and 62.9%, respectively) but comparable to that of the other schemes (49.5% to 66.8%). Conclusions: Our results suggest that the use of principal components analysis to characterise dietary behaviour may not provide substantial advantages over more commonly used, less sophisticated methods of characterising diet. abstract_id: PUBMED:22950853 Dietary patterns obtained through principal components analysis: the effect of input variable quantification. Principal components analysis (PCA) is a popular method for deriving dietary patterns. A number of decisions must be made throughout the analytic process, including how to quantify the input variables of the PCA. The present study aims to compare the effect of using different input variables on the patterns extracted using PCA on 3-d diet diary data collected from 7473 children, aged 10 years, in the Avon Longitudinal Study of Parents and Children. Four options were examined: weight consumed of each food group (g/d), energy-adjusted weight, percentage contribution to energy of each food group and binary intake (consumed/not consumed). Four separate PCA were performed, one for each intake measurement. Three or four dietary patterns were obtained from each analysis, with at least one component that described 'more healthy' and 'less healthy' diets and one component that described a diet with high consumption of meat, potatoes and vegetables. There were no obvious differences between the patterns derived using percentage energy as a measurement and adjusting weight for total energy intake, compared to those derived using gram weights. Using binary input variables yielded a component that loaded positively on reduced fat and reduced sugar foods. 
The present results suggest that food intakes quantified by gram weights or as binary variables both resulted in meaningful dietary patterns and each method has distinct advantages: weight takes into account the amount of each food consumed and binary intake appears to describe general food preferences, which are potentially easier to modify and useful in public health settings. abstract_id: PUBMED:32083820 Empirical dietary patterns and their influence on health in epidemiological studies. Over the last two or three decades, empirical approaches to assessing the diet of the population have been used increasingly in the epidemiology of nutrition. However, in Russian studies, these approaches are used extremely rarely, which may be due to insufficient knowledge of the essence, methodological aspects and field of application of a posteriori dietary patterns. In this regard, the aim of this review was to highlight the essence, methods and main results of using empirical approaches to assessing the diet of the population. Results. This review discusses the main methodological features of multivariate analysis methods - factor analysis (principal component analysis) and cluster analysis. The main dietary-pattern trends described in the epidemiology of nutrition, which vary across countries and regions, are shown. The results of studies of the impact of dietary patterns on some indicators of health status (cardiovascular and oncological diseases, diseases of the musculoskeletal system, diabetes mellitus, metabolic syndrome and its components, mortality) are considered. A brief description of the few Russian studies that used the principal component method in identifying empirical dietary patterns of the population is given. In general, this review indicates that the use of multidimensional statistics to highlight population dietary patterns as part of an empirical assessment is nowadays one of the most important components of nutrition epidemiology in different countries. An empirical assessment allows us to consider the diets of the population from a factual point of view and significantly supplements the scientific knowledge obtained using other epidemiological approaches. Currently, considerable experience in the development of methodological approaches to dietary patterns, as well as in the analysis of their associations with various indicators of health status, has been accumulated. Conclusion. All this testifies to the high relevance of the considered epidemiological approaches and the need for their testing in Russian conditions, which, perhaps, will give new knowledge about the formation of dietary patterns and its impact on the health status of Russians. abstract_id: PUBMED:23360896 Evaluation of methodologies for assessing the overall diet: dietary quality scores and dietary pattern analysis. This paper aims to describe different approaches for studying the overall diet with advantages and limitations. Studies of the overall diet have emerged because the relationship between dietary intake and health is very complex with all kinds of interactions. These cannot be captured well by studying single dietary components. Three main approaches to study the overall diet can be distinguished. The first method is researcher-defined scores or indices of diet quality. These are usually based on guidelines for a healthy diet or on diets known to be healthy. The second approach, using principal component or cluster analysis, is driven by the underlying dietary data.
In principal component analysis, scales are derived based on the underlying relationships between food groups, whereas in cluster analysis, subgroups of the population are created with people that cluster together based on their dietary intake. A third approach includes methods that are driven by a combination of biological pathways and the underlying dietary data. Reduced rank regression defines linear combinations of food intakes that maximally explain nutrient intakes or intermediate markers of disease. Decision tree analysis identifies subgroups of a population whose members share dietary characteristics that influence (intermediate markers of) disease. It is concluded that all approaches have advantages and limitations and essentially answer different questions. The third approach is still more in an exploration phase, but seems to have great potential with complementary value. More insight into the utility of conducting studies on the overall diet can be gained if more attention is given to methodological issues. abstract_id: PUBMED:24725305 Principal components analysis of diet and alternatives for identifying the combination of foods that are associated with the risk of disease: a simulation study. Dietary patterns derived empirically using principal components analysis (PCA) are widely employed for investigating diet-disease relationships. In the present study, we investigated whether PCA performed better at identifying such associations than an analysis of each food on a FFQ separately, referred to here as an exhaustive single food analysis (ESFA). Data on diet and disease were simulated using real FFQ data and by assuming a number of food intakes in combination that were associated with the risk of disease. In each simulation, ESFA and PCA were employed to identify the combinations of foods that are associated with the risk of disease using logistic regression, allowing for multiple testing and adjusting for energy intake. ESFA was also separately adjusted for principal components of diet, foods that were significant in the unadjusted ESFA and propensity scores. For each method, we investigated the power with which an association between diet and disease could be identified, and the power and false discovery rate (FDR) for identifying the specific combination of food intakes. In some scenarios, ESFA had greater power to detect a diet-disease association than PCA. ESFA also typically had a greater power and a lower FDR for identifying the combinations of food intakes that are associated with the risk of disease. The FDR of both methods increased with increasing sample size, but when ESFA was adjusted for foods that were significant in the unadjusted ESFA, FDR were controlled at the desired level. These results question the widespread use of PCA in nutritional epidemiology. The adjusted ESFA identifies the combinations of foods that are causally linked to the risk of disease with low FDR and surprisingly good power. abstract_id: PUBMED:36615850 The Application of Clustering on Principal Components for Nutritional Epidemiology: A Workflow to Derive Dietary Patterns. In the last decades, different multivariate techniques have been applied to multidimensional dietary datasets to identify meaningful patterns reflecting the dietary habits of populations. Among them, principal component analysis (PCA) and cluster analysis represent the two most used techniques, either applied separately or in parallel. 
Here, we propose a workflow to combine PCA, hierarchical clustering, and a K-means algorithm in a novel approach for dietary pattern derivation. Since the workflow presents certain subjective decisions that might affect the final clustering solution, we also provide some alternatives in relation to different dietary data used. For example, we used the dietary data of 855 women from Catania, Italy. Our approach, defined as clustering on principal components, could be useful to leverage the strengths of each method and to obtain a better cluster solution. In fact, it seemed to disentangle dietary data better than simple clustering algorithms. However, before choosing between the alternatives proposed, it is suggested to consider the nature of dietary data and the main questions raised by the research. abstract_id: PUBMED:30373530 Application of a new dietary pattern analysis method in nutritional epidemiology. Background: Diet plays an important role in chronic disease, and the use of dietary pattern analysis has grown rapidly as a way of deconstructing the complexity of nutritional intake and its relation to health. Pattern analysis methods, such as principal component analysis (PCA), have been used to investigate various dimensions of diet. Existing analytic methods, however, do not fully utilize the predictive potential of dietary assessment data. In particular, these methods are often suboptimal at predicting clinically important variables. Methods: We propose a new dietary pattern analysis method using the advanced LASSO (Least Absolute Shrinkage and Selection Operator) model to improve the prediction of disease-related risk factors. Despite the potential advantages of LASSO, this is the first time that the model has been adapted for dietary pattern analysis. Hence, the systematic evaluation of the LASSO model as applied to dietary data and health outcomes is highly innovative and novel. Using Food Frequency Questionnaire data from NHANES 2005-2006, we apply PCA and LASSO to identify dietary patterns related to cardiovascular disease risk factors in healthy US adults (n = 2609) after controlling for confounding variables (e.g., age and BMI). Both analyses account for the sampling weights. Model performance in terms of prediction accuracy is evaluated using an independent test set. Results: PCA yields 10 principal components (PCs) that together account for 65% of the variation in the data set and represent distinct dietary patterns. These PCs are then used as predictors in a regression model to predict cardiovascular disease risk factors. We find that LASSO better predicts levels of triglycerides, LDL cholesterol, HDL cholesterol, and total cholesterol (adjusted R2 = 0.861, 0.899, 0.890, and 0.935 respectively) than does the traditional, linear-regression-based, dietary pattern analysis method (adjusted R2 = 0.163, 0.005, 0.235, and 0.024 respectively) when the latter is applied to components derived from PCA. Conclusions: The proposed method is shown to be an appropriate and promising statistical means of deriving dietary patterns predictive of cardiovascular disease risk. Future studies, involving different diseases and risk factors, will be necessary before LASSO's broader usefulness in nutritional epidemiology can be established. abstract_id: PUBMED:26037078 An exploratory study of dietary intake patterns among adults diagnosed with cardiovascular risk factors.
The objective of the present exploratory study was to assess compliance with the nutrient recommendations among a convenience sample of adults diagnosed with cardiovascular risk factors in northern Greece and evaluate their dietary intake patterns. Ninety-two people participated in this cross-sectional study. Dietary assessment was carried out using a semi-quantitative food frequency questionnaire. Principal components analysis (PCA) and hierarchical cluster analysis (HCA) were adopted to obtain dietary patterns and classify individuals with similar dietary behaviour. HCA, performed on the factorial scores obtained from PCA, revealed a 4-group interpretable and statistically significant clustering of participants. For all clusters, the mean daily intake for saturated fatty acids was more than 10% of total calories, while the mean sodium intake was above 1500 mg; additionally, a relatively low mean Mediterranean diet score was recorded. Dietary interventions should be considered to raise awareness and expand knowledge on the nutritional and functional benefits of heart-healthy foods. abstract_id: PUBMED:35416143 Exploring the association of dietary patterns with the risk of hypertension using principal balances analysis and principal component analysis. Objective: In the field of nutritional epidemiology, principal component analysis (PCA) has been used extensively in identifying dietary patterns. Recently, compositional data analysis (CoDA) has emerged as an alternative approach for obtaining dietary patterns. We aimed to directly compare and evaluate the ability of PCA and principal balances analysis (PBA), a data-driven method in CoDA, in identifying dietary patterns and their associations with the risk of hypertension. Design: Cohort study. A 24-h dietary recall questionnaire was used to collect dietary data. Multivariate logistic regression analysis was used to analyse the association between dietary patterns and hypertension. Setting: 2004 and 2009 China Health and Nutrition Survey. Participants: A total of 3892 study participants aged 18-60 years were included as the subjects. Results: PCA and PBA identified five patterns each. PCA patterns comprised a linear combination of all food groups, whereas PBA patterns included several food groups with zero loadings. The coarse cereals pattern identified by PBA was inversely associated with hypertension risk (highest quintile: OR = 0·74 (95 % CI 0·57, 0·95); P for trend = 0·037). None of the five PCA patterns was associated with hypertension. Compared with the PCA patterns, the PBA patterns were clearly interpretable and accounted for a higher percentage of variance in food intake. Conclusions: Findings showed that PBA might be an appropriate and promising approach in dietary pattern analysis. Higher adherence to the coarse cereals dietary pattern was associated with a lower risk of hypertension. Nevertheless, the advantages of PBA over PCA should be confirmed in future studies. abstract_id: PUBMED:33379357 Exploration of the Principal Component Analysis (PCA) Approach in Synthesizing the Diet Quality of the Malaysian Population. (1) Background: One of the main approaches to dietary assessment is the a posteriori method. Although extensive research has incorporated dietary assessment of a population through a posteriori application, this study is the first to examine the Malaysian population and use an a posteriori method and principal component analysis (PCA) to assess the dietary patterns of the Malaysian population.
The correlations between the dietary patterns derived via PCA and selected nutrient intakes were determined in this study sample; (2) Methods: A total of 3063 respondents (18 to 59 years old) covering Peninsular Malaysia, Sabah, and Sarawak, participated in this study. PCA was applied on the food frequency questionnaire collected from the respondents, and descriptive statistics and PCA were performed using SPSS version 21; (3) Results: Six patterns were identified: "traditional", "prudent", "modern", "western", "Chinese", and "combination" diets. Altogether, these six patterns were able to explain 45.9% of the total variability. A few components derived from the factor loadings showed positive associations with several nutrient markers. The traditional dietary pattern showed a moderate, positive correlation with total protein and total sugar intake, there was a significant moderate correlation between the prudent dietary pattern and dietary fibre, and there was a moderate positive association between the Chinese dietary pattern and total energy; and (4) Conclusions: The exploration of the PCA approach above may provide justification for assessment of dietary patterns rather than reliance on single nutrients or foods to identify potential connections to overall nutritional wellbeing as well as to explore the diet-disease relationship. However, further pattern-analysis studies must be conducted among the Malaysian population to establish the validity and reproducibility of this dietary approach in light of the numerous methodological issues that arise when performing PCA. Answer: Principal components analysis (PCA) is a widely used method in nutritional epidemiology to derive dietary patterns and characterise dietary behaviour in studies of diet and disease. However, the necessity of PCA for this purpose is not definitive and depends on the research objectives and the nature of the dietary data. Some studies suggest that PCA may not provide substantial advantages over more commonly used, less sophisticated methods of characterising diet. For instance, a study comparing PCA-derived dietary patterns with other methods of characterising food intake, such as discriminant analysis of food items and food groups, found that correct classification of disease status was comparable across different schemes, including PCA. This suggests that PCA may not be superior in identifying cases and controls in diet-disease studies (PUBMED:11527514). Other research has shown that PCA can yield meaningful dietary patterns, but the choice of input variables (e.g., weight consumed, energy-adjusted weight, percentage contribution to energy, or binary intake) can affect the patterns extracted. The use of binary input variables, for example, may describe general food preferences, which could be useful in public health settings (PUBMED:22950853). Moreover, PCA is not the only method available for dietary pattern analysis. Alternatives such as cluster analysis, reduced rank regression, and decision tree analysis have their own advantages and limitations, and they essentially answer different questions (PUBMED:23360896). Some studies have proposed new methods, such as the LASSO model, which may improve the prediction of disease-related risk factors compared to traditional PCA (PUBMED:30373530). In summary, while PCA is a valuable tool in nutritional epidemiology, it is not strictly necessary for characterising dietary behaviour in studies of diet and disease.
Researchers should consider the specific goals of their study, the nature of their dietary data, and the potential advantages of alternative methods when deciding whether to use PCA or another approach (PUBMED:24725305, PUBMED:36615850, PUBMED:32083820, PUBMED:35416143, PUBMED:33379357).
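As a concrete illustration of the a posteriori workflow discussed above, the following minimal sketch derives dietary patterns with PCA and then groups respondents by Ward hierarchical clustering on the factor scores, mirroring the PCA-plus-HCA design of the Greek study. All data, dimensions, and parameter choices are illustrative stand-ins, not values from any of the cited studies.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    # Illustrative FFQ matrix: 92 respondents x 20 food groups (random stand-in data)
    rng = np.random.default_rng(0)
    intake = rng.gamma(shape=2.0, scale=1.5, size=(92, 20))

    # Standardize intakes, then extract components interpreted as dietary patterns
    z = StandardScaler().fit_transform(intake)
    pca = PCA(n_components=6)
    scores = pca.fit_transform(z)
    print("total variance explained:", round(pca.explained_variance_ratio_.sum(), 3))

    # Ward hierarchical clustering on the factor scores, cut into 4 clusters
    # as reported for the Greek sample
    tree = linkage(scores, method="ward")
    clusters = fcluster(tree, t=4, criterion="maxclust")
    print("cluster sizes:", np.bincount(clusters)[1:])

In practice, the loading matrix (pca.components_) is then inspected to label patterns such as "traditional" or "prudent"; the clustering step only groups respondents with similar scores and does not by itself name the patterns.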
Instruction: Is an Increase in Thyroid Nodule Volume a Risk Factor for Malignancy? Abstracts: abstract_id: PUBMED:25872408 Is an Increase in Thyroid Nodule Volume a Risk Factor for Malignancy? Background: Most benign thyroid nodules found on fine-needle aspiration cytology (FNAC) can be followed with periodic ultrasonography. During follow-up, when nodules grow, re-aspiration or surgical resection for a histologic diagnosis is recommended. However, there is little evidence regarding the malignancy risk associated with nodule growth. Method: We retrospectively reviewed the records of 542 patients with FNAC-diagnosed adenomatous nodules (ANs) who underwent surgery in 2011-2012 at Kuma Hospital. Among them, 196 patients had surgical resection because of nodule volume growth (median, 1.94 times; range, 1.21-27.60) during the observation period (mean, 45.9 months). Excluding nodule growth, the remaining 346 patients underwent surgery for various reasons including the large size of nodules or the appearance of undefined ultrasound features suspicious for malignancy during follow-up. For comparison, we reviewed 409 patients with FNAC-diagnosed follicular neoplasms (FNs) operated on in 2011-2013. Most (n=327) underwent surgery shortly after the FNAC diagnosis, while 82 patients were observed for a period of time and had a late operation due to nodule volume growth (median, 2.70 times; range, 1.27-15.82). Results: The histologic diagnoses of the 196 growing FNAC-diagnosed ANs were 158 ANs, 32 follicular adenomas (FAs), 4 follicular tumors of uncertain malignant potential (FT-UMP; 2%), and 2 malignancies (1%). The 346 patients who underwent surgery for reasons other than nodular growth had 16 FT-UMP (4.6%) and 16 malignancies (4.6%). This suggests that nodule growth itself is not a risk factor for malignancy. On the other hand, there were 23 FT-UMP (28%) and 15 malignancies (18.3%) in the 82 growing FNAC-diagnosed FNs, while there were 44 FT-UMP (13.5%) and 54 malignancies (16.5%) in the 327 FNAC-diagnosed FN patients who underwent immediate surgery. The malignant potential was significantly higher in the growing-FN group than the immediate-surgery FN group (p < 0.05). No significant difference was found in the volume change between the benign and the FT-UMP plus malignant group in the growing FNs, suggesting that growth rate does not correlate with malignant potential. Conclusion: This is the first demonstration that the malignancy risk is low in FNAC-diagnosed ANs, even if the nodules grow significantly, whereas FNs have a higher risk when they grow. abstract_id: PUBMED:31375058 Determining Whether Tumor Volume Doubling Time and Growth Rate Can Predict Malignancy After Delayed Diagnostic Surgery of Follicular Neoplasm. Background: Tumor volume doubling time (TVDT) is considered a dynamic indicator of tumor growth and progression. We aimed to clarify whether TVDT should be considered in the cytological diagnosis of follicular neoplasms (FNs). Methods: This retrospective cohort study included 100 consecutive FN patients with a TVDT >1 year for whom surgical resection was delayed for >1 year. Changes in tumor volume and the TVDT of each nodule were calculated by serial neck ultrasonography (US) over a median follow-up of 50 months. Results: Among 100 surgically resected FNs, 58 (58%) were benign and 42 (42%) were malignant tumors. At the 5-year follow-up, most FN nodules showed volume increases of >50%, accounting for 65% of benign and 62% of malignant nodules.
No significant difference was observed in tumor growth rates and time to a >50% increase in tumor volume between benign and malignant nodules (p = 0.299 and p = 0.378, respectively). The calculated TVDT and a >50% volumetric increase showed a linear relationship with a high concordance (R² = 0.883; p < 0.001). The risk of malignancy was not significantly associated with TVDT (relative risk = 1.00; [95% confidence interval 0.99-1.01]; p = 0.784). Even when US features were considered, no significant association was found between TVDT and the risk of malignancy. Conclusions: The size of most FNs increased exponentially with time; the tumor growth rates of benign and malignant nodules were similar. Despite the usefulness of TVDT as an indicator of the growth velocity of FN nodules, TVDT could not predict malignancy in FNs. Clinical surveillance based on the growth of thyroid nodules categorized as FNs on serial neck US does not predict the risk of malignancy. abstract_id: PUBMED:37247771 Association of polycyclic aromatic hydrocarbon internal exposure and urinary iodine concentration with thyroid volume in children. Thyroid volume was proposed as a factor for malignancy in evaluating thyroid nodules. Previous studies have demonstrated the endocrine-disrupting effect of polycyclic aromatic hydrocarbons (PAHs), but studies on the association between internal exposure of PAHs and thyroid volume are still scarce. In this work, we evaluated the association of polycyclic aromatic hydrocarbon internal exposure and urinary iodine concentration with thyroid volume in 590 school-age children without thyroid disease in Guangzhou, China. Urinary hydroxylated polycyclic aromatic hydrocarbons (OH-PAHs), urinary iodine concentrations, and thyroid volumes were measured. The mean concentrations of urinary iodine and ΣOH-PAHs were 271.1 μg/L and 3.27 μg/L, respectively, and the mean thyroid volume was 2.4 mL. The associations of urinary iodine and OH-PAH concentrations with thyroid volume were investigated by multivariable linear regression and the Bayesian kernel machine regression models. Urinary ΣOH-PAHs were observed to be significantly positively associated with thyroid volume in multivariable linear regression models. Each one-unit increase in the log-transformed concentration of ΣOH-PAHs corresponded to a 3.88% change in thyroid volume. The Bayesian kernel machine regression model demonstrated a positive joint effect of increased urinary ΣOH-PAHs on thyroid volume. Moreover, urinary ΣOH-PAHs were statistically significantly linked to urinary iodine, and iodine mediated the relationship between urinary OH-PAHs and thyroid volume with a mediated proportion of 15.2%. abstract_id: PUBMED:30916163 Is thyroid nodule volume predictive for malignancy? Objective: We aimed to determine the roles of preoperative thyroid nodule diameter and volume in the prediction of malignancy. Subjects And Methods: The medical records of patients who underwent thyroidectomy between January 2007 and December 2014 were reviewed. The nodule diameters were grouped as < 1 cm, 1-1.9 cm, 2-3.9 cm and ≥ 4 cm, and volume was grouped as < 5 cm³, 5-9.9 cm³ and ≥ 10 cm³. ROC (Receiver Operating Characteristic) curve analysis was performed to find the optimal cutoff value of diameter and volume that can predict malignancy. Results: There were 5561 thyroid nodules in 2463 patients. Five hundred and forty (9.7%) nodules were < 1 cm, 2,413 (43.4%) were 1-1.9 cm, 1,600 (28.8%) were 2-3.9 cm and 1,008 (18.1%) were ≥ 4 cm.
Malignancy rates were 25.6%, 10.6%, 9.7% and 8.5% in nodules < 1 cm, 1-1.9 cm, 2-3.9 cm and ≥ 4 cm, respectively. When classified according to volume, 3,664 (65.9%) nodules were < 5 cm³, 594 (10.7%) were 5-9.9 cm³ and 1,303 (23.4%) were ≥ 10 cm³. The malignancy rates were 12.7%, 11.4% and 7.8% for the nodules < 5 cm³, 5-9.9 cm³ and ≥ 10 cm³, respectively (p < 0.001). In ROC curve analysis, an optimal cutoff value for diameter or volume that can predict malignancy in all thyroid nodules or nodules ≥ 4 cm could not be determined. Conclusion: In this surgical series, malignancy risk did not increase with increasing nodule diameter or volume. Although the volume of malignant nodules ≥ 4 cm was higher than that of benign nodules ≥ 4 cm, there was no optimal cutoff value. The diameter or volume of the nodule cannot be used to predict malignancy or decide on surgical resection. abstract_id: PUBMED:38027220 Risk factor analysis and prediction model for papillary thyroid carcinoma with lymph node metastasis. Objective: We aimed to identify the clinical factors associated with lymph node metastasis (LNM) based on ultrasound characteristics and clinical data, and develop a nomogram for personalized clinical decision-making. Methods: A retrospective analysis was performed on 252 patients with papillary thyroid carcinoma (PTC). The patients' information was subjected to univariate and multivariate logistic regression analyses to identify risk factors. A nomogram to predict LNM was established by combining the risk factors. The performance of the nomogram was evaluated using receiver operating characteristic (ROC) curve, calibration curve, cross-validation, decision curve analysis (DCA), and clinical impact curve. Results: There were significant differences between LNM and non-LNM groups in terms of age, sex, tumor size, hypoechoic halo around the nodule, thyroid capsule invasion, lymph node microcalcification, lymph node hyperechoic area, peak intensity of contrast (PI), and area under the curve (AUC) of the time-intensity curve of contrast (P < 0.05). Age, sex, thyroid capsule invasion, and lymph node microcalcification were independent predictors of LNM and were used to establish the predictive nomogram. The area under the ROC curve was 0.800, indicating excellent discrimination and calibration. The predictive accuracy was 0.757 and the kappa value was 0.508. The calibration curve, DCA, and clinical impact curve demonstrated that the prediction model had excellent net benefits and clinical practicability. Conclusion: Age, sex, thyroid capsule invasion, and lymph node microcalcification were identified as significant risk factors for predicting LNM in patients with PTC. The visualized nomogram model may assist clinicians in predicting the likelihood of LNM in patients with PTC prior to surgery. abstract_id: PUBMED:24338169 The relationship between thyroid volume and malignant thyroid disease. The present retrospective study aimed to investigate the relationship between thyroid volume and the prevalence of thyroid cancer. We investigated the data of 3,850 patients who underwent fine-needle aspiration biopsy (FNAB). Biopsy results were evaluated as diagnostic or nondiagnostic, and diagnostic results were classified as benign, malignant, and indeterminate. We included 2,672 patients whose first FNAB was performed in our hospital and whose biopsy results were diagnostic, excluding the indeterminate subgroup. We obtained cytologic data, levels of serum thyroid-stimulating hormone (TSH), and thyroid volumes of those patients retrospectively.
Among 2,672 patients with thyroid nodule, 2,562 (95.9%) patients had benign cytology and 110 (4.1%) patients had malignant cytology. There was no correlation between malignancy and gender (p = 0.935), and patients with malignant cytology were younger (52 vs 59, p < 0.001). Also, TSH levels were higher in patients with malignant than benign cytology (p = 0.017). Median volume of the right lobe, left lobe, and total thyroid for patients who had malignant cytology was significantly lower than for patients who had benign cytology (8.3 ml, 7.1 ml, and 15.9 ml vs 10.8 ml, 9.0 ml, and 20.6 ml, respectively; p ≤ 0.001 for all parameters). The results demonstrated that thyroid cancer prevalence was higher in patients with low thyroid volume. According to our results, thyroid volume should be considered a risk factor for malignancy in the evaluation of thyroid nodules. abstract_id: PUBMED:15704609 Thyroid antibodies and tumor necrosis factor-alpha in patients with benign thyroid nodules treated by percutaneous ethanol injections. Objective: Treatment of benign thyroid tumors with percutaneous ethanol injections (PEI) is an alternative to radioiodine and surgery. This procedure causes a release of large amounts of denatured thyroglobulin within the gland which may become an autoantigen, triggering the mechanism of autoimmunization. The aim of the study was to investigate whether ethanol injections can induce increased levels of thyroid autoantibodies and tumor necrosis factor-alpha (TNF-alpha) in patients with nonfunctioning or pre-toxic thyroid nodules. Material And Methods: Thirty-four patients (31 F, 3 M) with single benign thyroid tumors were enrolled, 23 (20 F/3 M) with nonfunctioning nodule (group 1) and 11 (F) with pre-toxic nodule characterized by normal free thyroid hormones and low TSH (group 2). Under sonographic guidance, sterile 96% ethanol solution was injected into thyroid nodules at 2-week intervals up to a dose of 0.7-1.0 ml of ethanol per 1.0 ml nodule volume. TSH, fT4, thyroglobulin antibodies and thyroperoxidase antibodies as well as TNF-alpha levels were assessed prior to alcohol administration and 3, 6, and 12 months after the end of treatment. Results: PEI treatment decreased tumor volume by 75.8% in group 1 and by 80.4% in group 2, and normalized TSH level in 90.9% of patients with pre-toxic nodules. No statistically significant differences in thyroperoxidase antibodies and TNF-alpha levels were observed during the study period in both groups. Thyroglobulin antibody levels increased significantly 6 months after treatment in both groups, but returned to the baseline levels after 12 months. No significant difference in peak thyroglobulin antibody levels between the two groups was seen. Conclusion: The PEI procedure is a safe method for treating nonfunctioning and pre-toxic thyroid nodules since this treatment reduces tumor size significantly without inducing long-lasting autoimmune reactions in the thyroid gland. Transient increase in thyroglobulin antibodies indicates that this procedure should be performed with caution in patients at risk of autoimmune diseases. abstract_id: PUBMED:22127951 Serum thyrotropin as a risk factor for thyroid malignancy in euthyroid subjects with thyroid micronodule. Background: An elevated thyroid stimulating hormone (TSH) level is a risk factor for differentiated thyroid cancer. The present study investigated whether TSH is a risk factor for thyroid microcarcinoma (TMC) in euthyroid subjects with a micronodule.
Methods: In this cross-sectional study, 483 euthyroid subjects (77 male patients, 406 female patients) with a thyroid micronodule were enrolled and underwent ultrasound-guided fine-needle aspiration biopsy. Serum TSH level was measured in every subject. The final diagnostic outcome (malignant or benign) was defined in conjunction with cytology and histology. Results: Compared with patients with TSH levels below the population mean, patients with levels above the population mean had a significantly higher malignancy rate (10.8% vs 4.8%, p = .025). Multiple logistic regression analysis revealed TSH as an independent risk factor for malignancy in euthyroid patients with a micronodule (odds ratio, 1.449; 95% confidence interval, 1.033-2.033; p = .031). Conclusions: Measurement of TSH in subjects with a thyroid micronodule is helpful to estimate cancer risk. abstract_id: PUBMED:37828412 Criteria to Evaluate Tumor Enlargement During the Active Surveillance of High-Risk Thyroid Nodules: Which is Better, Diameter or Volume? Introduction: Tumor enlargement is the most common parameter identifying disease progression during active surveillance, but the value and significance of the changes in tumor diameter and volume in the evaluation of tumor growth have not been compared. Methods: This cohort study included 468 patients with high-risk thyroid nodules, in whom nodule size change was monitored using ultrasound, to compare the changes in tumor diameter and volume in assessing tumor growth. Results: A total of 569 high-risk thyroid nodules were found in the 468 patients. A total of 14 nodules (2.5%) showed a diameter increase ≥ 3 mm. The number of nodules with a peak volume change exceeding 50% and 100% was 185 (32.5%) and 86 (15.1%), respectively. Among the 555 stable nodules, the number of nodules with volume fluctuations exceeding 50% and 100% was 171 (30.8%) and 72 (13.0%), respectively. Among 212 stable nodules at the baseline and in the first three follow-ups, the percentage of peak volume fluctuations exceeding 50% (48.5% vs. 28.5%, p = 0.004) and 100% (26.5% vs. 8.3%, p < 0.001) in the nodules with the sum of three diameters (SOTDs) ≤ 1 cm was significantly higher than that of nodules with SOTDs > 1 cm. A statistically significant difference was also found in the range distribution of SOTDs ≤ 1 cm and SOTDs > 1 cm (p = 0.007). Conclusions: Volume is not an appropriate method for determining tumor growth. Tumor diameter measurement alone serves as a better surrogate for disease progression in sonographically high-risk thyroid nodules than volume. abstract_id: PUBMED:22762696 Vitamin D deficiency as a potentially modifiable risk factor for thyroid cancer. Background: Vitamin D, classically involved in calcium homeostasis, has garnered recent interest for its potential role in cancer prevention and therapy. Although few risk factors have been established in the development of well-differentiated thyroid carcinoma, some would argue that no clear modifiable risk factor exists. Our study is one of the first to explore the relationship between vitamin D deficiency and thyroid cancer. Methods: This retrospective cohort study was done at a university-affiliated thyroid cancer centre. In 2010, 212 patients undergoing thyroidectomy had their preoperative 25-hydroxyvitamin D3 levels recorded in addition to the final pathologies of their resected thyroid nodule.
The patients were stratified based on vitamin D status: vitamin D deficiency (VDD), reflecting levels under the established threshold of 37.5 nmol/L; and vitamin D sufficiency (VDS), reflecting levels above it. The primary outcome of malignancy rate was used to compare the proportion of malignant nodules in the VDD versus the VDS groups. Results: The malignancy rate rose when comparing the VDS and VDD groups, from 37.5% (33 of 88) to 75% (9 of 12), respectively, corresponding to a relative risk of 2.0 (p = .03, 95% CI 1.07-2.66). Conclusion: This is the first study to demonstrate the inverse relationship between VDD and well-differentiated thyroid carcinoma. Larger prospective studies are needed to replicate these results. Should this happen, VDD may be the first modifiable risk factor for thyroid cancer. Answer: An increase in thyroid nodule volume is not consistently a risk factor for malignancy. The evidence from the studies provided suggests that the relationship between nodule growth and malignancy risk varies depending on the type of thyroid nodule and other factors. For instance, a study that retrospectively reviewed records of patients with FNAC-diagnosed adenomatous nodules (ANs) who underwent surgery found that the malignancy risk is low in FNAC-diagnosed ANs, even if the nodules grow significantly. In contrast, follicular neoplasms (FNs) have a higher risk when they grow (PUBMED:25872408). Another study involving follicular neoplasms (FNs) indicated that tumor volume doubling time (TVDT) could not predict malignancy in FNs, as the tumor growth rates of benign and malignant nodules were similar. This suggests that clinical surveillance based on the growth of thyroid nodules categorized as FNs on serial neck ultrasonography does not predict the risk of malignancy (PUBMED:31375058). A retrospective study aimed to determine the roles of preoperative thyroid nodule diameter and volume in the prediction of malignancy concluded that malignancy risk did not increase with increasing nodule diameter or volume. No optimal cutoff value for diameter or volume that can predict malignancy in all thyroid nodules or nodules ≥ 4 cm could be determined (PUBMED:30916163). In the context of papillary thyroid carcinoma (PTC) with lymph node metastasis (LNM), factors such as age, sex, thyroid capsule invasion, and lymph node microcalcification, rather than nodule volume, were identified as significant risk factors for predicting LNM (PUBMED:38027220). Another study found that thyroid cancer prevalence was higher in patients with low thyroid volume, suggesting that thyroid volume should be considered as a risk factor for malignancy in the evaluation of thyroid nodules (PUBMED:24338169). Lastly, a cohort study concluded that volume is not an appropriate method for determining tumor growth in high-risk thyroid nodules, and that tumor diameter measurement alone serves as a better surrogate for disease progression than volume (PUBMED:37828412). In summary, while nodule growth or volume increase is a factor considered during the follow-up of thyroid nodules, it is not a definitive risk factor for malignancy across all types of nodules.
The evidence suggests that other factors, such as the type of nodule, presence of specific ultrasound features, and patient demographics, may be more predictive of malignancy risk than nodule volume alone. For example, in the case of follicular neoplasms, growth did not correlate with an increased risk of malignancy (PUBMED:31375058), and in patients with papillary thyroid carcinoma, factors like age, sex, and specific ultrasound characteristics were more significant predictors of lymph node metastasis than nodule volume (PUBMED:38027220). Furthermore, another study found that thyroid cancer prevalence was higher in patients with lower thyroid volume, which suggests that smaller thyroid volume, rather than growth, might be associated with a higher risk of malignancy (PUBMED:24338169). Additionally, a cohort study indicated that changes in tumor diameter rather than volume are more appropriate for assessing tumor growth in high-risk thyroid nodules (PUBMED:37828412). It is also worth noting that one study found an inverse relationship between vitamin D deficiency and well-differentiated thyroid carcinoma, suggesting that vitamin D deficiency may be a modifiable risk factor for thyroid cancer (PUBMED:22762696). However, this finding is not directly related to nodule volume but rather to a potential systemic factor that could influence thyroid cancer risk. In conclusion, the relationship between thyroid nodule volume and malignancy is complex and cannot be generalized. The decision to pursue further diagnostic procedures or surgery should not be based solely on nodule growth but should take into account a comprehensive evaluation of all relevant clinical factors.
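Two quantities recur throughout the thyroid abstracts above: nodule volume, conventionally estimated from three ultrasound diameters with the ellipsoid approximation, and tumor volume doubling time, computed with the Schwartz formula TVDT = t · ln 2 / ln(V2/V1). The sketch below implements both; the measurements are invented for illustration and are not taken from the cited studies.

    import math

    def ellipsoid_volume_ml(length_cm, width_cm, depth_cm):
        # Conventional ellipsoid approximation for ultrasound nodule volume
        return math.pi / 6.0 * length_cm * width_cm * depth_cm

    def doubling_time_days(v1_ml, v2_ml, interval_days):
        # Schwartz formula: TVDT = t * ln(2) / ln(V2 / V1); requires V2 > V1
        return interval_days * math.log(2) / math.log(v2_ml / v1_ml)

    v_baseline = ellipsoid_volume_ml(1.8, 1.2, 1.0)   # about 1.13 mL
    v_followup = ellipsoid_volume_ml(2.3, 1.5, 1.3)   # about 2.35 mL
    ratio = v_followup / v_baseline                   # growth ratio, cf. the median 1.94-fold growth above
    tvdt_years = doubling_time_days(v_baseline, v_followup, 548) / 365.25
    print(f"growth ratio {ratio:.2f}, TVDT {tvdt_years:.1f} years")

Note how the two metrics diverge: a nodule can double in volume (ratio 2.0) with only about a 26% increase in each diameter, which is one reason the last abstract above argues that diameter change is the more robust surveillance criterion.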
Instruction: Can regional lymph node involvement be predicted in patients with carcinoma of the penis? Abstracts: abstract_id: PUBMED:8012778 Can regional lymph node involvement be predicted in patients with carcinoma of the penis? Objective: To detect the incidence of metastases in regional nodes (inguinal and external iliac) in patients with carcinoma of the penis and to determine whether nodal involvement was predictable pre-operatively by clinical and histological parameters. Patients And Methods: Seventy-eight patients who in total had undergone 135 groin dissections were studied. The incidence of inguinal and iliac node metastases was correlated with factors such as the size of the nodes, the histological degree of differentiation, the extent of penile involvement by the primary tumour and the clinical palpability of the iliac nodes. Results: The incidence of metastases to the inguinal and iliac nodes was 74% and 32% of patients, respectively. The risk of involvement was equal on both sides irrespective of whether the nodes were palpable. Inguinal nodes larger than 2 cm in diameter and poor histological differentiation of the primary tumour were significant predictors of inguinal node involvement. The palpability of the iliac nodes, inguinal nodes larger than 2 cm and the fixity of the inguinal nodes were important indicators of metastases to iliac nodes. Extension of the primary tumour to the proximal shaft of the penis was associated with a significantly higher incidence of inguinal node but not iliac node metastases. None of the parameters studied identified all the patients with nodal metastases. Conclusions: In the absence of any reliable predictor of nodal metastases, all patients with carcinoma of the penis require intensive and continued follow-up to detect signs of nodal involvement. In developing countries, however, where patients do not come for regular follow-up and often present with fungating inguinal secondaries, a policy of early bilateral regional node clearance despite the level of morbidity is preferable. abstract_id: PUBMED:17296384 Pelvic lymph node dissection for penile carcinoma: extent of inguinal lymph node involvement as an indicator for pelvic lymph node involvement and survival. Purpose: We identified pathological parameters of inguinal lymph node involvement with the aim of predicting pelvic lymph node involvement and survival. Materials And Methods: A total of 308 patients with penile carcinoma and adequate follow-up were included in this study. The outcome of 102 patients who underwent lymphadenectomy for lymph node metastases was analyzed further. Histopathological characteristics of the regional lymph nodes were reviewed including unilateral or bilateral involvement, the number of involved nodes, pathological tumor grade of the involved nodes, and the presence of extracapsular growth. Results: Tumor grade of the involved inguinal lymph nodes (OR 6.0, 95% CI 1.2-30.3) and the number of involved nodes (2 or less vs more than 2) (OR 12.1, 95% CI 3.0-48.1) were independent prognostic factors for pelvic lymph node involvement. Extracapsular growth (OR 2.3, 95% CI 1.1-4.8), bilateral inguinal involvement (OR 3.4, 95% CI 1.2-9.4) and pelvic lymph node involvement (OR 3.1, 95% CI 1.4-6.6) were independent prognostic factors for disease-specific survival.
Conclusions: Patients with only 1 or 2 inguinal lymph nodes involved without extracapsular growth and no poorly differentiated tumor within these nodes are at low risk of pelvic lymph node involvement and have a good prognosis with a 5-year survival rate of approximately 90%. Pelvic lymph node dissection seems to be unnecessary in these cases. abstract_id: PUBMED:16753383 Size of metastasis in the sentinel node predicts additional nodal involvement in penile carcinoma. Purpose: The majority of patients with penile cancer with a tumor-positive sentinel node do not benefit from complementary lymph node dissection because additional involved nodes are absent. We analyzed factors that may determine the involvement of additional nodes. Materials And Methods: A total of 158 patients with clinically node-negative penile carcinoma underwent sentinel node biopsy. Complementary inguinal lymph node dissection was performed when the sentinel node was tumor positive. The size of the sentinel node metastasis was measured and classified as micrometastasis--2 mm or less, or macrometastasis--more than 2 mm. Sentinel and dissection specimen nodes were step-sectioned. Factors were analyzed for their association with additional nodal involvement, including stage, diameter, grade, absence or presence of vascular invasion of the primary tumor, and sentinel node metastasis size. Results: Tumor-positive sentinel nodes were found in 46 groins and complementary lymph node dissection was performed. Nine of these 46 groins (20%) contained additional involved lymph nodes. On univariate and multivariate analyses, the size of the sentinel node metastasis proved to be the only significant prognostic variable for additional lymph node involvement (each p = 0.02). None of the 15 groins with only micrometastasis in the sentinel node contained additional involved nodes. Conclusions: In penile carcinoma, additional nodal involvement was related to the size of the metastasis in the sentinel node. Sentinel node micrometastasis was not associated with other involved lymph nodes. This finding suggests that these patients can be spared complementary lymph node dissection. abstract_id: PUBMED:32755338 Lymph Node Mapping in Patients with Penile Cancer Undergoing Pelvic Lymph Node Dissection. Purpose: A map of pelvic lymph node metastasis in patients with penile cancer helps to clarify the pattern of pelvic spread and define the reasonable limits of dissection, but such a map has not been established. We aim to provide an accurate map of lymph node metastasis in patients with penile cancer and determine the reasonable extent of pelvic lymph node dissection. Materials And Methods: We enrolled 128 patients with penile cancer undergoing pelvic lymph node dissection at our institution from 1999 to 2018. The numbers of removed lymph nodes and positive lymph nodes at 10 distinct regions were recorded. The chi-square and Fisher exact tests were used. Results: The median number of pelvic lymph nodes retrieved was 18 (IQR 10-30), with the majority being from the external iliac package (43.0%) and obturator package (31.9%). Pelvic lymph node metastasis was present in 57/128 (44.5%) patients. The median number of positive pelvic lymph nodes removed was 2 (IQR 1-4), with the majority being from the external iliac package (50.0%) and obturator package (36.6%). Advanced T-stage was related to higher risk of pelvic lymph node metastasis, which was present in 30.3%, 44.2%, 59.0% and 58.3% of patients with pT1, pT2, pT3 and pT4, respectively.
Notably, 2 patients had crossover metastasis from 1 inguinal region to the contralateral pelvic region. Conclusions: We present a detailed map of pelvic lymph node metastasis in patients with penile carcinoma. The external iliac and obturator packages appear to be most commonly involved. Optimal pelvic lymph node dissection may extend to the common iliac artery, including common iliac, external iliac, internal iliac and obturator lymph nodes. Extranodal extension in inguinal nodes may not be as important as previously thought. abstract_id: PUBMED:22641958 Sentinel lymph node biopsy in penile carcinoma. Sentinel lymph node (SLN) biopsy is a fairly new technique that is becoming the standard of care for regional lymph node staging of many solid tumors. This technique is based on the hypothesis of stepwise distribution of malignant cells in the lymphatic system. The absence of tumor cells in the first lymph node(s) in the lymphatic drainage of a tumor would indicate the absence of further spread in the regional lymph node basin(s). Therefore, this first lymph node is the guardian (sentinel) of the regional lymph node basin. To localize the sentinel node preoperatively, lymphoscintigraphy is usually performed after intradermal peritumoral injections of colloid particles labeled with technetium-99m. The tracer is transported through the lymphatic channels to the first draining nodes in the groins and is visible on the lymphoscintigram as hot spots. The main advantage of SLN biopsy in penile cancer is to decrease the treatment-related morbidity without compromising the survival benefit for the patient. Recent figures indicate a false-negative rate of 7%, with a complication rate of less than 5% for SLN biopsy. In conclusion, sentinel node biopsy of patients with penile cancer has evolved into a highly reliable procedure enabling the detection of lymph node invasion at the earliest possible time with minimal morbidity. With this technology at hand, which minimizes the treatment-related morbidity, there is hardly any place for standard lymphadenectomy in penile cancer patients. abstract_id: PUBMED:8053721 Carcinoma of the penis. What to do with the regional lymph nodes? From 1982 to 1990, we diagnosed and treated 23 cases of squamous cell carcinoma of the penis. The minimum follow-up was 3 years (range 3-8 years; mean 4 years). Special emphasis was placed on the exact staging of the tumor according to the TNM classification, above all in relation to inguinal and iliac lymph node involvement, which indicates the choice of treatment and the prognosis. At the time of diagnosis, 14 patients (60.8%) had palpable inguinal lymph nodes. This was reduced to 10 patients after a 3-6 week course of antibiotic therapy. Treatment of the primary lesion is usually by partial amputation of the penis, which enables us to determine the pathological stage and the histological grade of the tumor and, consequently, our approach to the regional lymph nodes. Regional lymphadenectomy was performed early in 11 cases and delayed in 7 cases; i.e., when pathological nodes were palpable in the groin. Four patients (36.3%) had positive nodes in the first group and 6 (85.7%) in the second group. Forty percent had bilateral lymph node involvement. Overall the surgical technique carried a morbidity rate of 38.8% for skin necrosis and 33.3% for localized infections, which delayed healing but required no plastic surgery. 
Inguinal lymph node involvement is a major prognostic factor: when negative, our 3-year survival rate was 87.5%, which dropped to 40% when positive, despite lymphadenectomy. Currently, in relatively young patients with high-stage, high-grade tumors, our approach is to do early bilateral ilioinguinal lymphadenectomy. Radical dissection carries an acceptable morbidity and mortality today and can benefit patients with positive nodes that may be undetected even by the best methods for clinical staging. abstract_id: PUBMED:33814858 Modified Inguinal Lymph Node Dissection in Groin-Negative Patients of Penile Cancer: Our Experience. Cancer of the penis is an important health problem in India, causing significant morbidity. Involvement of locoregional lymph nodes is the most significant prognostic factor for patients with penile cancer. In this study, we reviewed clinical data of all patients who underwent modified inguinal lymph node dissection as a means to diagnose micro-metastasis in inguinal lymph nodes, and analysed the outcomes. We retrospectively reviewed the hospital clinical charts of patients treated for carcinoma of the penis. Inguinal and distant metastases were assessed by physical examination, ultrasound imaging of the inguinal region, computed tomography of the abdomen and pelvis and a chest radiograph. Patients with clinically negative inguinal lymph nodes underwent modified lymph node dissection (mILND) both to diagnose and stage the disease. Complications occurring during a 30-day period after surgery were defined as early and thereafter as late complications. A total of 40 patients with a mean age of 52.27 ± 13.10 (range 25-73) years underwent mILND. Wedge biopsy from the primary lesion had revealed intermediate-risk disease in 22 (55%) patients and high-risk disease in 18 (45%) patients. Histopathological examination of the primary penile lesion revealed a pT1 lesion in 32 patients and a pT2 lesion in the remaining 8 patients. Fourteen (35%) of the 40 patients showed micro-metastases in the inguinal lymph nodes on frozen sections. The mean follow-up in these patients was 56.6 ± 18.09 months. Thirty-eight (95%) of the patients remained free of local or systemic recurrence within 5 years. Superficial lymph node dissection and, where facilities are available, DSLNB remain the standard of care in the management of patients with clinically groin-negative (cN0) intermediate- and high-risk groups. Modified inguinal lymph node dissection would be a safe and appropriate alternative to this in all centres that do not have access to newer modalities like DSLNB, video-endoscopic (VEIL) or robotic-assisted techniques. abstract_id: PUBMED:15856474 Lymphatic and vascular embolizations are independent predictive variables of inguinal lymph node involvement in patients with squamous cell carcinoma of the penis: Gruppo Uro-Oncologico del Nord Est (Northeast Uro-Oncological Group) Penile Cancer data base data. Background: The objective of the current study was to identify independent clinical and pathologic variables that were predictive of lymph node involvement in patients with squamous cell carcinoma of the penis in a multicenter series with the intent to select patients who were suitable to undergo immediate inguinal lymphadenectomy. Methods: Data were analyzed from 175 patients who underwent surgery for penile carcinoma in 11 urologic centers participating in the Gruppo Uro-Oncologico del Nord-Est (Northeast Uro-Oncological Group) Penile Cancer Data Base.
Pathologically positive lymph nodes were defined as the presence of histologically confirmed lymph node metastasis in patients who underwent either immediate or delayed inguinal and/or pelvic lymphadenectomy. Patients who had clinically positive lymph nodes with cytologically positive fine-needle aspiration results and who had not undergone lymphadenectomy were censored. Results: Overall, lymph-node involvement was observed in 71 of 175 patients (40.6%) included in the analyses. After analyzing the whole group of patients, the following variables were identified as independent predictors of pathologic lymph node metastasis: clinical lymph node status, pathologic stage of the primary tumor, venous and lymphatic embolizations, and histologic grade. In the subgroup of patients with clinically negative lymph nodes, tumor thickness, histologic grade, lymphatic and venous embolizations, infiltration of both corpus spongiosum and urethra, and pathologic stage of the primary tumor (according to the 1997 TNM classification system) were predictive of lymph node involvement on univariate analysis. The generated logistic regression model showed that venous and/or lymphatic embolizations and infiltration of the corpus spongiosum and/or urethra were independent predictors of pathologic lymph node metastasis in patients with clinically negative lymph nodes. Conclusions: Venous and/or lymphatic embolizations played relevant roles as predictors of pathologic lymph node involvement in patients with penile neoplasia and should be considered important parameters in determining which patients with clinically negative lymph nodes should undergo immediate lymphadenectomy. abstract_id: PUBMED:25833472 Prognostic factors in patients with penile carcinoma and inguinal lymph node metastasis. Objectives: To identify prognostic factors in patients with penile carcinoma and confirmed lymph node metastasis. Methods: Patients were selected from a historical series of patients with penile carcinoma. An experienced pathologist reviewed all cases. Information regarding the total number of lymph nodes excised, the number of positive lymph nodes and the presence of extranodal extension was used. Lymph node ratio was categorized as <0.15 and >0.15. Results: The 5-year recurrence-free survival and disease-specific survival rates were 55.3% and 64.1%, respectively. Lymphovascular invasion, lymph node ratio and pN status influenced survival rates in univariate analysis. Lymphovascular invasion and lymph node ratio remained as independent predictors of disease-specific survival and recurrence-free survival in the multivariate analysis. A risk stratification of death and tumor recurrence was observed when patients were grouped into three categories: absence of risk factors; the presence of one risk factor; and the presence of two or more risk factors. Conclusions: The presence of one or more of the following parameters is correlated with a significantly higher risk of death and tumor recurrence in patients with penile carcinoma and inguinal lymph node metastasis: extranodal extension, lymph node ratio >0.15 and lymphovascular invasion.
We aim to review the literature for studies investigating the oncological outcomes of patients with penile cancer (PC) undergoing bilateral pelvic lymph node dissection (PLND) in the presence of inguinal lymph node metastasis (LNM) who are at risk of harboring pelvic metastasis. A search of English language literature was performed using the PubMed-MEDLINE database up to 3 December 2020 to identify articles addressing bilateral PLND in PC patients. Eight articles investigating bilateral PLND met our inclusion criteria. Patients with pelvic LNM have a dismal prognosis and, therefore, PLND has an important role in both the staging and treatment of PC patients. Ipsilateral PLND is recommended in the presence of ≥2 positive inguinal nodes and/or extranodal extension (ENE). Significant survival improvements were observed with a higher pelvic lymph node yield, in patients with pN2 disease, and in men treated with bilateral PLND as opposed to ipsilateral PLND. Nevertheless, the role of bilateral PLND for unilateral inguinal LNM remains unclear. Although the EAU guidelines state that pelvic nodal disease does not occur without ipsilateral inguinal LNM, metastatic spread from one inguinal side to the contralateral pelvic side has been reported in a number of studies. Further studies are needed to clarify the disseminative pattern of LNM, in order to establish PLND templates according to patients' risk profiles and to investigate the benefit of performing bilateral PLND for unilateral inguinal disease. Answer: Regional lymph node involvement in patients with carcinoma of the penis can be predicted to some extent using various clinical and histological parameters, but no single parameter can identify all patients with nodal metastases. The incidence of metastases to the inguinal and iliac nodes was found to be 74% and 32% respectively in one study, with inguinal nodes larger than 2 cm in diameter and poor histological differentiation of the primary tumour being significant predictors of inguinal node involvement. The palpability of the iliac nodes, inguinal nodes larger than 2 cm, and the fixity of the inguinal nodes were important indicators of metastases to iliac nodes (PUBMED:8012778). Other studies have identified additional predictive factors. The tumor grade of the involved inguinal lymph nodes and the number of involved nodes were independent prognostic factors for pelvic lymph node involvement. Extracapsular growth, bilateral inguinal involvement, and pelvic lymph node involvement were independent prognostic factors for disease-specific survival (PUBMED:17296384). The size of the metastasis in the sentinel node was also a significant prognostic variable for additional lymph node involvement, with micrometastasis (2 mm or less) in the sentinel node not being associated with other involved lymph nodes (PUBMED:16753383). Lymphatic and vascular embolizations were identified as independent predictive variables of inguinal lymph node involvement in another study (PUBMED:15856474). Additionally, lymphovascular invasion, lymph node ratio, and pN status were found to influence survival rates and were independent predictors of disease-specific survival and recurrence-free survival (PUBMED:25833472). Overall, while certain factors can predict regional lymph node involvement to some degree, the variability and complexity of the disease make it challenging to have a definitive pre-operative prediction for all patients. 
Therefore, an intensive and continued follow-up is recommended for all patients with carcinoma of the penis to detect signs of nodal involvement, and in some cases, a policy of early bilateral regional node clearance may be preferable (PUBMED:8012778).
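The prognostic scheme in PUBMED:25833472 reduces to counting three binary risk factors: extranodal extension, lymphovascular invasion, and a lymph node ratio above 0.15. A minimal sketch of that stratification is given below; the helper names and example values are our own, not anything from the paper.

    def lymph_node_ratio(positive_nodes, total_nodes_excised):
        return positive_nodes / total_nodes_excised

    def count_risk_factors(lnr, extranodal_extension, lymphovascular_invasion):
        # PUBMED:25833472 groups patients by how many of these are present:
        # LNR > 0.15, extranodal extension, lymphovascular invasion
        return sum([lnr > 0.15, extranodal_extension, lymphovascular_invasion])

    lnr = lymph_node_ratio(positive_nodes=3, total_nodes_excised=15)  # 0.20
    n_factors = count_risk_factors(lnr, extranodal_extension=True,
                                   lymphovascular_invasion=False)
    print(f"LNR = {lnr:.2f}, risk factors present = {n_factors}")  # 2 factors

Grouping patients into zero, one, or two-or-more such factors reproduces the three-tier stratification of death and recurrence risk described in that abstract.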
Instruction: Is the transplant quality at the time of surgery adequate for matrix-guided autologous cartilage transplantation? Abstracts: abstract_id: PUBMED:23553069 Is the transplant quality at the time of surgery adequate for matrix-guided autologous cartilage transplantation? A pilot study. Background: Matrix-guided autologous chondrocyte transplantation (MACT) has been proposed as an option for treating large full-thickness cartilage defects. However, little is known about the chondrogenic potential of transplants for MACT at the time of implantation, although cell quality and chondrogenic differentiation of the implants are crucial for restoration of function after MACT. Questions/purposes: We therefore asked: (1) Do MACT implants allow deposition of extracellular cartilage matrix in an in vitro culture model? (2) Are these implants associated with improved knee function 1 year after MACT in large cartilage defects? Methods: We retrospectively reviewed all 125 patients with large localized cartilage defects (mean defect size 5 cm²) of the knee who were treated with MACT from 2005 to 2010. The mean age was 31 years (range, 16-53 years). Portions of the cell-matrix constructs (n = 50) that were not implanted in the cartilage defects were further cultured and tested for their potential to form articular cartilage. Knee function of all patients was analyzed preoperatively and at 3 months and 1 year postoperatively with the International Knee Documentation Committee (IKDC) score. Results: In vitro assessment of the cell-matrix implants showed chondrogenic differentiation with positive staining for glycosaminoglycans and collagen II in all cultures. Enzyme-linked immunosorbent assay showed an increase in collagen II production. We observed an improvement in median IKDC score from 41 to 67 points at last follow-up. Conclusions: Cartilage extracellular matrix deposition shows adequate implant quality for MACT at the time of implantation and justifies its use for the treatment of large cartilage defects. abstract_id: PUBMED:36168117 Experimental Study on the Biological Outcome of Auricular Cartilage and Costal Cartilage at Different Time Periods After Autologous Cartilage Rhinoplasty. Since autologous cartilage is a good transplant material, it is widely used in various fields of clinical medicine. In this study, we collected clinical specimens obtained at different time points after transplantation and used histologic staining to explore the post-transplantation changes in auricular cartilage and costal cartilage. A retrospective analysis was performed on patients who underwent primary autologous cartilage rhinoplasty and secondary rhinoplasty from 2017 to 2021, and the remaining autologous cartilage tissue after surgery was used for histologic testing. As time progressed after transplantation, the density of costal chondrocytes decreased first and then increased, while the secretion of type II collagen and extracellular matrix both decreased slightly. There was a clear boundary between the cartilage tissue and the surrounding connective tissue, and there was no ingrowth of blood vessels in the cartilage. Auricular cartilage showed a decrease in the integrity of the matrix edge. Moreover, local fibrosis was visible, and vascular ingrowth was observed at the edge of the cartilage. The content of type II collagen first increased and then decreased, and the cell secretion function was lower than that of normal chondrocytes.
The results of the study suggest that the histologic outcome of elastic cartilage after transplantation is significantly different from that of hyaline cartilage. Moreover, costal cartilage was more stable than auricular cartilage after transplantation. abstract_id: PUBMED:30993353 Arthroscopic matrix-associated, injectable autologous chondrocyte transplantation of the hip: significant improvement in patient-related outcome and good transplant quality in MRI assessment. Purpose: Acetabular chondral lesions are common in patients with FAI. For large full-thickness cartilage defects, arthroscopic matrix-associated autologous chondrocyte transplantation (MACT) using an injectable in situ crosslinking product is an option. The aim of the study was to evaluate clinical and MRI results 12 months after MACT of acetabular cartilage defects in FAI patients. Methods: We report data on 21 patients with a focal cartilage defect of the hip [2.97 ± 1.44 cm² (mean ± SD)] caused by FAI treated with an arthroscopically conducted MACT combined with FAI surgery. The results were assessed with patient-reported outcome measures (iHOT33, EQ-5D) pre- as well as post-operatively and by MRI using the MOCART scoring system at 6 and 12 months post-operatively. Results: The iHOT33 score improved from 52.9 ± 21.14 (mean ± SD) pre-operatively to 81.08 ± 22.04 (mean ± SD; p = 0.0012) 12 months post-operatively. The lower the pre-operative iHOT33 score and the larger the defect size, the greater the observed improvement compared to pre-operative scores at 12 months. Patients showed a significant improvement in EQ-5D-5L index value (p = 0.0015) and EQ-5D VAS (p = 0.0006). MRI analysis after 12 months revealed a complete integration of the transplant in 16 of 20 patients. Conclusions: Injectable MACT is a promising minimally invasive treatment option for full-thickness cartilage defects of the hip caused by FAI. A significant improvement in symptoms and function associated with an increase in quality of life was detected in patients treated with injectable MACT combined with FAI surgery. This is of considerable clinical relevance, since, in addition to the elimination of the mechanical cause, MACT allows the successful therapy of consequential cartilage damage. Level Of Evidence: Level 4, case series. abstract_id: PUBMED:37188223 Rate of Conversion to Matrix-Induced Autologous Chondrocyte Implantation After a Biopsy: A Multisurgeon Study. Background: Autologous chondrocyte implantation (ACI) and matrix-induced autologous chondrocyte implantation (MACI) are performed to treat focal chondral defects (FCDs); both are 2-step procedures involving a biopsy, followed by transplantation. There is little published research evaluating ACI/MACI in patients who undergo a biopsy alone. Purpose: To determine (1) the value of ACI/MACI cartilage biopsies and concomitant procedures in patients with FCDs of the knee and (2) the conversion rate to cartilage transplantation as well as the rate of reoperation. Study Design: Case series; Level of evidence, 4. Methods: A retrospective review was performed of 46 patients (63% female) who underwent a MACI (or ACI) biopsy between January 2013 and January 2018. Preoperative data, intraoperative data, and postoperative outcomes were assessed at a minimum of 2 years after the biopsy. The conversion rate from a biopsy to transplantation and the reoperation rate were calculated and analyzed.
Results: Among the 46 patients included, 17 (37.0%) underwent subsequent surgery, with only 12 undergoing cartilage restoration surgery, for an overall transplantation rate of 26.1%. Of these 12 patients, 9 underwent MACI/ACI, 2 underwent osteochondral allograft transplantation (OCA), and 1 underwent particulated juvenile articular cartilage implantation at 7.2 ± 7.5 months after the biopsy. The reoperation rate was 16.7% (1 patient after MACI/ACI and 1 patient after OCA) at 13.5 ± 2.3 months after transplantation. Conclusion: Arthroscopic surgery with debridement, chondroplasty, loose body removal, meniscectomy/meniscal repair, and other treatment approaches for knee compartment abnormalities at the time of a biopsy appeared to be sufficient in improving function and reducing pain in patients with knee FCDs. abstract_id: PUBMED:29110690 Autologous-cell-derived, tissue-engineered cartilage for repairing articular cartilage lesions in the knee: study protocol for a randomized controlled trial. Background: Spontaneous recovery from articular cartilage injury is difficult, and the ongoing progression of disease can eventually lead to osteoarthritis. Currently, there is no effective non-surgical treatment for articular cartilage injury. Arthroscopic debridement and microfracture surgery are performed for fibrocartilage repair. However, fibrocartilage is different from normal articular cartilage, and functional recovery is not satisfactory. Therefore, it is necessary to develop more effective techniques for articular cartilage repair. Progress in material science, cell biology, biomechanics, and bioreactor technology has allowed the development of biomimetic, tissue-engineered osteochondral composites that have shown potential for the repair of damaged cartilage. We prepared biomimetic, tissue-engineered cartilage scaffolds optimized for biochemical composition and structural characteristics. Based on the experience of our pre-clinical studies on animals, a human articular cartilage acellular matrix scaffold was prepared and is in clinical use. The combination of autologous chondrocytes and scaffolds has shown satisfactory results in repairing cartilage defects in preliminary experiments. Methods: This is a prospective randomized controlled trial. One hundred patients with full-thickness cartilage injury of the knee will be randomly divided into two groups to receive treatment with either tissue-engineered cartilage constructed using biomimetic cartilage extracellular-matrix-oriented scaffolds combined with autologous chondrocytes, or arthroscopic debridement and microfracture surgery. There will be five visiting time points: at baseline, then at 3, 6, 12, and 18 months postoperatively. The primary outcome will be therapeutic efficacy as assessed by the Lysholm score at 12 months postoperatively. The secondary outcomes will be the International Knee Documentation Committee score, Visual Analog Scale score, and cartilage injury and repair as assessed by magnetic resonance imaging as well as the incidence of postoperative adverse events. Discussion: This trial will attempt to verify the use of tissue-engineered cartilage constructed using autologous chondrocytes combined with allogeneic, acellular cartilage matrix for the repair of cartilage defects, thereby providing favorable evidence for its use in clinical practice. Trial Registration: ClinicalTrials.gov, identifier: NCT02770209. Registered on 11 May 2016.
abstract_id: PUBMED:28575639 Matrix Production Affects MRI Outcomes After Matrix-Associated Autologous Chondrocyte Transplantation in the Knee. Background: Matrix-associated autologous chondrocyte transplantation (MACT) has been an effective therapy for large, full-thickness cartilage lesions for years. However, little is known about how graft maturation is affected by characteristics of transplanted chondrocytes. Purpose: To investigate the influence of gene expression of chondrocytes at the time of transplantation on MRI outcomes up to 2 years after MACT. Study Design: Case series; Level of evidence, 4. Methods: This study included 25 patients with 27 symptomatic traumatic defects of articular cartilage, who had undergone MACT in the knee. Postoperative MRI examinations were conducted at 3, 6, 12, and 24 months after surgery. Biochemical graft maturation was assessed by measuring T2 relaxation time values of the transplant and healthy native cartilage areas. The MOCART (magnetic resonance observation of cartilage repair tissue) score was used to evaluate the morphological quality of regeneration tissue. Gene expression (collagen type I, collagen type II, aggrecan, versican, and interleukin-1β) was determined by real-time polymerase chain reaction (PCR) in transplant residuals at the time point of transplantation and was correlated with MRI outcomes using Spearman's rank correlation coefficient. A Friedman test with post hoc analysis (Wilcoxon signed rank test) conducted with a Bonferroni correction was applied to compare scores at different time points. Results: T2 relaxation time of regeneration tissue improved from a mean ± SD of 74.6 ± 20.1 milliseconds at 3 months to 47.9 ± 13.3 milliseconds at 24 months (P < .003). These values were similar to the T2 relaxation times of the native surrounding cartilage (50.9 ± 15 ms). The calculated T2 index (ratio of regeneration tissue to native cartilage) improved from 1.63 ± 0.76 at 3 months to 1.0 ± 0.4 at 24 months (P < .011). The MOCART score increased from 51.6 ± 15 points to 72.4 ± 12.2 points (P < .001). Improvement of the T2 index over time significantly correlated with aggrecan, COL1A1, COL2A1, and versican expression (rs = 0.9, P < .001; rs = 0.674, P < .012; rs = 0.553, P < .05; and rs = 0.575, P < .04, respectively). No correlation was found for IL-1β. Conclusion: These data demonstrate that matrix production in transplanted chondrocytes affects maturation of MACT grafts in MRI 2 years after surgery. abstract_id: PUBMED:19523325 Enhanced early tissue regeneration after matrix-assisted autologous mesenchymal stem cell transplantation in full thickness chondral defects in a minipig model. Adult mesenchymal stem cells (MSCs) are an attractive cell source for new treatment strategies in regenerative medicine. This study investigated the potential effect of matrix assisted MSC transplantation for articular cartilage regeneration in a large-animal model 8 weeks postoperatively. MSCs from bone marrow aspirates of eight Goettingen minipigs were isolated and expanded prior to surgery. Articular cartilage defects of 5.4 mm were created bilaterally in the medial patellar groove without penetrating the subchondral bone plate. Defects were either left empty (n = 4), covered with a collagen type I/III membrane (n = 6) or additionally treated with autologous MSC transplantation (2 × 10⁶; n = 6). After 8 weeks animals were euthanized and the defect area was assessed for its gross appearance.
Histomorphological analysis of the repair tissue included semiquantitative scoring (O'Driscoll score) and quantitative histomorphometric analysis for its glycosaminoglycan (GAG) and collagen type II content. All membranes were found to cover the defect area 8 weeks postoperatively. Median defect filling was 115.8% (membrane), 117.8% (empty), and 100.4% (MSC), respectively (not significant). Histomorphological scoring revealed significantly higher values in MSC-treated defects (median 16.5) when compared to membrane treatment (median 9.5) or empty defects (median 11.5; p = 0.015 and p = 0.038). Histomorphometric analysis showed larger GAG/collagen type II-positive areas in the MSC-treated group (median 24.6%/29.5% of regeneration tissue) compared to 13.6%/33.1% (empty defects) and 1.7%/6.2% (membrane group; p = 0.066). Cell distribution was more homogeneous in the MSC group than in the membrane-only group, where cells were found mainly near the subchondral zone. In conclusion, autologous matrix-assisted MSC transplantation significantly increased the histomorphological repair tissue quality during early articular cartilage defect repair and resulted in higher GAG/collagen type II-positive cross-sectional areas of the regenerated tissue. abstract_id: PUBMED:17407931 Navigation-guided transfer of cartilage defect geometry for arthroscopic autologous chondrocyte transplantation. Cartilage repair with autologous chondrocyte transplantation shows intriguing results. Chondrogenic transplants generally must be inserted into cartilage defects via arthrotomy. The following study showed that arthroscopically guided navigation could detect and precisely measure cartilage defects of different geometries. The new cartilage defect-managing module allowed for the precise transfer of navigated cartilage defect geometries for exact-size preparation of the tissue engineering scaffolds. Therefore, navigation can help accomplish chondrocyte transplantation arthroscopically. abstract_id: PUBMED:12940674 Autologous elastic cartilage for laryngoplasty: histologic evaluation in a rabbit model. A wide range of materials have been used to achieve medialization of the paralyzed vocal fold. Recently, medialization techniques using autologous cartilage have been described, but little information is available on cartilage integration and viability in this situation. In this prospective, experimental, controlled study, right vocal fold paralysis was surgically induced in 30 New Zealand rabbits. An autologous auricular cartilage transplant was inserted in the vocal fold in 15 animals. In the control group, only the laryngeal nerve was sectioned. Each group was divided into two subgroups with follow-ups of 6 weeks and 6 months, respectively. Histologic studies revealed no inflammatory reaction against the cartilage transplants. There were no differences in the transplant surfaces between the 6-week and 6-month groups. The results show tissue integration and a low level of initial transplant resorption that stabilizes with time. Autologous auricular cartilage appears to be an appropriate material for type I thyroplasty procedures because of the low absorption rate. abstract_id: PUBMED:29798637 Matrix-induced autologous chondrocyte implantation for treatment of femoral trochlea cartilage injury Objective: To determine the short-term effectiveness of matrix-induced autologous chondrocyte implantation (MACI) for femoral trochlea cartilage injury.
Methods: A retrospective analysis was performed on the clinical data of 10 patients with femoral trochlea cartilage injury treated with MACI between June 2012 and October 2014. There were 6 males and 4 females, aged 15 to 48 years (mean, 33 years). The left knee was involved in 3 cases and the right knee in 7 cases. Nine patients had a history of trauma, and 1 case suffered from osteochondritis dissecans. Combined injuries included meniscus injury in 1 case, anterior cruciate ligament injury in 3 cases, and lateral collateral ligament tear in 2 cases. The mean lesion depth was 2.80 mm (range, 2-7 mm), with a mean defect size of 84.85 mm² (range, 28.26-153.86 mm²). The mean time to definitive diagnosis was 14 days (range, 5 days to 3 months). By arthroscopic biopsy, 200-300 mg of healthy articular cartilage from the non-weight-bearing area of the knee femoral trochlea was collected as a source of seed cells, which were isolated and cultured to prepare the MACI membrane. The adhesion activity, growth rate, and mechanical properties of the chondrocytes on the Bio-gide collagen scaffold were evaluated. In addition, the stretch rate, tensile strength, and suture strength of the scaffold were tested. The MACI membrane was implanted after 2 weeks to 6 months. The visual analogue scale (VAS), Lysholm score, and Tegner movement level score preoperatively and at last follow-up were used to assess function. Results: The MACI membrane was successfully prepared, and the human chondrocytes adhered and grew well on the Bio-gide collagen scaffold. Mechanical testing showed that the MACI membrane had a stretch rate of 65.27%, a tensile strength of 26.81 MPa, and a suture strength of 6.49 N, indicating good mechanical properties. The MACI membrane was successfully implanted. The mean operation time was 58.5 minutes (range, 43-99 minutes), and the mean hospitalization time was 7 days (range, 6-15 days). All incisions healed well. Ten cases were followed up for 9 to 16 months (mean, 12 months). Four cases underwent iliac bone graft surgery. The mean healing time was 14 weeks (range, 12-16 weeks). No complications such as osteochondrolysis, knee pain, nerve or vascular injury, deep vein thrombosis, or knee adhesion occurred during follow-up. The VAS score, Lysholm score, and Tegner score at last follow-up were significantly improved when compared with preoperative scores (t=12.060, P=0.000; t=-9.200, P=0.000; t=-14.000, P=0.000). Conclusion: MACI for femoral trochlea cartilage injury has good short-term effectiveness, with less injury and fast functional recovery. Answer: Yes, the transplant quality at the time of surgery is considered adequate for matrix-guided autologous cartilage transplantation (MACT). This is supported by several studies that have evaluated the quality and outcomes of MACT transplants. One study retrospectively reviewed 125 patients with large localized cartilage defects of the knee treated with MACT and found that in vitro assessment of the cell-matrix implants showed chondrogenic differentiation with positive staining for glycosaminoglycans and collagen II in all cultures. An increase in collagen II production was also observed, and there was an improvement in median International Knee Documentation Committee (IKDC) score from 41 to 67 points at the last follow-up, indicating that cartilage extracellular matrix deposition shows adequate implant quality for MACT at the time of implantation (PUBMED:23553069).
Another study on arthroscopic matrix-associated, injectable autologous chondrocyte transplantation of the hip reported significant improvement in patient-reported outcome measures and good transplant quality on MRI assessment 12 months after MACT of acetabular cartilage defects in patients with femoroacetabular impingement (FAI). The study concluded that injectable MACT is a promising minimally invasive treatment option for full-thickness cartilage defects of the hip caused by FAI (PUBMED:30993353). Furthermore, a study investigating the influence of gene expression of chondrocytes at the time of transplantation on MRI outcomes up to 2 years after MACT found that matrix production in transplanted chondrocytes affects the maturation of MACT grafts on MRI 2 years after surgery. This suggests that the quality of the transplanted chondrocytes is an important factor for successful graft maturation (PUBMED:28575639). In summary, the available evidence suggests that the quality of transplants at the time of surgery is adequate for MACT, with positive outcomes in terms of chondrogenic differentiation, extracellular matrix deposition, and improvement in patient function and quality of life.
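The correlation analysis cited just above (PUBMED:28575639) relies on Spearman's rank correlation between gene expression at transplantation and MRI-based graft maturation. The sketch below is purely illustrative: the eight paired values are invented stand-ins, not study data, and only the method reflects the abstract.

```python
# Illustrative Spearman rank correlation between chondrocyte aggrecan
# expression at transplantation and improvement of the MRI T2 index.
# The paired values are invented placeholders, not data from the study.
import numpy as np
from scipy import stats

aggrecan_expr = np.array([1.2, 3.4, 2.1, 5.6, 4.3, 2.8, 6.1, 3.9])  # relative expression
t2_index_gain = np.array([0.2, 0.6, 0.3, 0.9, 0.7, 0.5, 1.0, 0.4])  # T2-index improvement

rho, p = stats.spearmanr(aggrecan_expr, t2_index_gain)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
# Prints rho ~= 0.93 for these made-up values; the abstract reports
# rho = 0.9 (P < .001) for aggrecan on the real data.
```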
Instruction: Does fluid hysteroscopy increase the stage or worsen the prognosis in patients with endometrial cancer? Abstracts: abstract_id: PUBMED:15262358 Does fluid hysteroscopy increase the stage or worsen the prognosis in patients with endometrial cancer? A randomized controlled trial. Objective: To determine whether fluid hysteroscopic directed biopsies in patients with endometrial cancer upstage the tumor and worsen the prognosis. Study Design: Between January 1996 and September 2001, a total of 62 consecutive patients with endometrial cancer, treated at our institution, were randomized 3:2 to have or not to have a fluid hysteroscopic biopsy just prior to surgery. A total of 38 patients underwent a hysteroscopy after the induction of anesthesia. All patients had pelvic washings performed, followed by a hysterectomy, bilateral salpingo-oophorectomy, and pelvic ± para-aortic lymph node dissections. Only stage I and II endometrioid-type tumors or stage IIIa secondary to positive pelvic washings were included in the study. Eight patients in the hysteroscopy group and four patients in the control group were excluded for various reasons. Patients received post-operative radiation therapy depending on the surgical-pathological risk factors. The median follow-up was 34 months. Fisher's exact test was performed to compare differences between the hysteroscopic (n = 30) and the control (n = 20) groups. Results: We found three patients (10%) with positive washings in the hysteroscopic group compared to one (5%) among the controls (P = 0.64), with a statistical power of <20%. If these differences persisted, we would need 588 patients in each arm to obtain a power of 80% and reach definitive conclusions. The odds ratio (OR) of performing a hysteroscopy and upstaging the tumor in this study was 2.1 (95% CI, 0.20-21.09). Prognostic variables were compared between both groups and no differences were observed. All patients but one (who died of intercurrent disease) were alive with no evidence of disease at the completion of the study. Conclusions: Fluid hysteroscopy and directed biopsies may have a small risk of upstaging early endometrial cancers, but do not seem to influence prognosis. abstract_id: PUBMED:34745963 The Oncology Safety of Diagnostic Hysteroscopy in Early-Stage Endometrial Cancer: A Systematic Review and Meta-Analysis. Background: Hysteroscopy is becoming a common method for the diagnosis of uterine disorders in developed countries. However, hysteroscopy might worsen the prognosis of endometrial cancer because it could cause cancer dissemination into the peritoneal cavity through the fallopian tubes. Objective: The aim of this systematic review and meta-analysis was to explore the oncological safety of hysteroscopy for early-stage endometrial cancer. Search Strategy: Eligible studies were obtained from PubMed, Embase, and the Cochrane Library up to September 22, 2020. Selection Criteria: Studies which compared the oncological safety of hysteroscopy with other methods were included. Data Collection And Analysis: A total of 3980 patients were included in this study, of whom 1357 patients had undergone hysteroscopy and 2623 had not.
Main Results: There was no significant association between hysteroscopy and worse prognosis in early-stage endometrial cancer [disease-free survival: log risk ratio (logRR) -0.22; 95% confidence interval (CI), -0.54 to 0.1; p=0.97; overall survival: logRR 0.03; 95% CI, -0.05 to 0.11; p=0.02; disease-specific survival: logRR 0.03; 95% CI, -0.03 to 0.10; p=0.00]. Conclusion: This study suggests that hysteroscopy is a safe diagnostic and treatment method, and has no significant effect on the prognosis of early-stage endometrial cancer. Systematic Review Registration: PROSPERO registration number: CRD42020193696. abstract_id: PUBMED:28334032 Does hysteroscopy worsen prognosis in women with type II endometrial carcinoma? Background: Prior studies evaluating the impact of hysteroscopy on outcomes in endometrial cancer have predominantly evaluated type I tumors. We sought to evaluate whether hysteroscopy worsens prognosis in type II endometrial cancer. Methods: A retrospective cohort analysis of 140 patients from two institutions with type II endometrial cancer was performed. Women who underwent either diagnostic hysteroscopy (HSC) or dilation and curettage (D&C) for cancer diagnosis from June 2001 until June 2010 were included. The clinical and pathologic characteristics, including peritoneal cytology results, were reviewed. The primary endpoint was disease-specific survival (DSS). The exposure of interest was hysteroscopy. Survival curves were projected using the Kaplan-Meier method and compared using the log-rank test. Results: There was no difference in age, histology, stage, depth of myometrial invasion, adnexal involvement, or nodal metastasis between HSC and D&C patients. Positive cytology was found in 16/54 (30%) patients following HSC and in 10/86 (12%) following D&C (p = 0.008). Fourteen patients with stage I and II disease had positive peritoneal cytology, with 11/40 (27.5%) patients in the HSC group and 3/59 (5%) patients in the D&C group (p = 0.002). Median DSS was clinically different for the HSC and D&C groups, but statistical significance was not reached (53 versus 63.5 months, p = 0.34). For stage I and II patients, 18/99 (18%) were dead of EC, with a median DSS of 60 months for HSC and 71 months for D&C (p = 0.82). Overall, 46 (33%) patients developed a recurrence, with 18/54 (33%) in the HSC group compared to 28/86 (32%) in the D&C group (p = 0.92). There was no difference in recurrence location between groups. Conclusions: Diagnostic hysteroscopy significantly increased the rate of positive peritoneal cytology at the time of surgical staging in this cohort of patients with type II EC. However, we were unable to detect a difference in prognosis as measured by DSS. abstract_id: PUBMED:37799024 Preoperative hysteroscopy shortened progression-free survival in advanced FIGO stage in endometrial cancer: Ten year analysis. Objective: To investigate the impact of preoperative hysteroscopy on progression-free survival (PFS) and disease-specific survival (DFS), and to explore the factors which contribute to poor clinical outcomes between hysteroscopy and dilation and curettage (D&C) in endometrial cancer (EC). Methods: A retrospective study was designed by collecting data from women diagnosed with EC through hysteroscopy or D&C from January 2010 to December 2019 in a tertiary hospital in China. A propensity score was used for 1:1 matching of advanced stage patients.
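The 1:1 propensity-score matching step just mentioned is a standard technique. As a loose illustration only, the following Python sketch (hypothetical covariates, simulated data, and greedy nearest-neighbor matching; the study's actual model and covariates are not specified here) shows the general shape of such a procedure:

```python
# Hypothetical sketch of 1:1 propensity-score matching of hysteroscopy
# patients to D&C controls. Covariates and data are simulated; a real
# analysis would use the study's clinical covariates plus balance checks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                    # e.g. age, BMI, grade (standardized)
treated = (rng.random(n) < 0.35).astype(int)   # 1 = hysteroscopy, 0 = D&C

# Propensity score: probability of exposure given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Greedy nearest-neighbor matching on the propensity score.
controls = list(np.where(treated == 0)[0])
pairs = []
for i in np.where(treated == 1)[0]:
    if not controls:
        break
    j = min(controls, key=lambda k: abs(ps[k] - ps[i]))  # closest unused control
    pairs.append((i, j))
    controls.remove(j)
print(f"matched {len(pairs)} hysteroscopy/D&C pairs")
```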
Univariate and multivariate analyses were conducted to determine whether hysteroscopy was a prognostic factor in EC and to identify factors associated with its impact on PFS and DFS in different subgroups. Results: Overall, 543 and 272 women who underwent D&C and hysteroscopy, respectively, were included. Compared to D&C, preoperative hysteroscopy was related to reduced PFS and DFS, with hazard ratios (HRs) of 1.904 and 3.905, respectively. Hysteroscopy contributed to an increased risk of positive wash cytology (48.27% vs 24.13%), recurrence (48.28% vs 20.69%), and shorter PFS after matching in FIGO stage I-IV EC, while there was no significant difference in positive ascites cytology (14.04% vs 13.45%), PFS, or DFS in FIGO stage I EC. Conclusions: Hysteroscopy was an independent predictive factor for poor prognosis in EC. Hysteroscopy appeared to be as safe a diagnostic method as D&C in FIGO stage I EC but was a risk factor for increased recurrence and reduced PFS in advanced stage disease. Its impact on DFS is uncertain. abstract_id: PUBMED:36698830 Comparison of prognosis of patients with endometrial cancer after hysteroscopy versus dilatation and curettage: A multicenter retrospective study. Introduction: Hysteroscopy is a useful procedure for diagnosing endometrial cancer. There is controversy regarding whether hysteroscopy affects the prognosis of endometrial cancer by promoting intraperitoneal dissemination of cancer cells. Our purpose was to confirm whether hysteroscopy could be a risk factor for the tumor stage, recurrence, and survival rate of endometrial cancer. Methods: This multicenter retrospective study included all consecutive patients who had endometrial carcinoma diagnosed preoperatively with hysteroscopy and directed endometrial biopsy (HSC, group A) or dilatation and curettage (D&C, group B) between February 2014 and December 2018 at the Fujian Provincial Hospital, China. We compared the demographic features, clinical characteristics, and prognosis between the two groups. Results: A total of 429 patients were included in the study (group A, n = 77; group B, n = 352). There was no significant difference between their baseline characteristics [including age, BMI, histological type, and International Federation of Gynecology and Obstetrics (FIGO) stage]. Comparison of several pathological factors that may affect prognosis showed no significant differences between the two groups in peritoneal cytology, depth of myometrial invasion, lymph node positivity, lymphovascular space invasion, or para-aortic lymph node dissection. Finally, no significant difference was found between the two groups in overall survival (OS) (P = 0.189) or recurrence-free survival (RFS) (P = 0.787). Conclusion: Under appropriate inflation pressure and distension medium, hysteroscopic examination and lesion biopsy are safe and have no adverse effect on prognosis compared with conventional curettage. abstract_id: PUBMED:17961159 Does hysteroscopy affect prognosis in apparent early-stage endometrial cancer? The objective of the study was to compare the outcome measures of patients with endometrial adenocarcinoma diagnosed by endometrial biopsy, uterine curettage, or hysteroscopy. Medical records of 392 women diagnosed with apparent early-stage endometrial adenocarcinoma were reviewed. Data concerning the mode of diagnosis, histologic type and grade, surgical stage, peritoneal washings and lymph node status, and patient outcome were retrieved.
During the study period, 99 (25.3%) cases were diagnosed by endometrial biopsy, 193 (49.2%) by uterine curettage, and 100 (25.5%) by hysteroscopy. There were 347 (88.5%) cases of endometrioid adenocarcinoma and 45 (11.5%) of poor histologic types, including serous papillary, clear cell, and small cell cancer. Three hundred and sixteen (80.6%) patients had stage I disease, 8 (2.0%) stage II, and 68 (17.4%) stage III. Peritoneal cytology was positive in only one case. Recurrent disease occurred in 6.9% of patients, of whom 50% had local recurrence and 50% had distant recurrence. Recurrent disease was found in 15.2% of patients diagnosed by endometrial biopsy, in 4.7% diagnosed by uterine curettage, and in 5% diagnosed by hysteroscopy. No statistically significant difference in the survival rate between the different diagnostic methods was found, although a higher recurrence rate was noted following endometrial biopsy. After a median follow-up time of 25 months for patients undergoing hysteroscopy, there was no difference in recurrence rates and/or overall survival compared to other diagnostic procedures, implying that hysteroscopy can be safely used in the diagnosis of endometrial cancer. abstract_id: PUBMED:38123196 Effects of preoperative diagnostic hysteroscopy on peritoneal cytology and prognosis in patients with non-endometrioid carcinoma Objective: To explore the effects of preoperative hysteroscopy-guided biopsy and segmental diagnostic curettage on the risk of abdominal dissemination and prognosis of non-endometrioid carcinoma. Methods: The clinical and pathological data of 97 patients who underwent surgical treatment and were pathologically confirmed as having non-endometrioid carcinoma (including serous carcinoma, clear cell carcinoma, mixed adenocarcinoma, and undifferentiated carcinoma) from October 2008 to December 2021 at Peking University People's Hospital were collected for retrospective analysis. According to the preoperative diagnostic method, they were divided into a hysteroscopic group (n=44) and a non-hysteroscopic group (n=53). The impact of hysteroscopy on peritoneal cytology and prognosis was analyzed. Results: (1) There were no statistical differences in age, body mass index, tumor size, pathological characteristics, or treatment methods between the hysteroscopic group and the non-hysteroscopic group (all P>0.05), but the proportion of stage Ⅰ-Ⅱ patients in the hysteroscopic group was significantly higher than that in the non-hysteroscopic group [68% (30/44) vs 47% (25/53); χ²=4.32, P=0.038]. (2) Among the 97 patients, 25 (26%) were cytologically positive for ascites. The hysteroscopic group had a significantly lower rate of positive peritoneal cytology than the non-hysteroscopic group [11% (5/44) vs 38% (20/53); χ²=8.74, P=0.003]. Stratification according to surgical-pathological stage showed that the positive rate of peritoneal cytology in the hysteroscopic group (3%, 1/30) was lower than that in the non-hysteroscopic group (12%, 3/25) among the 55 patients with stage Ⅰ-Ⅱ disease, and that in the hysteroscopic group (29%, 4/14) was also lower than that in the non-hysteroscopic group (61%, 17/28) among the 42 patients with stage Ⅲ-Ⅳ disease. These differences were not significant (all P>0.05). (3) The 5-year disease-free survival (DFS) rates of the hysteroscopic group and the non-hysteroscopic group were 72.7% and 60.4%, respectively, with no significant difference between the two groups (P=0.186).
After stratification by stage, the 5-year DFS rates in the hysteroscopic and non-hysteroscopic groups were 90.0% and 72.0%, respectively, for patients with stage Ⅰ-Ⅱ disease (P=0.051), and 35.7% and 50.0%, respectively, for patients with stage Ⅲ-Ⅳ disease (P=0.218); neither difference was statistically significant. The 5-year overall survival (OS) rates in the hysteroscopic and non-hysteroscopic groups were 86.4% and 81.1%, respectively, with no significant difference between the two groups (P=0.388). The 5-year OS rates were 93.3% and 96.0%, respectively, for patients with stage Ⅰ-Ⅱ disease (P=0.872), and 71.4% and 67.9%, respectively, for patients with stage Ⅲ-Ⅳ disease (P=0.999), with no statistically significant differences. Conclusions: Diagnostic hysteroscopy did not increase the rate of positive peritoneal cytology at the time of surgery in this cohort, and no significant correlation between preoperative hysteroscopy and poor prognosis of non-endometrioid carcinoma was observed. Therefore, preoperative hysteroscopy-guided biopsy and segmental diagnostic curettage in non-endometrioid carcinoma may be safe. abstract_id: PUBMED:15047235 Does diagnostic hysteroscopy in patients with stage I endometrial carcinoma cause positive peritoneal washings? Objectives: Previous studies have shown that positive peritoneal washings may adversely affect cancer survival rates and that hysteroscopy is associated with a higher risk of positive washings in patients with endometrial carcinoma. Our aim was to assess whether diagnostic hysteroscopy increases the risk of positive peritoneal washings in patients with endometrial cancer and affects the prognosis after surgery. Study Design: Retrospective cohort study. The medical records of 50 consecutive patients with endometrial carcinoma, diagnosed with hysteroscopy and tissue sampling and treated by abdominal hysterectomy with bilateral salpingo-oophorectomy and peritoneal washings, were reviewed. Results: Of the 43 patients with endometrial carcinoma FIGO stage I, none had positive peritoneal washings (95% CI: 0-8.2%). The mean interval between hysteroscopy and surgery was 33.5 days. The 5-year disease-specific survival rate was 91.8%, and the 5-year recurrence-free survival rate was 85.4%. Conclusions: Diagnostic hysteroscopy had no adverse effect on the incidence of positive peritoneal washings or on prognosis in stage I endometrial cancer patients. abstract_id: PUBMED:31689674 The oncological safety of hysteroscopy in the diagnosis of early-stage endometrial cancer: An Israel gynecologic oncology group study. Objective: To compare survival measures of women with early-stage endometrial cancer who underwent either hysteroscopy or a non-hysteroscopic procedure as the diagnostic procedure. Study Design: An Israel Gynecologic Oncology Group multicenter study of 1324 patients with stage I endometrial cancer who underwent surgery between 2002 and 2014. Patients were divided into two groups: hysteroscopy and non-hysteroscopy (curettage or office endometrial biopsy). Clinical, pathological, and survival measures were compared between the groups. Results: There were 355 patients in the hysteroscopy group and 969 patients in the non-hysteroscopy group. The median follow-up was 52 months (range 12-120 months). There were no differences between the groups in the 5-year recurrence-free survival (90.2% vs.
88.2%; p = 0.53), disease-specific survival (93.4% vs. 91.7%; p = 0.5), and overall survival (86.2% vs. 80.6%; p = 0.22). Conclusion: Our findings affirm that hysteroscopy does not compromise the survival of patients with early-stage endometrial cancer. abstract_id: PUBMED:9538182 Value of peritoneal cytology after hysteroscopy in surgical stage I adenocarcinoma of the endometrium. Nineteen clinical stage I adenocarcinomas of the uterus with favourable histological prognostic factors (low grade, no myometrial extension, and no pelvic node involvement) were diagnosed using pre-operative hysteroscopy. During the laparotomy, peritoneal cytology was performed systematically. The frequency of positive peritoneal washings was abnormally high (7 cases), with cytologic findings showing grouped cells in large clusters. However, these patients have not experienced peritoneal recurrences. The endoscopic procedures may have facilitated transtubal dissemination of malignant cells and are therefore questionable in endometrial carcinoma. Answer: The evidence from the provided abstracts suggests that fluid hysteroscopy does not significantly increase the stage or worsen the prognosis in patients with endometrial cancer. A randomized controlled trial found that fluid hysteroscopy and directed biopsies may have a small risk of upstaging early endometrial cancers but do not seem to influence prognosis (PUBMED:15262358). Similarly, a systematic review and meta-analysis concluded that hysteroscopy is a safe diagnostic method and has no significant effect on the prognosis of early-stage endometrial cancer (PUBMED:34745963). A retrospective cohort analysis indicated that diagnostic hysteroscopy significantly increased the rate of positive peritoneal cytology at the time of surgical staging in patients with type II endometrial cancer, but no difference in prognosis as measured by disease-specific survival was detected (PUBMED:28334032). Another retrospective study found that preoperative hysteroscopy was related to reduced progression-free survival and disease-specific survival in advanced stage endometrial cancer, suggesting that hysteroscopy might be a risk factor for increased recurrence and reduced progression-free survival in advanced stage disease (PUBMED:37799024). However, a multicenter retrospective study showed no significant difference in overall survival or recurrence-free survival between patients diagnosed with endometrial cancer via hysteroscopy versus dilatation and curettage (PUBMED:36698830). Another study found no adverse effect on the incidence of positive peritoneal washings or on prognosis in stage I endometrial cancer patients following diagnostic hysteroscopy (PUBMED:15047235). An Israel Gynecologic Oncology Group study also supported the oncological safety of hysteroscopy in the diagnosis of early-stage endometrial cancer, with no differences in survival measures between patients who underwent hysteroscopy and those who did not (PUBMED:31689674). In summary, the majority of the studies suggest that hysteroscopy does not significantly worsen the prognosis or upstage the disease in patients with early-stage endometrial cancer, although there may be a higher risk of positive peritoneal cytology. For advanced stage disease, there may be some evidence of reduced progression-free survival associated with preoperative hysteroscopy (PUBMED:37799024).
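To make the meta-analytic effect sizes quoted above more concrete, the pooled log risk ratios from PUBMED:34745963 can be back-transformed to the risk-ratio scale by exponentiation. The short sketch below simply applies exp() to the reported disease-free survival estimate and its confidence limits; it assumes the published numbers and adds nothing beyond the arithmetic.

```python
# Back-transforming the pooled disease-free survival effect reported in
# PUBMED:34745963: logRR -0.22 with 95% CI (-0.54, 0.10).
import math

log_rr, lo, hi = -0.22, -0.54, 0.10
print(f"RR = {math.exp(log_rr):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
# -> RR = 0.80 (95% CI 0.58 to 1.11); the interval crosses 1.0,
#    consistent with 'no significant association'.
```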
Instruction: Added value of blue dye injection in sentinel node biopsy of breast cancer patients: do all patients need blue dye? Abstracts: abstract_id: PUBMED:24486686 Added value of blue dye injection in sentinel node biopsy of breast cancer patients: do all patients need blue dye? Background: In the current study, we evaluated the incremental value of blue dye injection in sentinel node mapping of early breast cancer patients. We specifically considered the experience of the surgeons and the lymphoscintigraphy results in this regard. Methods: 605 patients with early-stage breast cancer were retrospectively evaluated in the study. Patients underwent sentinel node mapping using combined radiotracer and blue dye techniques. Lymphoscintigraphy was also performed for 590 patients. Blue dye, radioisotope, and overall success rates in identifying the sentinel lymph node were evaluated in different patient groups. The benefit of blue dye and radioisotope in identifying the sentinel lymph nodes was also evaluated. Results: Marginal benefits of both blue dye and isotope for overall sentinel node detection as well as for pathologically involved sentinel nodes were statistically higher for inexperienced surgeons and in patients with sentinel node visualization failure. In the patients with sentinel node visualization on lymphoscintigraphy, 6 sentinel nodes were detected by blue dye only. All these six nodes were harvested by inexperienced surgeons. On the other hand, 8 sentinel nodes were detected by dye only in the patients with sentinel node non-visualization. All these nodes were harvested by experienced surgeons. Conclusions: The use of blue dye should be reserved for inexperienced surgeons during their learning phase and for those patients in whom lymphoscintigraphy failed to show any uptake in the axilla. abstract_id: PUBMED:33814841 Efficacy of Periareolar Versus Peritumoral Injection of TC99-Labelled Sulphur Colloid and Methylene Blue Dye for Detection of Sentinel Lymph Node in Patients with Early Breast Cancer: a Comparative Study. Sentinel lymph node biopsy using the dual method of blue dye and radioactive isotope is practised as the standard of care at most centres. The combined use of radioactive colloid and blue dye injection is considered the gold standard for axillary sentinel lymph node biopsy in breast cancer, with a 97% accuracy rate. The aim of this study is to determine the optimal injection site for methylene blue dye and Tc99-labelled sulphur colloid for sentinel lymph node biopsy in early breast cancer. In both the periareolar and peritumoral groups of patients, the overall rate of identifying the sentinel lymph node (hot, blue, and hot-and-blue nodes) with the dual technique was comparable (100% and 96.36%; p value = 0.475). Also, in both groups of patients, the overall rate of pathologically positive sentinel lymph nodes on the final histopathological report was comparable (52.73% and 45.28%; p value = 0.561). Periareolar versus peritumoral injection of dual dye shows comparable success rates for axillary sentinel lymph node identification, and either can be considered a rapid and reliable method. However, the periareolar route is technically simple and especially advantageous in nonpalpable (T0) and upper-outer-quadrant lesions, mainly for the prevention of the shine-through phenomenon. abstract_id: PUBMED:36412286 Use of methylene blue dye for lymphatic basin mapping and sentinel lymph node biopsy in breast cancer patients in Enugu, Nigeria.
Background: Sentinel lymph node biopsy is the current standard of care for axillary staging and further treatment planning in patients with clinically axillary node-negative breast cancer. Sentinel node (SN) biopsy was designed to accurately stage the axilla and minimize the side effects of conventional axillary lymph node dissection without sacrificing oncologic outcomes. Sentinel lymph node biopsy is normally performed with a nuclear scan and patent blue violet or isosulfan blue. These are expensive and not commonly available in resource-poor regions such as West Africa. Methylene blue dye is a commonly used agent in a wide range of clinical diagnostic procedures and has been used by other investigators to perform this procedure. This study was designed to demonstrate the feasibility and effectiveness of SN biopsy in the management of axillary node-negative breast cancer in resource-limited populations using methylene blue dye. Aim: To determine the efficacy of methylene blue dye as a single tracer in lymphatic basin mapping and sentinel lymph node biopsy in patients with clinically axillary node-negative breast cancer. Methods: This was a prospective, case-controlled study involving 28 consecutively presenting female patients with clinically axillary node-negative breast cancer at the University of Nigeria Teaching Hospital, Enugu. Each of the patients had lymphatic basin mapping and sentinel lymph node biopsy with a subareolar subdermal injection of methylene blue dye. The SN(s) were then removed using the dye as the marker. Each patient then had a mastectomy or wide local excision as appropriately planned, and conventional level I and II axillary dissection was performed in the same sitting. The SNs and other axillary nodes were reviewed independently by our institution's pathologist. Each patient's axillary dissection specimen acted as her control for the study. Results: The SNs were identified in 24 (85.7%) patients. There was a demonstrable learning curve, with an improvement in identification rate in the latter half of the cases (92.9%) compared to the first half (78.6%). A range of 1-3 nodes and a mean of 1.78 nodes were obtained. A sensitivity of 90.9%, specificity of 79.6%, false-positive rate of 28.6%, false-negative rate of 9.1%, and accuracy of 95.8% were obtained. There was no incidence of allergic/hypersensitivity reactions. Conclusion: Sentinel lymph node biopsy with methylene blue dye can be applied with high accuracy within resource-limited environments. However, there is a definite short learning curve that must be overcome and the procedure validated before clinical application in decision-making. abstract_id: PUBMED:28345826 Value of Sentinel Lymph Node (SLN) Mapping and Biopsy using Combined Intracervical Radiotracers and Blue Dye Injections for Endometrial Cancer Background: Lymphadenectomy, as part of the initial surgical staging of patients with endometrial carcinoma, remains a controversial topic in gynecologic oncology. Sentinel lymph node (SLN) mapping has become a well-accepted procedure for melanomas and breast cancer; a number of investigators have begun to explore the utility and accuracy of this technique with regard to endometrial cancer. Aim: This study was conducted to evaluate SLN mapping of early-stage endometrial cancer with blue dye in conjunction with a radioactive tracer.
Subjects and methods: In this prospective cross-sectional study, patients with stage I and II endometrial cancer who were candidates for systematic lymph node dissection during surgery were enrolled; some underwent lymph node mapping and SLN biopsy using combined intracervical radiotracer and blue dye injections, and some using only an intracervical radiotracer. SLNs and other lymph nodes were sent for pathological assessment. Sensitivity, specificity, the positive predictive value, and the negative predictive value were calculated as predictive values for the radiotracer and blue dye. Results: Pre-operative lymph node mapping showed an SLN in 29 out of 30 patients. Intraoperatively, SLNs were harvested by gamma probe in 29/30 patients; in 13 out of 19 patients, SLNs were detected by blue dye. The median number of SLNs per patient was 3, and the total number of SLNs detected was 81. Four patients had positive pelvic lymph nodes. All of the positive nodes were SLNs. Using this technique (radiotracer and blue dye), an overall detection rate of 96.7%, an NPV of 100%, a sensitivity of 100%, and a specificity of 3.85% were achieved. Conclusion: Results of SLN research for endometrial cancer are promising and, with advances in SLN mapping, may make it feasible to avoid unnecessarily aggressive surgical procedures in the near future. abstract_id: PUBMED:25320623 Use of Fluorescence Imaging in Combination with Patent Blue Dye versus Patent Blue Dye Alone in Sentinel Lymph Node Biopsy in Breast Cancer. Purpose: Near-infrared fluorescence imaging with indocyanine green (ICG) has the potential to improve sentinel lymph node (SLN) mapping in breast cancer. In this clinical trial, we compared the potential value of ICG combined with blue dye with that of blue dye alone for detecting SLNs. Methods: Patients undergoing SLN biopsy (SLNB) between November 2010 and November 2013 were included. Up to December 2011, SLNs were detected by using patent blue (PB) alone, and since January 2012, by using PB in combination with ICG. The patients were divided into the following two groups: group A (ICG-PB; n=96) and group B (PB; n=73), and SLN detection parameters were compared between the groups. All patients underwent level I and II axillary dissections after SLNB. Results: In group A, the SLN detection rate was 96.9% (93/96), the accuracy of detection was 98.9% (92/93), and the false-negative rate (FNR) was 3.4% (1/29). In group B, the SLN detection rate was 84.9% (62/73), the accuracy of detection was 96.8% (60/62), and the FNR was 11.1% (2/18). The ICG-PB group showed significantly superior results compared to the PB group for SLN detection (p=0.005) and a greatly improved FNR. Conclusion: The combined fluorescence and blue dye-based tracer technique was superior to the use of blue dye alone for identifying SLNs and for predicting axillary lymph node status in patients with breast cancer; in addition, the combined technique had reduced false-negative results. abstract_id: PUBMED:25538772 Can methylene blue dye be used as an alternative to patent blue dye to find the sentinel lymph node in breast cancer surgery? Background: Sentinel lymph node biopsy (SLNB) is standard care to evaluate axillary involvement in early breast cancer. It has fewer complications than complete lymph node dissection; however, the use of blue dye in SLNB is controversial.
We have evaluated the detection rate and local complications associated with methylene blue dye (MBD) used in SLNB in early breast cancer patients and compared these results with those of patent blue dye (PBD). Materials And Methods: In a prospective cohort study, 312 patients with early breast cancer without axillary lymph node involvement were divided into two groups according to dye type. All of the patients received radiotracer and one type of blue dye. We filled out a checklist for the patients that contained demographic data, size of tumor, stage, detection of sentinel lymph node, and complications, and then analyzed the data. Results: Demographic and histopathologic characteristics were not significantly different between the groups. Mean (standard deviation [SD]) tumor size in all patients was 2.4 (0.8) cm. The detection rate in the MBD group was 77.5% with dye alone and 94.2% with dye and radioisotope; in the PBD group it was 80.1% and 92.9%, respectively (P > 0.05). We had blue discoloration of the skin in 23.7% in the PBD group and 14.1% in the MBD group (P < 0.05); local inflammation was detected in one patient in the PBD group and five in the MBD group (P < 0.05). Skin necrosis and systemic complications were not observed. Conclusion: Methylene blue has an acceptable detection rate and may be a good alternative in SLNB. Complications such as blue discoloration of the skin were also less frequent with MBD. abstract_id: PUBMED:15053296 Added value of blue dye in sentinel node biopsy for breast cancer. Sentinel node biopsy in breast cancer is a new, rapidly advancing, minimally invasive procedure which enables nodal staging of clinically node-negative breast cancer patients without performing complete axillary dissection. There are still controversies over the added value of blue dye when lymphoscintigraphy and a gamma probe are used. In our series, 91 consecutive patients with invasive breast carcinoma were operated on by a single surgeon using lymphoscintigraphy, a gamma probe, and blue dye. The sentinel nodes (SLN) were histologically examined by HE staining and immunohistochemistry. Lymphoscintigraphy was successful in 81 patients (89%). After the injection of blue dye, an SLN could be identified in all 91 patients. Metastases in the SLN were present in 35 patients. We retrieved 128 SLNs; of these, 93 were hot and blue, 19 only hot, and 16 only blue. The distribution of metastatic and nonmetastatic SLNs between these three labeling groups was not different (P = 0.9361). We could not show any difference in the metastatic involvement of SLNs in patients in whom lymphoscintigraphy could visualise the SLN preoperatively compared to those in whom it could not (P = 0.7315). False negativity calculated in our initial series of 36 patients was 0%. Our study showed the added value of blue dye in the detection of metastatic and nonmetastatic SLNs. abstract_id: PUBMED:32523266 A Randomized Trial Comparing the Efficacy of Methylene Blue Dye Alone Versus Combination of Methylene Blue Dye and Radioactive Sulfur Colloid in Sentinel Lymph Node Biopsy for Early Stage Breast Cancer Patients. Although sentinel lymph node biopsy (SLNB) has become a standard of care for management of the axilla in breast cancer patients, the technique of SLNB is still not well defined. Unlike radioactive sulfur colloid, which requires nuclear medicine facilities, methylene blue dye is readily available. The purpose of this study is to validate the use of methylene blue dye alone for SLNB in early breast cancer patients.
Sixty patients with early breast cancer were randomized to receive either methylene blue alone (group A, 30 patients) or a combination of methylene blue and radioactive colloid (group B, 30 patients) for detection of sentinel lymph nodes. Sentinel lymph node biopsy was done, followed by complete axillary dissection, in all patients. In both groups A and B, the sentinel node was identified in all 30 patients, giving an identification rate of 100%. In group A, the sentinel node was the only positive node in 1 patient, with a false-positive rate of 14.2%. The negative predictive value was 91.3%. The sensitivity of the procedure in predicting further axillary disease was 75%, with a specificity of 95.45%. The overall accuracy was 90%. In group B, the sentinel node was the only positive node in 2 cases, giving a false-positive rate of 28.7%. The negative predictive value was 95.65%. The sensitivity of the procedure in predicting further axillary disease was 83.33%, with a specificity of 91.67%. The overall accuracy was 90%. Although the false-negative rate was slightly higher with methylene blue alone than with the combination (8.6% vs 4.3%), the difference was statistically insignificant. Similarly, the sensitivity (75% vs 83.33%), specificity (95.45% vs 91.67%), and negative predictive value (91.3% vs 95.65%) were also comparable between groups A and B, respectively. Negative predictive value and false-negative rates are comparable whether blue dye is used alone or in combination with radioactive colloid. Sentinel lymph node biopsy with blue dye alone is reliable and can be put into clinical practice more widely, even where nuclear medicine facilities are not available in resource-constrained centers, so as to reduce the long-term morbidity of axillary dissection with similar oncological outcomes. abstract_id: PUBMED:33877488 Long-term results of fluorescence and indigo carmine blue dye-navigated sentinel lymph node biopsy. Background: Sentinel lymph node biopsy is widely applied for the management of clinically node-negative breast cancer, and a radioisotope combined with a blue dye is most often used as the tracer. Fluorescence of indocyanine green could also potentially be used as a tracer. This study aimed to demonstrate the long-term survival results of fluorescence-guided sentinel lymph node biopsy. Patients And Methods: Patients with clinically node-negative breast cancer who underwent surgery as initial treatment were included in this study. Both fluorescence of indocyanine green and indigo carmine blue dye were used as tracers. Axillary lymph node dissection was omitted unless metastasis was pathologically proven in sentinel nodes. Breast cancer recurrence and death were recorded, and prognostic factors were identified using disease-free survival and overall survival data. Results: A total of 565 patients were analyzed. There were 14 (2.5%) patients whose sentinel nodes could not be identified, yielding an identification rate of 97.5%. Axillary dissection was performed in 90 patients. Forty-three recurrences, including 6 ipsilateral axillary recurrences, and 13 deaths were observed during the median follow-up period of 83 months. Seven-year disease-free and overall survival were 92.4% and 97.3%, respectively. Multivariate analyses demonstrated that pre-menopausal status and invasive lobular carcinoma were significant unfavorable prognostic factors for disease-free survival. Half of the ipsilateral axillary recurrences occurred within 5 years after surgery, and these recurrences were correlated with inappropriate adjuvant therapy.
Conclusion: Fluorescence-guided sentinel lymph node biopsy demonstrated favorable prognostic results and could be an alternative to the radioisotope for clinically node-negative breast cancer. abstract_id: PUBMED:22930612 Sentinel lymph node biopsy in breast cancer: the node to recovery. The widespread adoption of sentinel lymph node biopsy to stage the axilla has led to a decrease in arm and shoulder morbidity. Sentinel lymph node biopsy is suitable for patients with clinically/radiologically node-negative invasive breast cancer and selected patients with DCIS (those with a clinical/radiological mass or extensive lesions requiring mastectomy). The combined isotope-blue dye injection technique gives the best results. We inject the isotope intradermally preoperatively and the blue dye subdermally into the peri-areolar tissue of the tumour quadrant after anesthetic induction. A lymphoscintiscan is not necessary but is useful during the learning phase. Sentinel node biopsy can be performed through a small transverse or vertical axillary incision (∼3 cm) appropriately placed to allow axillary lymph node clearance if needed. Answer: The added value of blue dye injection in sentinel node biopsy of breast cancer patients is a subject of discussion. According to the study by PUBMED:24486686, the marginal benefits of blue dye for overall sentinel node detection and pathologically involved sentinel nodes were statistically higher for inexperienced surgeons and in patients with sentinel node visualization failure on lymphoscintigraphy. The study suggests that the use of blue dye should be reserved for inexperienced surgeons during their learning phase and for those patients in whom lymphoscintigraphy failed to show any uptake in the axilla. Other studies have explored alternative methods and compared the efficacy of different techniques. For instance, PUBMED:33814841 found that periareolar versus peritumoral injection of dual dye shows comparable success rates for axillary sentinel lymph node identification, indicating that the injection site may not significantly impact the effectiveness of the procedure when using a combination of radioactive colloid and blue dye.
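The detection-rate comparison just cited (100% vs 96.36%, p = 0.475) can be sanity-checked with a simple 2×2 test. In the sketch below, the group counts (55/55 vs 53/55) are inferred from the reported percentages and are therefore an assumption rather than data taken from the paper, and Fisher's exact test is our choice of test, not necessarily the one the authors used.

```python
# 2x2 check of the periareolar vs peritumoral detection rates quoted
# from PUBMED:33814841. Counts are inferred from percentages (assumed).
from scipy.stats import fisher_exact

table = [[55, 0],   # periareolar: detected / not detected (assumed 55/55)
         [53, 2]]   # peritumoral: detected / not detected (assumed 53/55)
odds_ratio, p = fisher_exact(table)
print(f"Fisher exact p = {p:.3f}")
# Prints p ~= 0.50; the abstract's p = 0.475 plausibly reflects a
# different test or correction, but the conclusion of no significant
# difference is the same.
```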
PUBMED:32523266 conducted a randomized trial comparing methylene blue dye alone versus a combination of methylene blue dye and radioactive sulfur colloid, finding that the negative predictive value and false-negative rates were comparable between the two methods.
Instruction: Do iodinated contrast media increase serum potassium levels? Abstracts: abstract_id: PUBMED:8685334 Do iodinated contrast media increase serum potassium levels? Purpose: To test the hypothesis that iodinated contrast media may induce an elevation in serum potassium level. Materials And Methods: Protocol A: After intravenous infusion of contrast media into six rabbits, alterations of potassium ion concentrations were measured. Protocol B: Fresh rabbit blood was mixed in vitro with contrast media, and the fluctuations in potassium were monitored over a 30-minute period. Protocol C: Similar to protocol B, except that blood from humans with no reaction to contrast media was used. Results: For protocol A, blood potassium levels increased above baseline levels. The elevations were statistically significant (P &lt; .05). For protocol B, diatrizoate and ioxaglate caused a gradual increase in blood potassium levels, but iopamidol did not. In protocol C, all three contrast media caused statistically significant elevation in potassium levels. The release of potassium was statistically significant at 5 minutes (P &lt; .05 for diatrizoate and ioxaglate, and P &lt; .01 for iopamidol). The mean release rates (+/- standard deviation) by means of linear regression analysis were 0.0190 mmol/min +/- 0.0112 with diatrizoate, 0.0159 mmol/min +/- 0.0057 with iopamidol, and 0.0088 mmol/min +/- 0.0033 with ioxaglate. Conclusion: Iodinated contrast media increase blood potassium levels causing release of potassium into intravascular spaces. This potassium release may play some role in contrast medium-induced adverse reactions. abstract_id: PUBMED:24436602 Adverse reactions to iodinated contrast media. The use of iodinated contrast agents for angiography dates back to the 1920s. With over 75 million contrast-requiring procedures performed annually worldwide, it is important to understand the risk factors, pathogenesis, diagnosis, prevention, and treatment of adverse reactions caused by iodinated contrast media (ICM). As contrast media have evolved from ionic, high-osmolality to nonionic, low-osmolality formulations the risk of reactions has decreased over time; however, no pretreatment protocol has been shown to eliminate all repeat reactions. Clinical alertness and early recognition of adverse reactions is of paramount importance and key for appropriate management of these patients. In this article, we review the most recent literature regarding adverse reactions to ICM and provide an insight into the pathogenesis, clinical presentation, pretreatment, and management of contrast-related reactions. abstract_id: PUBMED:37229268 Pharmacist-led iodinated contrast media infusion risk assessment service. Background: With the increasing development of medical imaging, the use of iodinated contrast media has become more widespread. Adverse reactions caused by iodinated contrast media have drawn much attention. Despite this, there is still a lack of unified standards for the safe infusion process of iodinated contrast media in clinical practice both domestically and internationally. Objectives: Establishing a risk management service system to better predict the risks associated with iodinated contrast media infusion, reduce the incidence of adverse reactions and minimize patient harm. Method: A prospective interventional study was carried out from April 2021 to December 2021 at Nanjing Drum Tower Hospital in China. 
During this study, a service system was established to manage the risks associated with the infusion of iodinated contrast media. Personalized risk identification and assessment were performed by a pharmacist-led multidisciplinary team before iodinated contrast media infusion. Early warning, prevention, and adverse reaction management were performed according to different risk levels during and after infusion. Results: A multidisciplinary team led by pharmacists was established to evaluate the risks associated with infusion of iodinated contrast media. A total of 157 patients with risk factors related to iodinated contrast media were identified through screening, which prevented 22 serious adverse events and enhanced the quality of medical care. All participants expressed high satisfaction with the service. Conclusion: Through practical exploration, a pharmacist-led multidisciplinary team can provide advance warning and effectively limit the risks of adverse reactions caused by iodinated contrast media to a preventable and controllable level. This approach serves as a valuable reference for developing strategies and schemes to reduce the incidence of such reactions. Therefore, we encourage the implementation of this intervention in other areas of China. abstract_id: PUBMED:30310828 Contrast enhancement efficacy of iodinated contrast media: Effect of molecular structure on contrast enhancement. Purpose: To investigate the contrast enhancement in DSA images based on the X-ray absorption characteristics of iodinated contrast media. Methods: We have derived a new formula for predicting the pixel value ratio of two different contrast media and designate it the "Contrast Enhancement Ratio" (CER). In order to evaluate the accuracy of CER, we evaluated the relationship between CER and the pixel value ratio for all combinations of eleven iodinated contrast media. The non-ionic iodinated contrast media iopamidol, iomeprol, iopromide, ioversol, iohexol, and iodixanol were evaluated in this study. Each contrast medium was filled into the simulated blood vessel in our anthropomorphic phantom, and DSA images were obtained using an angiographic imaging system. To evaluate the contrast enhancement of each contrast medium, the mean pixel value was calculated from all pixel values in the vascular image. Results: CER agreed well with the pixel value ratio of two different contrast medium solutions and showed good accuracy. CER was also shown to have a good linear relation to the pixel value ratio when the iodine concentration was constant. This means that the molecular structure of the contrast media affects contrast enhancement efficacy. Furthermore, in an evaluation of the contrast enhancement of iodinated contrast media using the ratio of the weight factor (a key factor in CER), iodixanol, iopamidol, and iomeprol have the same contrast enhancement ability in DSA images, and iohexol shows the lowest ability. Conclusions: We have derived a new formula (CER) for predicting the pixel value ratio of two different contrast medium solutions and shown that CER agreed well with the pixel value ratio for blood vessels filled with the eleven contrast media. abstract_id: PUBMED:37712163 How I explore … a hypersensitivity to iodinated contrast media Immediate or delayed hypersensitivity reactions to iodinated contrast media (ICM) occur in 0.5-3% of patients receiving radiological examinations using the latest-generation ICM.
Although the majority of the reactions are mild to moderate, severe and potentially lethal reactions may occasionally be observed. In the last few years, there has been an increase in patients referred to the dermato-allergology department for the work-up of a hypersensitivity reaction to ICM. In this article, the allergological investigations and their indications are described for cases of suspected ICM hypersensitivity. abstract_id: PUBMED:34609674 Shelling the myth: allergies to Iodine containing substances and risk of reaction to Iodinated contrast media. Purpose: In excess of 100 million procedures using iodinated radio-contrast media are conducted each year. There is a common misunderstanding regarding the links between allergy to iodinated substances and the risk of allergic reaction to intravenous iodinated contrast agents. These perceived risks are managed via administration of corticosteroids or avoidance of iodinated contrast altogether. Methods: An extensive review of published literature on scientific databases and international guidelines was conducted in order to inform the research question. A questionnaire was formulated and distributed to hospital doctors in four tertiary centres. Within this questionnaire, hospital doctors were presented with six different scenarios of bona fide allergy to iodinated substances (e.g. shellfish) and asked to select the treatment response option which they deemed the most suitable from a choice of three (standard contrast scan / delayed scan with pre-medication / change to non-contrast scan). Results: Eighty-seven questionnaire responses were received. Contrast (standard protocol) was the most appropriate regimen in the setting of all the listed allergies. This was identified correctly by 76%, 69%, 44%, 32%, 18% and 14% for kiwi, fish, poly-food, shellfish, betadine and tincture of iodine allergies, respectively. Conclusions: There is a lack of understanding amongst local junior medical staff regarding administration of iodinated contrast media to patients with a history of allergy to iodinated substances. These misconceptions may promote the unnecessary use of pre-medication and the ordering of non-contrast scans where a contrast-enhanced scan is the gold standard. Findings from this study suggest that there is a need for future education efforts targeted during the basic specialty training stage. abstract_id: PUBMED:22560565 Safety essentials: acute reactions to iodinated contrast media. The objectives of this article are to review the diagnosis and management of acute nonrenal reactions to iodinated contrast media. We will begin by discussing the types of contrast media and their respective rates of reaction. The mechanism of contrast reactions, predisposing risk factors, and preventative measures will then be discussed. The remainder of the article will review the assessment of potential reactions, initial management, and treatment algorithms for specific reactions. abstract_id: PUBMED:31681546 Effect of Iodinated Contrast Media on Serum Electrolyte Concentrations in Patients Undergoing Routine Contrast Computed Tomography Scan Procedure. Background And Objectives: Being hypertonic solutions, iodinated contrast media such as iohexol can cause a shift of fluids and electrolytes between different compartments of the body, but there is an ongoing discrepancy in the data and current studies as to the effect of iodinated contrast media on serum electrolytes.
Hence, this hospital-based prospective clinical observational study was carried out with the objectives of evaluating the changes in serum electrolyte concentrations with intravenous iodinated contrast media administration in an adult population and correlating the changes in electrolyte concentrations, if any, with the demographic profile of the patients. Materials And Methods: We analyzed samples from 103 adult patients over a period of 2 months by collecting blood both before administration of contrast and 24 h after the contrast-enhanced computed tomography scan procedure. Serum concentrations of sodium, potassium, chloride, and ionized calcium were measured using an Eschweiler Combiline analyzer based on the ion-selective electrode principle. Results: The mean age of the study population was 40.11 ± 20.51 years. We found that the changes in serum sodium and chloride concentrations after administration of contrast media were significant (sodium: 136.29 ± 3.53 vs. 132.49 ± 6.36 mmol/L and chloride: 100.03 ± 0.70 vs. 97.53 ± 0.70 mmol/L). Sodium concentration showed a greater decrease in females compared to males after administration of iodinated contrast. The most probable reason for this decrease in serum electrolytes was hemodilution secondary to the high osmolality of the contrast. Conclusions: Attending physicians must be alert to the possibility of such changes in electrolytes after contrast administration and be prepared to treat any adversity if one occurs. abstract_id: PUBMED:32994773 Oral premedication in patients with a history suggesting hypersensitivity to iodinated contrast media. Introduction: Iodinated contrast media (ICM) are pharmaceuticals widely used in diagnostic procedures. Adverse effects associated with their administration are quite frequent and mostly mild. However, they raise concerns in patients and doctors in the context of their future use. Aim: To determine the efficacy of premedication before medical procedures with the use of iodinated contrast media in patients with a history suggesting a hypersensitivity reaction after their past use. Material And Methods: Out of 152 patients consulted due to adverse reactions after ICM (85 women and 67 men, aged 43-90), 101 were selected with a history suggesting a mild hypersensitivity reaction (urticaria, itching, skin redness, malaise etc.). All the patients had health problems requiring a procedure with ICM administration in the near future. The premedication was given with cetirizine (10 mg) and prednisone (20 mg or 50 mg, randomly assigned) 13, 7 and 1 h before the ICM administration. Presence of adverse events was compared between the subgroups with the χ2 test, and the efficacy of premedication with the Wilcoxon test. Results: Seventy-six patients underwent the radiologic procedure with premedication with antihistamine and a lower (40 patients) or higher dose (36 patients) of prednisone. Four of them reported a cutaneous hypersensitivity reaction (urticaria, itching, redness) and one reported dyspnoea. There was no statistically significant difference in relation to the premedication protocol (p = 0.1306). Conclusions: Premedication with cetirizine and prednisone before radiologic procedures proved to be efficient in patients with a history suggesting hypersensitivity to iodinated contrast media. There was no significant difference in efficacy related to the dose of prednisone (20 mg vs. 50 mg). abstract_id: PUBMED:28081790 Non-ionic iodinated contrast media related immediate reactions: A mechanism study of 27 patients.
The underlying mechanism of non-ionic iodinated contrast media-related immediate reactions was evaluated in this study. Patients presenting at least grade II immediate reactions after non-ionic iodinated contrast media injection were enrolled. Basophil activation was evaluated by flow cytometry. The plasma concentration of human terminal complement complex SC5b-9, as well as concentrations of serum chymase, tryptase, human mast cell carboxypeptidase A3, human prostaglandin D2, and total IgE, were measured by enzyme-linked immunosorbent assay. The basophil activation percentage was significantly higher in the study group than in the control group (17.94±21.06% vs 3.45±1.49%). The plasma concentration of human terminal complement complex SC5b-9 and concentrations of serum chymase, human mast cell carboxypeptidase A3, prostaglandin D2, tryptase, and total IgE were also significantly increased (236.99±318.21 vs 49.70±30.41 ng/mL, 0.41±0.49 vs 0.09±0.06 ng/mL, 1.17±0.67 vs 0.30±0.17 ng/mL, 203.52±137.27 vs 102.28±48.72 pg/mL, 3.81±0.22 vs 2.70±0.16 ng/mL, 102.00±51.84 vs 19.97±2.75 ng/mL, respectively). Both mast cells and basophils were activated in non-ionic iodinated contrast media-mediated immediate hypersensitivity, and mast cells in particular may be involved. Different mechanisms, including IgE-dependent, complement-dependent, and direct membrane effects, contributed to mast cell and basophil activation. In individual patients, a single mechanism or combined mechanisms may be involved, engaging mast cells, basophils, or both. Immediate reactions following non-ionic iodinated contrast media injection may be a mechanistically heterogeneous disease. Answer: Yes, iodinated contrast media can increase serum potassium levels. A study involving rabbits and human blood samples demonstrated that iodinated contrast media induce an elevation in serum potassium levels. In the study, after intravenous infusion of contrast media into rabbits, blood potassium levels increased above baseline levels, and these elevations were statistically significant (P < .05) (PUBMED:8685334). Additionally, when fresh rabbit blood was mixed in vitro with contrast media, diatrizoate and ioxaglate caused a gradual increase in blood potassium levels, although iopamidol did not. However, when human blood was used, all three contrast media caused statistically significant elevations in potassium levels, with the release of potassium being statistically significant at 5 minutes (P < .05 for diatrizoate and ioxaglate, and P < .01 for iopamidol) (PUBMED:8685334). The study concluded that iodinated contrast media increase blood potassium levels by causing the release of potassium into intravascular spaces, which may play a role in contrast medium-induced adverse reactions (PUBMED:8685334).
Instruction: Are burns photographs useful? Abstracts: abstract_id: PUBMED:16997481 Are burns photographs useful? Unlabelled: Routine photography of all patients admitted to the West of Scotland Regional Burns Unit was introduced in 2003. To date, few burns units have evaluated the usefulness of the photographs taken. Aim: To assess the usefulness of photographs of patients admitted to the burns unit to various members of the multidisciplinary team. Method: A questionnaire was completed by hospital staff involved in the management of burns patients over a 3-month period. A total of 43 questionnaires were completed. Results: The majority of questionnaires were completed by nursing staff (55%) followed by medical staff (23%); physiotherapy (5%); anaesthetists (7%); theatre staff (5%); students (2%); dietician (2%). About 98% of respondents agreed that photographs were useful overall, particularly for teaching purposes. About 9% disagreed that photographs were useful for assessment due to difficulty in assessing depth of burn. About 72% agreed that the photographs were useful for patient management and improved patient care. About 88% agreed that all patients should have photographs available in future. Advantages of photographs include: moving and handling of patients; patient positioning in theatre; and reviewing wound healing and complications. They are useful for assessing site, size and type of burn. Disadvantages include difficulty in assessing depth of burn, technical factors, and unavailability out of hours. Conclusions: Photographs of burns patients are useful overall to all members of the multidisciplinary team. abstract_id: PUBMED:37722902 Electrical burns in adults. Introduction: Electrical burns account for up to 10% of burns admissions worldwide and are a potentially serious mechanism of injury. The aim of this study is to describe the epidemiology, presentation, management and complications of electrical burn injuries in adults. Material And Methods: A retrospective study of all adult patients with electrical burns admitted to a tertiary burns centre. Results: Eighty-two cases were identified. The mean age was 40 ± 2 years, and 92.7% were males. The most common activities causing the injuries were work (39%) and do-it-yourself activities (32%). A low voltage (< 1,000 V) power source was involved in 78% of cases. The mean total body surface area involved was 3 ± 0.3%. The head, hands, and other upper extremities were the body parts most frequently injured. The mean hospital stay was 2 ± 1 days. Conclusion: Electrical injury was an infrequent but potentially serious cause of injury in adults. Minor injuries were successfully managed non-operatively. Electrical burns in adults are mainly low voltage burns contracted by manual workers resulting in a flesh burn. Although rare, the loss of digits, neurological sequelae, cardiac arrhythmias and renal failure remain serious complications in a significant number of cases. abstract_id: PUBMED:7146446 Epidemiology of burns in children. This chapter deals with the magnitude of the problem of burns in a developing country and describes the epidemiology. The common causes of burn injury have been illustrated by suitable photographs. The author feels that a large percentage of both domestic and industrial burns are preventable by simple modification of the agents or environment, and by public education.
He recommends that the public, social workers, professional persons, industrialists, and government and nongovernment agencies undertake a programme for prevention of such accidents. abstract_id: PUBMED:11964977 Early excision and grafting of face and neck burns in patients over 20 years. Although excision and grafting of burns has become common and standard, many surgeons have been reluctant to excise and graft face burns. In fact, we could find photographic results at 1 year after grafting of only eight patients in the English literature. We began excision and grafting of face burns in 1979 and presented our first 16 patients in 1986 in this journal. With encouragement from Janzekovic and Jackson, we continued and have now used essentially the same procedure for more than 20 years in approximately 100 patients and, from this large series, are able to present outcomes. From January of 1979 to May of 1999, we performed excision and grafting on 91 patients with deep face burns. Data were recorded and 35-mm photographs were obtained throughout the 20-year period. We reviewed that database and the slide files of these patients. We found 45 patients with complete photographic sets including 1-year follow-up. Since, in our opinion, there is no useful, objective measure of appearance, we decided to simply publish all 45 sets of complete photographs, permitting the reader to subjectively form an opinion of the outcome of this procedure. The results are all shown as "full" face burns and two "partial" face burns. We continue to believe that early excision and grafting is indicated for face burns that will not heal within 3 weeks and that the procedure yields results that permit the burn victims to return to society and minimizes the time off work or out of school. abstract_id: PUBMED:33922543 Accuracy and Reproducibility of Facial Measurements of Digital Photographs and Wrapped Cone Beam Computed Tomography (CBCT) Photographs. The study sought to assess whether the soft tissue facial profile measurements of direct Cone Beam Computed Tomography (CBCT) and wrapped CBCT images of non-standardized facial photographs are accurate compared to the standardized digital photographs. In this cross-sectional study, 60 patients with an age range of 18-30 years, who were indicated for CBCT, were enrolled. Two facial photographs were taken per patient: standardized and random (non-standardized). The non-standardized ones were wrapped with the CBCT images. The most used soft tissue facial profile landmarks/parameters (linear and angular) were measured on direct soft tissue three-dimensional (3D) images and on the photographs wrapped over the 3D-CBCT images, and then compared to the standardized photographs. The reliability analysis was performed using concordance correlation coefficients (CCC) and depicted graphically using Bland-Altman plots. Most of the linear and angular measurements showed high reliability (0.91 to 0.998). Nevertheless, four soft tissue measurements were unreliable; namely, posterior gonial angle (0.085 and 0.11 for wrapped and direct CBCT soft tissue, respectively), mandibular plane angle (0.006 and 0.0016 for wrapped and direct CBCT soft tissue, respectively), posterior facial height (0.63 and 0.62 for wrapped and direct CBCT soft tissue, respectively) and total soft tissue facial convexity (0.52 for both wrapped and direct CBCT soft tissue).
The soft tissue facial profile measurements from either the direct 3D-CBCT images or the wrapped CBCT images of non-standardized frontal photographs were accurate, and can be used to analyze most of the soft tissue facial profile measurements. abstract_id: PUBMED:10150820 Burns of the ocular adnexa. Ocular and periocular burns continue to be very complicated problems to manage clinically. Significant contributions have been made over the last year regarding the care and rehabilitation of the patient with this type of injury. N-acetylglucosaminidase activity in burned corneoscleral tissues may prove to be a good clinical indicator to assess the severity of ocular burns. Tenonplasty and split-thickness dermal grafts offer alternatives to conjunctival flaps when the latter is not practical or possible. Split-thickness dermal grafts are useful for treating persistent corneoscleral defects when an environment conducive to free grafts exists. Tenonplasty is a useful procedure in patients with persistent corneoscleral defects in whom no conjunctiva is available and a free graft is not advisable. Oversized soft contact lenses may prove useful in the care of the burn patient predisposed to forniceal foreshortening. An algorithm has been presented to aid in the management of the patient with periocular burns. High-density porous polyethylene can be successfully utilized in the reconstruction of the facial skeleton in burn patients. Useful guidelines have been proposed for the use of tissue expanders in the lower face and neck in pediatric burn patients. abstract_id: PUBMED:26092214 Reliable scar scoring system to assess photographs of burn patients. Background: Several scar-scoring scales exist to clinically monitor burn scar development and maturation. Although scoring scars through direct clinical examination is ideal, scars must sometimes be scored from photographs. No scar scale currently exists for the latter purpose. Materials And Methods: We modified a previously described scar scale (Yeong et al., J Burn Care Rehabil 1997) and tested the reliability of this new scale in assessing burn scars from photographs. The new scale consisted of three parameters as follows: scar height, surface appearance, and color mismatch. Each parameter was assigned a score of 1 (best) to 4 (worst), generating a total score of 3-12. Five physicians with burns training scored 120 representative photographs using the original and modified scales. Reliability was analyzed using coefficient of agreement, Cronbach alpha, intraclass correlation coefficient, variance, and coefficient of variation. Analysis of variance was performed using the Kruskal-Wallis test. Color mismatch and scar height scores were validated by analyzing actual height and color differences. Results: The intraclass correlation coefficient, the coefficient of agreement, and Cronbach alpha were higher for the modified scale than those of the original scale. The original scale produced more variance than the modified scale. Subanalysis demonstrated that, for all categories, the modified scale had greater correlation and reliability than the original scale. The correlation between color mismatch scores and actual color differences was 0.84 and between scar height scores and actual height was 0.81. Conclusions: The modified scar scale is a simple, reliable, and useful scale for evaluating photographs of burn patients. abstract_id: PUBMED:34903415 A simple mnemonic, B.U.R.N.S., for burns first aid.
Introduction: Burn injuries remain common worldwide, with thousands of cases requiring medical attention each year. Singapore sees an average of 220 burns admissions annually. Given the high number of burn cases in Singapore, the authors identified a need for more public awareness of first aid burn treatment. Acute management of burns can improve the eventual patient outcome. The authors devised a simple mnemonic that can be used in burns education for first aid treatment, intended to be taught to trained personnel, who will have the first contact with these burn patients. The aim of the study was to assess the viability of implementing this mnemonic, B.U.R.N.S., to facilitate first aid education for burns. Material And Methods: In this study, we presented this mnemonic as a poster to 30 full-time burn care medical professionals. Feedback was then obtained from this group of medical professionals and used to revise the mnemonic. The mnemonic was then taught to 400 medical professionals, who are predominantly involved in the pre-hospital management of burns. They were then asked to reiterate the mnemonic to test the ease of remembering it. Objective feedback was obtained with a 5-point scoring system. Results: The results indicated a significant improvement in burn first aid knowledge after the implementation of the mnemonic, from a score of 3.67 to 4.77. The content was deemed appropriate and easy to understand and recall, and participants were able to reiterate the content and would recommend this mnemonic for burns teaching in first aid. Conclusion: The study results suggest that this B.U.R.N.S. mnemonic and visual aid is simple and easy to apply, especially for uniformed personnel, as these individuals may have the first contact with burns victims, and it is important for them to render the appropriate burns first aid treatment. Overall, burns first aid awareness and education can be improved with the implementation of this mnemonic and poster. Moving forward, we aim to integrate the B.U.R.N.S. mnemonic into educational programmes in professional institutions involved in responding to burns (medical and nursing schools of Singapore), as well as in schools and public institutions, for educational purposes to raise public awareness. We also wish to be able to do so at an international level when courses are conducted. abstract_id: PUBMED:8854711 The treatment of hydrofluoric acid burns. Hydrofluoric acid is a colourless or almost colourless, fuming liquid having a highly caustic and corrosive effect on organic tissue. Hydrofluoric acid is used in many concentrations in industrial processes and laboratories because of its useful chemical properties. Hydrofluoric acid causes severe and painful burns to the skin and eyes. In this paper, the immediate symptoms, the typical treatments in the literature, first aid and therapy of hydrofluoric acid burns are reviewed. abstract_id: PUBMED:15774281 A computer assisted diagnosis tool for the classification of burns by depth of injury. In this paper, a computer assisted diagnosis (CAD) tool for the classification of burns into their depths is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths) by means of digital photographs. It is intended to be used as an aid to diagnosis in local medical centres, where there is a lack of specialists.
Another potential use of the system is as an educational tool. The system is based on the analysis of digital photographs. It extracts from those images colour and texture information, as these are the characteristics observed by physicians in order to form a diagnosis. Clinical effectiveness of the method was demonstrated on 35 clinical burn wound images, yielding an average classification success rate of 88% compared to expert classified images. Answer: Yes, burns photographs are considered useful. According to a study assessing the usefulness of photographs of patients admitted to the burns unit, 98% of respondents, which included various members of the multidisciplinary team such as nursing staff, medical staff, physiotherapists, anaesthetists, theatre staff, students, and dieticians, agreed that photographs were useful overall, particularly for teaching purposes. About 72% agreed that the photographs were useful for patient management and improved patient care, and 88% agreed that all patients should have photographs available in the future. The advantages of photographs include aiding in moving and handling of patients, patient positioning in theatre, reviewing wound healing and complications, and assessing the site, size, and type of burn. However, some disadvantages were noted, such as difficulty in assessing the depth of burn, technical factors, and unavailability out of hours (PUBMED:16997481). Furthermore, a computer-assisted diagnosis tool for the classification of burns by depth of injury has been proposed, which is based on the analysis of digital photographs. This tool extracts color and texture information from images, which are the characteristics observed by physicians to form a diagnosis. The clinical effectiveness of this method was demonstrated on 35 clinical burn wound images, yielding an average classification success rate of 88% compared to expert classified images (PUBMED:15774281). Additionally, a modified scar scale has been developed and tested for reliability in assessing burn scars from photographs. This scale is considered a simple, reliable, and useful tool for evaluating photographs of burn patients (PUBMED:26092214). In summary, photographs are a valuable resource in the management, teaching, and assessment of burns and burn scars, and they support the work of healthcare professionals in various capacities.
Instruction: Mammographic surveillance of breast cancer patients: should the mastectomy site be imaged? Abstracts: abstract_id: PUBMED:8273633 Mammographic surveillance of breast cancer patients: should the mastectomy site be imaged? Objective: This study evaluated the usefulness of routine mammography of the mastectomy site in women who have been treated for breast carcinoma. Materials And Methods: From a retrospective review of medical records and mammographic examinations of 827 patients who had mastectomies, the following information was obtained: demographic data, type of breast reconstruction (if any), length of follow-up after mastectomy, evidence of local recurrence, and results of physical examination of the breast and mammography immediately before diagnosis of local recurrence. Results: Local recurrences of breast cancer were seen in 39 (4.5%) of the 859 breasts in the study group. All patients had a physical examination of the breast and mammography within 3 months of the diagnosis of recurrent breast carcinoma. Mammography of the mastectomy site did not show any recurrences that were not suspected on the basis of physical examination. For two of 20 patients, spot-compression views of palpable abnormalities showed thickening of the surgical scar. In four patients with pain in the chest wall, no abnormalities were found on physical examination or mammograms, but bone scintigrams showed metastasis to the ribs. Conclusion: Our results indicate that mammographic imaging of the mastectomy site does not increase the detection of locally recurrent breast cancer. abstract_id: PUBMED:32643449 Mammographic density and breast cancer screening. Mammographic density, which is determined by the relative amounts of fibroglandular tissue and fat in the breast, varies between women. Mammographic density is associated with a range of factors, including age and body mass index. The description of mammographic density has been transformed by the digitalization of mammography, which has allowed automation of the assessment of mammographic density, rather than using visual inspection by a radiologist. High mammographic density is important because it is associated with reduced sensitivity for the detection of breast cancer at the time of mammographic screening. High mammographic density is also associated with an elevated risk of developing breast cancer. Mammographic density appears to be on the causal pathway for some breast cancer risk factors, but not others. Mammographic density needs to be considered in the context of a woman's background risk of breast cancer. There is intense debate about the use of supplementary imaging for women with high mammographic density. Should supplementary imaging be used in women with high mammographic density and a clear mammogram? If so, what modalities of imaging should be used and in which women? Trials are underway to address the risks and benefits of supplementary imaging. abstract_id: PUBMED:27864044 Localization Methods for Excisional Biopsy in Women With Nonpalpable Mammographic Abnormalities. Introduction: With the advent and proliferation of breast cancer screening programs, more women are being diagnosed with mammographic abnormalities that require tissue diagnosis. If image-guided biopsy is not possible or previous image-guided biopsies reveal pathologies that require more extensive surgery, guided excisional biopsy/lumpectomy may be necessary.
Methods: Fifteen women were enrolled in the study of the feasibility of off-site or day-before wire-localization excisional biopsy of the breast with mammographic abnormalities. Five patients had their localization wire placed the day before, whereas 10 patients had their localization the same day with surgery in a distant procedure room under straight local anesthesia. Results: Two of the 15 patients had an eventual cancer diagnosis from their wire-localized excisional breast biopsy. All patients had their mammographic abnormality removed with the previously placed core biopsy clip, and there was 100% radiologic/clinical correlation. All patients' wounds healed primarily without any surgical site infections. Conclusion: The protocol answers 2 questions concerning the wire-localized excisional breast biopsy technique. The series shows that the wire-localization technique can be performed the night before or in a location away from the procedure room that would allow better synchronization with surgical schedules or allow the procedure to take place in low-cost settings away from the expense of the hospital operating room. abstract_id: PUBMED:30977028 The association between mammographic density and breast cancer risk in Western Australian Aboriginal women. Purpose: Mammographic density is an established breast cancer risk factor within many ethnically different populations. The distribution of mammographic density has been shown to be significantly lower in Western Australian Aboriginal women compared to age- and screening location-matched non-Aboriginal women. Whether mammographic density is a predictor of breast cancer risk in Aboriginal women is unknown. Methods: We measured mammographic density from 103 Aboriginal breast cancer cases and 327 Aboriginal controls, 341 non-Aboriginal cases, and 333 non-Aboriginal controls selected from the BreastScreen Western Australia database using the Cumulus software program. Logistic regression was used to examine the associations of percentage dense area and absolute dense area with breast cancer risk for Aboriginal and non-Aboriginal women separately, adjusting for covariates. Results: Both percentage density and absolute dense area were strongly predictive of risk in Aboriginal women with odds per adjusted standard deviation (OPERAS) of 1.36 (95% CI 1.09, 1.69) and 1.36 (95% CI 1.08, 1.71), respectively. For non-Aboriginal women, the OPERAS were 1.22 (95% CI 1.03, 1.46) and 1.26 (95% CI 1.05, 1.50), respectively. Conclusions: Whilst mean mammographic density for Aboriginal women is lower than non-Aboriginal women, density measures are still higher in Aboriginal women with breast cancer compared to Aboriginal women without breast cancer. Thus, mammographic density strongly predicts breast cancer risk in Aboriginal women. Future efforts to predict breast cancer risk using mammographic density or standardize risk-associated mammographic density measures should take into account Aboriginal status when applicable. abstract_id: PUBMED:33312342 Mammographic density: intersection of advocacy, science, and clinical practice. Purpose: Here we aim to review the association between mammographic density, collagen structure and breast cancer risk. Findings: While mammographic density is a strong predictor of breast cancer risk in populations, studies by Boyd show that mammographic density does not predict breast cancer risk in individuals. 
Mammographic density is affected by age, parity, menopausal status, race/ethnicity, and body mass index (BMI). New studies that normalize mammographic density to BMI may provide a more accurate way to compare mammographic density in women of diverse race and ethnicity. Preclinical and tissue-based studies have investigated the role of collagen composition and structure in predicting breast cancer risk. There is emerging evidence that collagen structure may activate signaling pathways associated with aggressive breast cancer biology. Summary: Measurement of film mammographic density does not adequately capture the complex signaling that occurs in women with at-risk collagen. New ways to measure at-risk collagen can potentially provide a more accurate view of risk. abstract_id: PUBMED:25298869 Comparison of Danish dichotomous and BI-RADS classifications of mammographic density. Background: In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified either as fatty or mixed/dense. This dichotomous mammographic density classification system is unique internationally, and has not been validated before. Purpose: To compare the Danish dichotomous mammographic density classification system from 1991 to 2001 with the density BI-RADS classifications, in an attempt to validate the Danish classification system. Material And Methods: The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001, which tested false positive, and which were in 2012 re-assessed and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous mammographic classification as fatty or mixed/dense and the four-level BI-RADS classification by the linear weighted Kappa statistic. Results: Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as mixed/dense mammographic density, according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having scattered fibroglandular (BI-RADS code 2), 57 (47.5%) as having heterogeneously dense (BI-RADS 3), and five (4.2%) as having extremely dense (BI-RADS code 4) mammographic density. The inter-rater variability assessed by the weighted kappa statistic showed substantial agreement (0.75). Conclusion: The dichotomous mammographic density classification system utilized in the early years of Copenhagen's mammographic screening program (1991-2001) agreed well with the BI-RADS density classification system. abstract_id: PUBMED:26158085 Mammographic density measurements are not affected by mammography system. Mammographic density (MD) is a significant risk factor for breast cancer and has been shown to reduce the sensitivity of mammography screening. Knowledge of a woman's density can be used to predict her risk of developing breast cancer and personalize her imaging pathway. However, measurement of breast density has proven to be troublesome with wide variations in density recorded using radiologists' visual Breast Imaging Reporting and Data System (BIRADS). Several automated methods for assessing breast density have been proposed, each with their own source of measurement error. The use of differing mammographic imaging systems further complicates MD measurement, especially for the same women imaged over time.
The purpose of this study was to investigate whether having a mammogram on differing manufacturers' equipment affects a woman's MD measurement. Raw mammographic images were acquired on two mammography imaging systems (General Electric and Hologic) one year apart and processed using VolparaDensity™ to obtain the Volpara Density Grade (VDG) and average volumetric breast density percentage (AvBD%). Visual BIRADS scores were also obtained from 20 expert readers. BIRADS scores for both systems showed strong positive correlation ([Formula: see text]; [Formula: see text]), while the VDG ([Formula: see text]; [Formula: see text]) and AvBD% ([Formula: see text]; [Formula: see text]) showed stronger positive correlations. Substantial agreement was shown between the systems for BIRADS ([Formula: see text]; [Formula: see text]); however, the systems demonstrated an almost perfect agreement for VDG ([Formula: see text]; [Formula: see text]). abstract_id: PUBMED:22692036 Benefit of semiannual ipsilateral mammographic surveillance following breast conservation therapy. Purpose: To compare cancer recurrence outcomes on the basis of compliant semiannual versus noncompliant annual ipsilateral mammographic surveillance following breast conservation therapy (BCT). Materials And Methods: A HIPAA-compliant retrospective review was performed of post-BCT examinations from 1997 through 2008 by using a deidentified database. The Committee on Human Research did not require institutional review board approval for this study, which was considered quality assurance. Groups were classified according to compliance with institutional post-BCT protocol, which recommends semiannual mammographic examinations of the ipsilateral breast for 5 years. A compliant semiannual examination was defined as an examination with an interval of 0-9 months, although no examination had intervals less than 3 months. A noncompliant annual examination was defined as an examination with an interval of 9-18 months. Cancer recurrence outcomes were compared on the basis of the last examination interval leading to diagnosis. Results: Initially, a total of 10 750 post-BCT examinations among 2329 asymptomatic patients were identified. Excluding initial mammographic follow-up, there were 8234 examinations. Of these, 7169 examinations were semiannual with 94 recurrences detected and 1065 examinations were annual with 15 recurrences detected. There were no differences in demographic risk factors or biopsy rates. Recurrences identified at semiannual intervals were significantly less advanced than those identified at annual intervals (stage I vs stage II, P = .04; stage 0 + stage I vs stage II, P = .03). Nonsignificant findings associated with semiannual versus annual intervals included smaller tumor size (mean, 11.7 vs 15.3 mm; P = .15) and node negativity (98% vs 91%, P = .28). Conclusion: Results suggest that a semiannual interval is preferable for ipsilateral mammographic surveillance, allowing detection of a significantly higher proportion of cancer recurrences at an earlier stage than noncompliant annual surveillance. abstract_id: PUBMED:27864652 Differences in mammographic density between Asian and Caucasian populations: a comparative analysis. Purpose: Mammographic density is a measurable and modifiable biomarker that is strongly and independently associated with breast cancer risk.
Paradoxically, although Asian women have a lower risk of breast cancer, studies of minority Asian women in predominantly Caucasian populations have found that Asian women have higher percent density. In this cross-sectional study, we compared the distribution of mammographic density for a matched cohort of Asian women from Malaysia and Caucasian women from Sweden, and determined if variations in mammographic density could be attributed to population differences in breast cancer risk factors. Methods: Volumetric mammographic density was compared for 1501 Malaysian and 4501 Swedish healthy women, matched on age and body mass index. We used multivariable log-linear regression to determine the risk factors associated with mammographic density and mediation analysis to identify factors that account for differences in mammographic density between the two cohorts. Results: Compared to Caucasian women, percent density was 2.0% higher among Asian women (p < 0.001), and dense volume was 5.7 cm3 higher among pre-menopausal Asian women (p < 0.001). Dense volume was 3.0 cm3 lower among post-menopausal Asian women (p = 0.009) compared to post-menopausal Caucasian women, and this difference was attributed to population differences in height, weight, and parity (p < 0.001). Conclusions: Our analysis suggests that among post-menopausal women, population differences in mammographic density and risk of breast cancer may be accounted for by height, weight, and parity. Given that pre-menopausal Asian and Caucasian women have similar population risk of breast cancer but different dense volume, development of more appropriate biomarkers of risk in pre-menopausal women is required. abstract_id: PUBMED:35160252 Immune Regulation of Mammary Fibroblasts and the Impact of Mammographic Density. Mammographic density is associated with a 4-6-fold increase in breast cancer risk independent of age and BMI. High mammographic density is characterized by breast tissue with high proportions of stroma comprised of fibroblasts, collagen, and immune cells. This study sought to investigate whether stromal fibroblasts from high mammographic density breast tissue contribute to increased extracellular matrix deposition and pro-tumorigenic signaling. Mammary fibroblasts were isolated from women with high and low mammographic density and exposed to the immune factors myeloperoxidase (MPO), eosinophil peroxidase (EPO), transforming growth factor beta 1 (TGFB1) and tumour necrosis factor alpha (TNFA) for 72 h and profiled for expression of cancer-associated fibroblast and extracellular matrix regulation markers. No differences in gene expression profiles or collagen production were observed between fibroblasts with high or low mammographic density, and they did not have a differential response to immune mediators. MPO and EPO significantly increased the production of collagen 1. TGFB and TNFA induced variable changes in gene expression. Fibroblasts cultured in vitro from women with high mammographic density do not appear to be inherently different to those from women with low mammographic density. The function of fibroblasts in mammographic density-associated breast cancer risk is likely to be regulated by immune signals from surrounding cells in the microenvironment. Answer: The study reported in abstract PUBMED:8273633 evaluated the usefulness of routine mammography of the mastectomy site in women who have been treated for breast carcinoma.
The results indicated that mammographic imaging of the mastectomy site did not increase the detection of locally recurrent breast cancer. All local recurrences were suspected based on physical examination, and mammography did not reveal any additional recurrences. Therefore, the conclusion of this study suggests that mammographic surveillance of the mastectomy site may not be necessary for the detection of local recurrences in breast cancer patients.
Instruction: Idiopathic Brainstem Neuronal Chromatolysis (IBNC): a novel prion protein related disorder of cattle? Abstracts: abstract_id: PUBMED:18826563 Idiopathic Brainstem Neuronal Chromatolysis (IBNC): a novel prion protein related disorder of cattle? Background: The epidemic form of Bovine Spongiform Encephalopathy (BSE) is generally considered to have been caused by a single prion strain but at least two strain variants of cattle prion disorders have recently been recognized. An additional neurodegenerative condition, idiopathic brainstem neuronal chromatolysis and hippocampal sclerosis (IBNC), a rare neurological disease of adult cattle, was also recognised in a sub-set of cattle submitted under the BSE Orders in which lesions of BSE were absent. Between the years of 1988 and 1991 IBNC occurred in Scotland with an incidence of 7 cases per 100,000 beef suckler cows over the age of 6 years. Results: When the brains of 15 IBNC cases were each tested by immunohistochemistry, all showed abnormal labelling for prion protein (PrP). Immunohistological labelling for PrP was also present in the retina of a single case available for examination. The pattern of PrP labelling in brain is distinct from that seen in other ruminant prion diseases and is absent from brains with other inflammatory conditions and from normal control brains. Brains of IBNC cattle do not reveal abnormal PrP isoforms when tested by the commercial BioRad or Idexx test kits and do not reveal PrPres when tested by Western blotting using stringent proteinase digestion methods. However, some weakly protease resistant isoforms of PrP may be detected when tissues are examined using mild proteinase digestion techniques. Conclusion: The study shows that a distinctive neurological disorder of cattle, which has some clinical similarities to BSE, is associated with abnormal PrP labelling in brain but the pathology and biochemistry of IBNC are distinct from BSE. The study is important either because it raises the possibility of a significant increase in the scope of prion disease or because it demonstrates that widespread and consistent PrP alterations may not be confined to prion diseases. Further studies, including transmission experiments, are needed to establish whether IBNC is a condition in which prion protein is abnormally regulated or it is yet a further example of an infectious cattle prion disease. abstract_id: PUBMED:29666222 First report of prion-related protein gene (PRNT) polymorphisms in cattle. Prion diseases are caused by structural changes in normal prion protein (PrPC). The prion gene family includes four members: prion protein (PRNP), prion-like protein (PRND), shadow of PRNP (SPRN) and prion-related protein (PRNT). Genetic association studies of prion diseases and the other genes in the prion gene family, except for PRNT, have been performed in cattle. Our previous studies indicated that the distribution of PRNP promoter polymorphisms related with bovine spongiform encephalopathy susceptibility is significantly different in Hanwoo (Korean native cattle) and Holstein cattle. However, PRNT polymorphisms have not been reported thus far in cattle. Hence, we examined the PRNT single nucleotide polymorphisms (SNPs) in 315 Hanwoo and 140 Holstein cattle. We found a total of two SNPs, PRNT c.-87C>T and PRNT c.-37G>C, in the 5' untranslated region of exon 2.
The c.-87C>T and c.-37G>C genotype (P<0.0001) and allele (P<0.0001) frequencies exhibited significant differences in the distribution between Hanwoo and Holstein cattle. In addition, the c.-37G>C polymorphism was not found in Hanwoo. Interestingly, we did not find any polymorphisms in the ORF of bovine PRNT, which is in contrast with the highly polymorphic ovine PRNT ORF region. This is the first genetic study of the PRNT gene in cattle. abstract_id: PUBMED:29675959 Presumptive BSE cases with an aberrant prion protein phenotype in Switzerland, 2011: Lack of prion disease in experimentally inoculated cattle and bovine prion protein transgenic mice. Bovine spongiform encephalopathy (BSE) is caused by different prion strains that are discriminated by the molecular characteristics of the pathological prion protein. In 2011, Switzerland reported two presumptive cases of BSE in cattle with a prion protein phenotype different from previously described strains, and it was unclear whether these findings were related to a transmissible disease and had implications for animal and public health. In this study, brain tissues of these cases were inoculated into transgenic mice expressing the bovine prion protein (BoPrP-Tg110) and into cattle. Clinical and pathological investigations as well as molecular testing did not provide evidence for the presence of BSE in the Swiss cases after two passages in BoPrP-Tg110 mice and a challenge period of 3.5 years in cattle. This lack of disease transmission suggests that the Swiss 2011 cases were not affected by a prion disease and were unrelated to the feed-borne BSE epidemic. abstract_id: PUBMED:28244340 Lack of germline mutation at codon 211 of the prion protein gene (PRNP) in Korean native cattle - Short communication. Bovine prion diseases are composed of two types of bovine spongiform encephalopathy (BSE), classical BSE and atypical BSE. Recent studies have identified one case of atypical BSE with an E211K mutation. E211K is homologous to the human E200K mutation, which is related to familial Creutzfeldt-Jakob disease (CJD), one of the familial forms of human prion diseases.
To date, familial forms of prion diseases have not been reported in non-human animals. Because the familial forms of human prion diseases account for more than 10% of all human prion disease cases, the detection of the E211K mutation in healthy cattle is very important for verifying the role of this mutation as a familial form of BSE. To detect putative mutations related to familial BSE, specifically E211K in Korean native cattle (Hanwoo) and Korean dairy cattle (Holstein), we performed direct sequencing targeting codon 211 and the adjacent regions of the bovine prion protein (PRNP) gene in 384 Hanwoo and 152 Holstein cattle. We did not find the E211K mutation in any of the Korean cattle. Although we did not find the E211K mutation in Korean native cattle, E211K is a postulated mutation; therefore, further screening in other countries and larger samples is highly desirable. abstract_id: PUBMED:25727649 At the centre of neuronal, synaptic and axonal pathology in murine prion disease: degeneration of neuroanatomically linked thalamic and brainstem nuclei. Aims: The processes by which neurons degenerate in chronic neurodegenerative diseases remain unclear. Synaptic loss and axonal pathology frequently precede neuronal loss, and protein aggregation demonstrably spreads along neuroanatomical pathways in many neurodegenerative diseases. The spread of neuronal pathology is less studied. Methods: We previously demonstrated severe neurodegeneration in the posterior thalamus of multiple prion disease strains. Here we used the ME7 model of prion disease to examine the nature of this degeneration in the posterior thalamus and the major brainstem projections into this region. Results: We objectively quantified neurological decline between 16 and 18 weeks post-inoculation and observed thalamic subregion-selective neuronal, synaptic and axonal pathology while demonstrating relatively uniform protease-resistant prion protein (PrP) aggregation and microgliosis across the posterior thalamus. Novel amyloid precursor protein (APP) pathology was particularly prominent in the thalamic posterior (PO) and ventroposterior lateral (VPL) nuclei. The brainstem nuclei forming the major projections to these thalamic nuclei were examined. Massive neuronal loss in the PO was not matched by significant neuronal loss in the interpolaris (Sp5I), while massive synaptic loss in the ventral posteromedial nucleus (VPM) did correspond with significant neuronal loss in the principal trigeminal nucleus. Likewise, significant VPL synaptic loss was matched by significant neuronal loss in the gracile and cuneate nuclei. Conclusion: These findings demonstrate significant spread of neuronal pathology from the thalamus to the brainstem in prion disease. The divergent neuropathological features in adjacent neuronal populations demonstrate that there are discrete pathways to neurodegeneration in different neuronal populations. abstract_id: PUBMED:23228942 Lactoferrin protects against prion protein-induced cell death in neuronal cells by preventing mitochondrial dysfunction. Prion disorder-related neurodegenerative diseases are characterized by the accumulation of prion protein (PrP) scrapie isoform (PrPsc) within the central nervous system. PrPsc induces neuronal cell death by increasing intracellular generation of reactive oxygen species (ROS). Lactoferrin (LF) is an 80 kDa protein, which has antioxidant abilities due to the scavenging of ROS.
The effects of LF treatment on PrP (106-126)-mediated neurotoxicity and ROS generation were the focus of this study. LF treatment protected against PrP (106-126)-induced neuronal cell death and decreased ROS generation. The reduced ROS generation prevented PrP (106-126)-induced mitochondrial dysfunction. Moreover, PrP (106-126)-induced protein activation, including c-Jun N-terminal kinase and caspase-3, was blocked by LF treatment. These results demonstrated that LF protects neuronal cells against PrP (106-126)-mediated neurotoxicity through the scavenging of ROS and provide evidence that LF treatment prevents neuronal cell death caused by PrP (106-126). abstract_id: PUBMED:24564312 Detection of misfolded prion protein in retina samples of sheep and cattle by use of a commercially available enzyme immunoassay. Objective: To determine the usefulness of retina samples for detection of disease-associated prion protein by use of a commercially available enzyme immunoassay (EIA) intended for rapid identification of sheep and cattle with transmissible spongiform encephalopathies (TSEs). Samples: Retina, brainstem at the level of the obex, and retropharyngeal lymph node samples obtained from 15 TSE-inoculated sheep (scrapie [n = 13] or transmissible mink encephalopathy passaged through a bovid [2]); retina and brainstem samples obtained from 11 TSE-inoculated cattle (transmissible mink encephalopathy passaged through a bovid [7] or classical BSE [4]); and negative control tissue samples obtained from 2 sheep and 2 cattle that were not inoculated with TSEs. Procedures: Tissue samples were homogenized and analyzed for detection of abnormally folded disease-associated prion protein with a commercially available EIA and 2 confirmatory assays (western blot analysis or immunohistochemical analysis). Results: Retina sample EIA results were in agreement with results of brainstem sample EIA or confirmatory assay results for negative control animals and TSE-inoculated animals with clinical signs of disease. However, TSE-inoculated animals with positive confirmatory assay results that did not have clinical signs of disease had negative retina sample EIA results. Retina sample EIA results were in agreement with brainstem sample immunohistochemical results for 4 TSE-inoculated sheep with negative retropharyngeal lymph node EIA results. Conclusions And Clinical Relevance: Results of this study suggested that retina samples may be useful for rapid EIA screening of animals with neurologic signs to detect TSEs. abstract_id: PUBMED:25311251 Identification of clinical target areas in the brainstem of prion-infected mice. Aims: While prion infection ultimately involves the entire brain, it has long been thought that the abrupt clinical onset and rapid neurological decline in laboratory rodents relate to involvement of specific critical neuroanatomical target areas. The severity and type of clinical signs, together with the rapid progression, suggest the brainstem as a candidate location for such critical areas. In this study we aimed to correlate prion pathology with clinical phenotype in order to identify clinical target areas. Method: We conducted a comprehensive survey of brainstem pathology in mice infected with two distinct prion strains, which produce different patterns of pathology, in mice overexpressing prion protein (with accelerated clinical onset) and in mice in which neuronal expression was reduced by gene targeting (which greatly delays clinical onset).
Results: We identified specific brainstem areas that are affected by prion pathology during the progression of the disease. In the early phase of disease the locus coeruleus, the nucleus of the solitary tract, and the pre-Bötzinger complex were affected by prion protein deposition. This was followed by involvement of the motor and autonomic centres of the brainstem. Conclusions: Neurodegeneration in the locus coeruleus, the nucleus of the solitary tract and the pre-Bötzinger complex predominated and corresponded to the manifestation of the clinical phenotype. Because of their fundamental role in controlling autonomic function and the overlap with clinical signs in sporadic Creutzfeldt-Jakob disease, we suggest that these nuclei represent key clinical target areas in prion diseases. abstract_id: PUBMED:26091261 A Naturally Occurring Bovine Tauopathy Is Geographically Widespread in the UK. Many human neurodegenerative diseases are associated with hyperphosphorylation and widespread intra-neuronal and glial-associated aggregation of the microtubule-associated protein tau. In contrast, animal tauopathies are not reported, with only senescent animals showing inconspicuous tau labelling of fine processes, albeit significant tau aggregation may occur in some experimental animal diseases. Since 1986, an idiopathic neurological condition of adult cattle has been recognised in the UK as a sub-set of cattle slaughtered as suspect bovine spongiform encephalopathy cases. This disorder is characterised by brainstem neuronal chromatolysis and degeneration with variable hippocampal sclerosis and spongiform change. Selected cases of idiopathic brainstem neuronal chromatolysis (IBNC) were identified from archive material and characterised using antibodies specific to several tau hyperphosphorylation sites or different isoforms of the tau microtubule binding region. Labelling was also carried out for alpha synuclein, ubiquitin, TDP43, Aβ 1-42, Aβ 1-40. Widespread tau labelling was identified in all IBNC brains examined, with each of seven tau antibodies recognising different hyperphosphorylated sites. Labelling with each antibody was associated with dendrites, neuronal perikarya and glia. Thus IBNC is a sporadic, progressive neurological disease predominantly affecting aged cattle that occurs throughout the UK and is associated with hyperphosphorylation of tau, a rare example of a naturally-occurring tauopathy in a non-primate species. Secondary accumulation of alpha synuclein and ubiquitin was also present. The neuropathology does not precisely correspond with any human tauopathy. The cause of IBNC remains undetermined but environmental factors and exposure to agrochemicals need to be considered in future aetiological investigations. Answer: Idiopathic Brainstem Neuronal Chromatolysis (IBNC) is a rare neurological disease of adult cattle that was recognized in a subset of cattle submitted under the BSE Orders in Scotland between 1988 and 1991, where lesions of BSE were absent. The incidence of IBNC during that period was 7 cases per 100,000 beef suckler cows over the age of 6 years (PUBMED:18826563). The disease is characterized by brainstem neuronal chromatolysis and degeneration, with variable hippocampal sclerosis and spongiform change (PUBMED:26091261). When the brains of 15 IBNC cases were tested by immunohistochemistry, all showed abnormal labeling for prion protein (PrP), although the pattern of PrP labeling in the brain was distinct from that seen in other ruminant prion diseases.
Interestingly, brains of IBNC cattle did not reveal abnormal PrP isoforms when tested by commercial test kits and did not reveal PrPres when tested by Western blotting using stringent proteinase digestion methods. However, some weakly protease-resistant isoforms of PrP may be detected when tissues are examined using mild proteinase digestion techniques (PUBMED:18826563). The study of IBNC is significant because it raises the possibility of a significant increase in the scope of prion disease or demonstrates that widespread and consistent PrP alterations may not be confined to prion diseases. Further studies, including transmission experiments, are needed to establish whether IBNC is a condition in which prion protein is abnormally regulated or if it is yet another example of an infectious cattle prion disease (PUBMED:18826563). IBNC is associated with hyperphosphorylation of tau, making it a rare example of a naturally-occurring tauopathy in a non-primate species. The neuropathology does not precisely correspond with any human tauopathy, and the cause of IBNC remains undetermined, with environmental factors and exposure to agrochemicals being potential areas for future etiological investigations (PUBMED:26091261). In conclusion, IBNC may be a novel prion protein-related disorder of cattle, but its exact relationship to prion diseases and its etiology remain to be fully elucidated.
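The retina-screening study above (PUBMED:24564312) rests on agreement between a rapid EIA on retina samples and confirmatory assays on brainstem. As a minimal sketch of how such assay agreement is typically quantified, the snippet below computes Cohen's kappa plus sensitivity and specificity; the result vectors are invented for illustration and are not the study's data.

```python
# Hypothetical binary assay calls: 1 = prion-positive, 0 = negative.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

brainstem_assay = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]  # confirmatory result per animal
retina_eia      = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]  # rapid EIA result per animal

kappa = cohen_kappa_score(brainstem_assay, retina_eia)
tn, fp, fn, tp = confusion_matrix(brainstem_assay, retina_eia).ravel()
print(f"kappa = {kappa:.2f}")
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```

The pattern in the study, where preclinical animals test negative on retina but positive on brainstem, would show up here as reduced sensitivity rather than reduced specificity.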
Instruction: Do pretransplant C-peptide levels influence outcomes in simultaneous kidney-pancreas transplantation? Abstracts: abstract_id: PUBMED:18374116 Do pretransplant C-peptide levels influence outcomes in simultaneous kidney-pancreas transplantation? Objective: To analyze outcomes in simultaneous kidney-pancreas transplantation (SKPT) recipients who retain C-peptide production at the time of SKPT. Methods: This retrospective analysis of SKPTs from January 2002 through January 2007 compared outcomes between patients with absent or low C-peptide levels (<2.0 ng/mL, group A) and those with levels ≥2.0 ng/mL (group B). Results: Among 74 SKPTs, 67 were in group A and seven in group B (mean C-peptide level 5.7 ng/mL). During transplantation, group B subjects were older (mean age 51 vs 41 years, P = .006); showed a later age of onset of diabetes (median 35 vs 13 years, P = .0001); weighed more (median 77 vs 66 kg, P = .24); had a greater proportion of African-Americans (57% vs 13%, P = .004); and had a longer pretransplant duration of dialysis (median 40 vs 14 months, P = .14). With similar median follow-up of 40 months, death-censored kidney (95% group A vs 100% group B, P = NS) and pancreas (87% group A vs 100% group B, P = NS) graft survival rates were similar, but patient survival (94% group A vs 71% group B, P = .03) was greater in group A. At 1-year follow-up, there were no significant differences in rejection episodes, surgical complications, infections, readmissions, hemoglobin A1C or C-peptide levels, serum creatinine, or MDRD GFR levels. Conclusions: Diabetic patients with measurable C-peptide levels before transplant were older, overweight, more frequently African-American and had a later age of onset of diabetes, longer duration of pretransplant dialysis, and reduced patient survival compared to insulinopenic patients undergoing SKPT. The other outcomes were similar. abstract_id: PUBMED:34599533 Do pretransplant C-peptide levels predict outcomes following simultaneous pancreas-kidney transplantation? A matched case-control study. Following simultaneous pancreas-kidney transplantation (SPKT), survival outcomes are reported as equivalent in patients with detectable pretransplant C-peptide levels (Cp+) and a "type 2" diabetes mellitus (DM) phenotype compared to type 1 (Cp negative [Cp-]) DM. We retrospectively compared 46 Cp+ patients pretransplant (≥2.0 ng/mL, mean 5.4 ng/mL) to 46 Cp- (level <0.5 ng/mL) case controls matched for recipient age, gender, race, and transplant date. Early outcomes were comparable. Actual 5-year patient survival (91% versus 94%), kidney graft survival (69% versus 86%, p = .15), and pancreas graft survival (60% versus 86%, p = .03) rates were lower in Cp+ versus Cp- patients, respectively. The Cp+ group had more pancreas graft failures due to insulin resistance (13% Cp+ versus 0% Cp-, p = .026) or rejection (17% Cp+ versus 6.5% Cp-, p = .2). Post-transplant weight gain >5 kg occurred in 72% of Cp+ versus 26% of Cp- patients (p = .0001). In patients with functioning grafts, mean one-year post-transplant HbA1c levels (5.0% Cp+ versus 5.2% Cp-) were comparable, whereas Cp levels were higher in Cp+ patients (5.0 Cp+ versus 2.6 ng/mL Cp-). In this matched case-control study, outcomes were inferior in Cp+ compared to Cp- patients following SPKT, with post-transplant weight gain, insulin resistance, and rejection as potential mitigating factors.
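The 5-year survival comparisons reported above are the kind of result usually obtained from Kaplan-Meier estimates compared with a log-rank test. The sketch below shows that workflow with the lifelines package on fabricated follow-up data (months to graft failure, with 60-month administrative censoring); none of the numbers reproduce the study's records.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Fabricated times to pancreas graft failure for 46 Cp+ and 46 Cp- recipients.
t_pos = rng.exponential(90, 46).clip(max=60)    # months, censored at 5 years
t_neg = rng.exponential(250, 46).clip(max=60)
e_pos = t_pos < 60                              # True = failure observed
e_neg = t_neg < 60

km = KaplanMeierFitter().fit(t_pos, e_pos, label="Cp+")
print(f"estimated 5-year graft survival (Cp+): {km.predict(60):.2f}")

res = logrank_test(t_pos, t_neg, event_observed_A=e_pos, event_observed_B=e_neg)
print(f"log-rank p-value: {res.p_value:.3f}")
```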
abstract_id: PUBMED:36706060 Higher Fasting Pretransplant C-peptide Levels in Type 2 Diabetics Undergoing Simultaneous Pancreas-kidney Transplantation Are Associated With Posttransplant Pancreatic Graft Dysfunction. Background: Among selected patients with type 2 diabetes mellitus (T2DM), simultaneous pancreas and kidney (SPK) transplants can be an effective option. However, data are limited about outcomes in T2DM SPK recipients based on the pretransplant C-peptide levels. Methods: In this study, we reviewed all T2DM SPK recipients and categorized them based on the pretransplant fasting C-peptide levels into 3 groups: low (≤2 ng/mL), medium (>2-8 ng/mL), and high (>8 ng/mL). Several measures of graft failures (GFs), graft dysfunction, and composite outcomes were of interest. Results: There were a total of 76 SPK recipients (low, n = 14; medium, n = 47; high, n = 15). At the last follow-up, the low group did not reach any outcome; in contrast, 11 (23%) in the medium group and 5 (33%) in the high group reached the uncensored composite outcome; 6 (13%) in the medium group and 2 (13%) in the high group had GF; and 8 (17%) in the medium group and 4 (26.7%) in the high group reached the death-censored composite outcomes. In a fully adjusted model, each pretransplant C-peptide unit was not associated with an increased risk of the composite outcome, GF, or death-censored composite outcomes. However, in multivariate analysis with limited adjustment, pretransplant C-peptide was associated with the composite outcome (hazard ratio: 1.18, 95% confidence interval, 1.01-1.38; P = 0.03) and death-censored composite outcome (hazard ratio: 1.20; 95% confidence interval, 1.01-1.42; P = 0.03). Conclusions: Although limited by the small sample size, we found excellent outcomes among T2DM SPK recipients overall. However, higher levels of pretransplant C-peptide may be associated with inferior posttransplant outcomes that include graft dysfunction. abstract_id: PUBMED:35044001 Simultaneous pancreas-kidney transplantation in Caucasian versus African American patients: Does recipient race influence outcomes? The influence of African American (AA) recipient race on outcomes following simultaneous pancreas-kidney transplantation (SPKT) is uncertain. Methods: From 11/01 to 2/19, we retrospectively studied 158 Caucasian (C) and 57 AA patients (pts) undergoing SPKT. Results: The AA group had fewer patients on peritoneal dialysis (30% C vs. 14% AA), more patients with longer dialysis duration (28% C vs. 51% AA), more sensitized (PRA ≥20%) patients (6% C vs. 21% AA), and more patients with pretransplant C-peptide levels ≥2.0 ng/ml (11% C vs. 35% AA, all P < .05). With a mean 9.2-year follow-up, patient survival (65% C vs. 77% AA, P = .098) slightly favored the AA group, whereas kidney (55% C vs. 60% AA) and pancreas (48% C vs. 54% AA) graft survival rates (GSRs) were comparable. Death-censored kidney (71% C vs. 68% AA) and pancreas (both 62%) GSRs demonstrated that death with a functioning graft (DWFG) was more common in C vs. AA patients (23% C vs. 12% AA, P = .10). The incidence of death-censored dual graft loss (usually rejection) was 7% C versus 21% AA (P = .005). Conclusions: Following SPKT, AA patients are at a greater risk for dual immunological graft loss whereas C patients are at greater risk for DWFG. abstract_id: PUBMED:9645815 Successful long-term kidney-pancreas transplants in diabetic patients with high C-peptide levels.
Background: Pancreas transplants are rarely done in type 2 (noninsulin dependent) diabetic patients. Most researchers believe that in type 2 diabetic patients, peripheral insulin resistance plays a central role and also is associated with relative insulin deficiency or an insulin secretory defect. This suggests that in patients receiving transplants, the new beta cells will be overstimulated, leading to beta cell "exhaustion" and graft failure. Methods: Early in our experience, simultaneous pancreas-kidney transplant candidates were selected using only clinical criteria for type 1 diabetes, i.e., early onset of diabetes and rapid onset of insulin use. Pretransplant sera were available for C-peptide analysis in 70 of 94 of those patients. Forty-four percent (31/70) were African American (AA). Results: Thirteen patients (12 AA) with a nonfasting C-peptide level >1.37 ng/ml were identified. In these patients with high C-peptide levels, pancreas and kidney survival rates were 100%. The results did not differ statistically from the low C-peptide group (≤1.37 ng/ml). There were no differences between patient and pancreas-kidney survival rates when the patients were separated into AA and non-AA groups. The follow-up was 1-89 months, with a mean of 45.5 months. Conclusions: Long-term pancreas graft function is attainable and beta cell "exhaustion" does not occur in patients with high preoperative C-peptide (>1.37 ng/ml) levels. AA and non-AA patients have equivalent long-term patient, kidney, and pancreas-kidney graft survival rates. abstract_id: PUBMED:32143871 Elevated C-peptide Levels Are Associated With Acute Rejection in Kidney Pancreas Transplantation. Background: We assessed whether allograft rejection or failure can be predicted by an acute increase in C-peptide production from the transplanted pancreas. Methods: Patients with a minimum of 5 years of follow-up post simultaneous pancreas-kidney transplant were identified. C-peptide levels were obtained during clinic visits routinely. Graft failure was defined as return to dependence on insulin therapy or return to dialysis for pancreas and kidney grafts, respectively. Protocol kidney allograft biopsies were performed at 3 and 12 months. For-cause biopsies were also performed. Results: Acute rejections were detected in 11 patients on biopsy results of the renal allograft. C-peptide levels drawn prior to documented rejections were significantly higher in patients with acute rejection than patients with borderline or no rejection (P = .006). Receiver operating characteristics curves for C-peptide indicated greater accuracy in predicting rejection than simultaneously drawn serum creatinine or lipase. Conclusions: Higher C-peptide levels in simultaneous pancreas-kidney recipients are associated with acute rejection vs nonrejection. abstract_id: PUBMED:11211183 Successful long-term kidney-pancreas transplants regardless of C-peptide status or race. Background: We have previously shown that our patient population of 60% minority races has end-stage renal disease primarily as a result of diabetes mellitus and hypertension. It therefore was logical to explore the restoration of normal insulin production and renal function by simultaneous pancreas-kidney (SPK) transplantation, without regard to race. This study represents new analyses integrating race with C-peptide status and reports the outcome of 136 SPK transplantations performed over the last 10 years.
Results: Of the 49 African-Americans with diabetes mellitus and end-stage renal disease, 60% were type I and 40% were type II, based on C-peptide levels. In comparison, only 16% of Caucasians were type II. The average age at onset of diabetes mellitus was 15.7 years for type I compared with 20.7 years for type II (P>0.05). The actuarial 10-year survival rates for the 136 SPKs were 91.79% (patient), 85.07% (pancreas), and 83.58% (kidney). The type I and type II survival rates were similar in the two diabetic groups. Conclusions: The data strongly suggest that pretransplant C-peptide status does not influence the outcome of SPK transplantation in patients with renal failure from diabetes mellitus. SPK transplants should be offered to all suitable diabetic patients with renal failure regardless of C-peptide status or race. abstract_id: PUBMED:20832562 Outcomes after simultaneous pancreas and kidney transplantation and the discriminative ability of the C-peptide measurement pretransplant among type 1 and type 2 diabetes mellitus. Background: Earlier studies reporting outcomes after pancreas transplantation have included a combination of C-peptide cutoffs and clinical criteria to classify type 2 diabetes mellitus (T2DM). However, because the kidney is the major site for C-peptide catabolism, C-peptide is unreliable to discriminate the type of diabetes in patients with kidney disease. Methods: To improve the discriminative power and better classify the type of diabetes, we used a composite definition to identify T2DM: presence of C-peptide, negative glutamic acid decarboxylase antibody, absence of diabetic ketoacidosis, and use of oral hypoglycemics. Additionally, among T2DM patients with end-stage renal disease (ESRD), body mass index of <30 kg/m² and use of <1 u/kg of insulin per day were selection criteria for suitability for simultaneous pancreas and kidney transplantation (SPKT). We compared graft and patient survival between T1DM and T2DM after SPKT. Results: Our study cohort consisted of 80 patients, 10 of whom were assigned as T2DM based on our study criteria. Approximately 15% of patients with T1DM had detectable C-peptide. Cox regression survival analyses found no significant differences in allograft (pancreas and kidney) or patient survival between the 2 groups. The mean creatinine clearance at 1 year estimated by the Modification of Diet in Renal Disease (MDRD) equation was not significantly different between the 2 groups. Among those with 1 year of follow-up, all patients with T2DM had glycosylated hemoglobin of <6.0 at 1 year versus 92% of those with T1DM. Conclusion: SPKT should be considered in the therapeutic armamentarium for renal replacement in selected patients with T2DM and ESRD. Use of C-peptide measurements for ESRD patients can be misleading as the sole criterion to determine the type of diabetes. abstract_id: PUBMED:30230218 C-peptide levels do not correlate with pancreas allograft failure: Multicenter retrospective analysis and discussion of the new OPTN definition of pancreas allograft failure. The OPTN Pancreas Transplantation Committee performed a multicenter retrospective study to determine if undetectable serum C-peptide levels correspond to center-reported pancreas graft failures. C-peptide data from seven participating centers (n = 415 graft failures for transplants performed from 2002 to 2012) were analyzed pretransplant, at graft failure, and at return to insulin.
One hundred forty-nine C-peptide values were submitted at pretransplant, 94 at return to insulin, and 233 at graft failure. There were 77 transplants with two available values (at pretransplant and at graft failure). For recipients in the study with pretransplant C-peptide <0.75 ng/mL who had a posttransplant C-peptide value available (n = 61), graft failure was declared at varying levels of C-peptide. High C-peptide values at graft failure were not explained by nonfasting testing or by individual center bias. Transplant centers declare pancreas graft failure at varying levels of C-peptide and do not consistently report C-peptide data. Until February 28, 2018, OPTN did not require reporting of posttransplant C-peptide levels and it appears that C-peptide levels are not consistently used for evaluating graft function. C-peptide levels should not be used as the sole criterion for the definition of pancreas graft failure. abstract_id: PUBMED:21168662 Simultaneous kidney-pancreas transplantation for end-stage renal disease patients with insulin-dependent diabetes and detectable C-peptide. Introduction: There is controversy regarding the place of simultaneous pancreas-kidney (SPK) transplantation in end-stage renal disease (ESRD) patients with insulin-dependent diabetes mellitus (IDDM) and detectable c-peptide. We sought to compare outcomes of recipients with and without pretransplantation c-peptide. Methods: This retrospective single-center review included consecutive primary SPK transplantations performed between September 2007 and May 2010. Demographic characteristics and outcomes were compared between recipients with and without pretransplantation c-peptide. Results: Seven of 25 (28%) consecutive SPK transplant recipients with a diagnosis of IDDM and ESRD had detectable c-peptide prior to transplantation. The mean c-peptide level was 6.3 ± 6.1 ng/mL. For those recipients with and without c-peptide, mean age at diagnosis of IDDM (12.4 ± 7.8 vs 17.1 ± 6.6 years; P = not significant [NS]), duration of IDDM prior to transplantation (30 ± 10 vs 23 ± 9 years; P = NS), and body mass index (25.9 ± 4.5 vs 26.7 ± 4.5 kg/m²; P = NS) were equivalent between the groups. With a median follow-up of 17 months (range, 3-35 months), there was 1 graft loss (due to cardiovascular death) among the 25 patients. At the most recent follow-up, for recipients with and without c-peptide, both the mean serum creatinine (1.3 ± 0.6 vs 1.0 ± 0.2 mg/dL; P = NS) and the mean HbA1c level (5.3 ± 0.4 vs 5.3 ± 0.5; P = NS) were equivalent between the groups. Conclusion: For nonobese ESRD patients diagnosed with IDDM at a young age, the presence of detectable c-peptide should not influence the decision to proceed with SPK transplantation. Answer: Pretransplant C-peptide levels do appear to influence outcomes in simultaneous kidney-pancreas transplantation (SKPT), although the nature and extent of this influence vary across studies. One study found that diabetic patients with measurable C-peptide levels before transplant were older, overweight, more frequently African-American, had a later age of onset of diabetes, longer duration of pretransplant dialysis, and reduced patient survival compared to insulinopenic patients undergoing SKPT. However, other outcomes such as kidney and pancreas graft survival rates were similar between the two groups (PUBMED:18374116).
Another study reported that outcomes were inferior in patients with detectable pretransplant C-peptide levels compared to those without, with post-transplant weight gain, insulin resistance, and rejection as potential mitigating factors. Specifically, the study found lower actual 5-year patient survival, kidney graft survival, and pancreas graft survival rates in patients with detectable C-peptide levels (PUBMED:34599533). A further study indicated that higher levels of pretransplant C-peptide may be associated with inferior posttransplant outcomes that include graft dysfunction among type 2 diabetes mellitus (T2DM) SPK recipients (PUBMED:36706060). However, other research has shown that long-term pancreas graft function is attainable in patients with high preoperative C-peptide levels, and that African American and non-African American patients have equivalent long-term patient, kidney, and pancreas-kidney graft survival rates (PUBMED:9645815). Additionally, one study suggested that pretransplant C-peptide status does not influence the outcome of SPK transplantation in patients with renal failure from diabetes mellitus, advocating that SPK transplants should be offered to all suitable diabetic patients with renal failure regardless of C-peptide status or race (PUBMED:11211183). In summary, while some studies suggest that higher pretransplant C-peptide levels may be associated with certain adverse outcomes post-transplantation, other studies indicate that long-term success is achievable regardless of C-peptide status. The influence of pretransplant C-peptide levels on SKPT outcomes is complex and may be affected by various factors, including patient demographics, the presence of insulin resistance, and the criteria used to define diabetes type and transplant success.
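One abstract above (PUBMED:36706060) reports hazard ratios per pretransplant C-peptide unit from multivariate models. A minimal sketch of that kind of fit, using lifelines' Cox proportional-hazards model on fabricated data, is shown below; exp(coef) in the summary is the per-ng/mL hazard ratio.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Fabricated follow-up data: months observed, composite-outcome flag,
# and pretransplant fasting C-peptide (ng/mL).
df = pd.DataFrame({
    "months":    [12, 60, 34, 60, 8, 47, 60, 22, 60, 15, 60, 29],
    "event":     [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
    "c_peptide": [9.1, 1.5, 7.8, 2.2, 10.4, 6.0, 1.1, 8.5, 3.0, 9.9, 1.8, 7.2],
})

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
cph.print_summary()   # the exp(coef) column is the hazard ratio per unit C-peptide
```

Adding further covariate columns to the data frame corresponds to the "fully adjusted" model described in the abstract, which is why the per-unit estimate can shift between the limited and full adjustments.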
Instruction: Red fluorescence in reef fish: a novel signalling mechanism? Abstracts: abstract_id: PUBMED:18796150 Red fluorescence in reef fish: a novel signalling mechanism? Background: At depths below 10 m, reefs are dominated by blue-green light because seawater selectively absorbs the longer, 'red' wavelengths beyond 600 nm from the downwelling sunlight. Consequently, the visual pigments of many reef fish are matched to shorter wavelengths, which are transmitted better by water. Combining the typically poor long-wavelength sensitivity of fish eyes with the presumed lack of ambient red light, red light is currently considered irrelevant for reef fish. However, previous studies ignore the fact that several marine organisms, including deep sea fish, produce their own red luminescence and are capable of seeing it. Results: We here report that at least 32 reef fishes from 16 genera and 5 families show pronounced red fluorescence under natural, daytime conditions at depths where downwelling red light is virtually absent. Fluorescence was confirmed by extensive spectrometry in the laboratory. In most cases peak emission was around 600 nm and fluorescence was associated with guanine crystals, which thus far were known for their light reflecting properties only. Our data indicate that red fluorescence may function in a context of intraspecific communication. Fluorescence patterns were typically associated with the eyes or the head, varying substantially even between species of the same genus. Moreover red fluorescence was particularly strong in fins that are involved in intraspecific signalling. Finally, microspectrometry in one fluorescent goby, Eviota pellucida, showed a long-wave sensitivity that overlapped with its own red fluorescence, indicating that this species is capable of seeing its own fluorescence. Conclusion: We show that red fluorescence is widespread among marine fishes. Many features indicate that it is used as a private communication mechanism in small, benthic, pair- or group-living fishes. Many of these species show quite cryptic colouration in other parts of the visible spectrum. High inter-specific variation in red fluorescence and its association with structures used in intra-specific signalling further corroborate this view. Our findings challenge the notion that red light is of no importance to marine fish, calling for a reassessment of its role in fish visual ecology in subsurface marine environments. abstract_id: PUBMED:24870049 Fairy wrasses perceive and respond to their deep red fluorescent coloration. Fluorescence enables the display of wavelengths that are absent in the natural environment, offering the potential to generate conspicuous colour contrasts. The marine fairy wrasse Cirrhilabrus solorensis displays prominent fluorescence in the deep red range (650-700 nm). This is remarkable because marine fishes are generally assumed to have poor sensitivity in this part of the visual spectrum. Here, we investigated whether C. solorensis males can perceive the fluorescence featured in this species by testing whether the presence or absence of red fluorescence affects male-male interactions under exclusive blue illumination. Given that males respond aggressively towards mirror-image stimuli, we quantified agonistic behaviour against mirrors covered with filters that did or did not absorb long (i.e. red) wavelengths. Males showed significantly fewer agonistic responses when their fluorescent signal was masked, independent of brightness differences. 
Our results unequivocally show that C. solorensis can see its deep red fluorescent coloration and that this pattern affects male-male interactions. This is the first study to demonstrate that deep red fluorescent body coloration can be perceived and has behavioural significance in a reef fish. abstract_id: PUBMED:36560895 Long-wavelength-sensitive (lws) opsin gene expression, foraging and visual communication in coral reef fishes. Coral reef fishes are diverse in ecology and behaviour and show remarkable colour variability. Investigating the visual pigment gene (opsin) expression in these fishes makes it possible to associate their visual genotype and phenotype (spectral sensitivities) to visual tasks, such as feeding strategy or conspecific detection. By studying all major damselfish clades (Pomacentridae) and representatives from five other coral reef fish families, we show that the long-wavelength-sensitive (lws) opsin is highly expressed in algivorous and less or not expressed in zooplanktivorous species. Lws is also upregulated in species with orange/red colours (reflectance >520 nm) and expression is highest in orange/red-coloured algivores. Visual models from the perspective of a typical damselfish indicate that sensitivity to longer wavelengths does enhance the ability to detect the red to far-red component of algae and orange/red-coloured conspecifics, possibly enabling social signalling. Character state reconstructions indicate that in the early evolutionary history of damselfishes, there was no lws expression and no orange/red coloration. Omnivory was most often the dominant state. Although herbivory was sometimes dominant, zooplanktivory was never dominant. Sensitivity to long wavelength (increased lws expression) only emerged in association with algivory but never with zooplanktivory. Higher lws expression is also exploited by social signalling in orange/red, which emerged after the transition to algivory. Although the relative timing of traits may deviate by different reconstructions and alternative explanations are possible, our results are consistent with sensory bias whereby social signals evolve as a correlated response to natural selection on sensory system properties in other contexts. abstract_id: PUBMED:26107282 Spectral Diversity and Regulation of Coral Fluorescence in a Mesophotic Reef Habitat in the Red Sea. The phenomenon of coral fluorescence in mesophotic reefs, although well described for shallow waters, remains largely unstudied. We found that representatives of many scleractinian species are brightly fluorescent at depths of 50-60 m at the Interuniversity Institute for Marine Sciences (IUI) reef in Eilat, Israel. Some of these fluorescent species have distribution maxima at mesophotic depths (40-100 m). Several individuals from these depths displayed yellow or orange-red fluorescence, the latter being essentially absent in corals from the shallowest parts of this reef. We demonstrate experimentally that in some cases the production of fluorescent pigments is independent of the exposure to light; while in others, the fluorescence signature is altered or lost when the animals are kept in darkness. Furthermore, we show that green-to-red photoconversion of fluorescent pigments mediated by short-wavelength light can occur also at depths where ultraviolet wavelengths are absent from the underwater light field.
Intraspecific colour polymorphisms regarding the colour of the tissue fluorescence, common among shallow water corals, were also observed for mesophotic species. Our results suggest that fluorescent pigments in mesophotic reefs fulfil a distinct biological function and offer promising application potential for coral-reef monitoring and biomedical imaging. abstract_id: PUBMED:29136460 Graphitic Nitrogen Triggers Red Fluorescence in Carbon Dots. Carbon dots (CDs) are a stable and highly biocompatible fluorescent material offering great application potential in cell labeling, optical imaging, LED diodes, and optoelectronic technologies. Because their emission wavelengths provide the best tissue penetration, red-emitting CDs are of particular interest for applications in biomedical technologies. Current synthetic strategies enabling red-shifted emission include increasing the CD particle size (sp2 domain) by a proper synthetic strategy and tuning the surface chemistry of CDs with suitable functional groups (e.g., carboxyl). Here we present an elegant route for preparing full-color CDs with well-controllable fluorescence at blue, green, yellow, or red wavelengths. The two-step procedure involves the synthesis of a full-color-emitting mixture of CDs from citric acid and urea in formamide followed by separation of the individual fluorescent fractions by column chromatography based on differences in CD charge. Red-emitting CDs, which had the most negative charge, were separated as the last fraction. The trend in the separation, surface charge, and red-shift of photoluminescence was caused by increasing amount of graphitic nitrogen in the CD structure, as was clearly proved by XPS, FT-IR, Raman spectroscopy, and DFT calculations. Importantly, graphitic nitrogen generates midgap states within the HOMO-LUMO gap of the undoped systems, resulting in significantly red-shifted light absorption that in turn gives rise to fluorescence at the low-energy end of the visible spectrum. The presented findings identify graphitic nitrogen as another crucial factor that can red-shift the CD photoluminescence. abstract_id: PUBMED:26514914 Selective Probing of Gaseous Ammonia Using Red-Emitting Carbon Dots Based on an Interfacial Response Mechanism. Solid-state fluorescence sensing is one of the most appealing detection techniques because of its simplicity and convenience in practical operation. Herein, we report the development of a red-emitting carbon dots (RCDs)-based material as a solid-state fluorescence sensor for the selective probing of gaseous ammonia. The RCDs were prepared by a low-cost, one-step carbonization method using sugar cane bagasse as the carbon precursor. The pristine RCDs were then directly coated on polyvinylidene fluoride membrane to produce a new fluorescence sensor capable of selectively distinguishing toxic gaseous ammonia from other analyte vapors through sensitive fluorescence quenching with a low detection limit. More importantly, the interfacial response mechanism occurring on the surface of the RCDs has been studied by X-ray photoelectron spectroscopy, Fourier-transform infrared spectroscopy, and Raman measurements. The results indicate that fluorescence quenching in the RCDs might result from ammonia-induced Michael addition through insertion of N into the C=C group and deprotonation of the carboxyl group. To the best of our knowledge, this is the first report that provides clear insight into the mechanism of surface chemistry on CDs in the solid state. 
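The ammonia-sensing abstract above reports "sensitive fluorescence quenching with a low detection limit". A common way to put a number on that claim is a linear calibration of the quenching response against concentration, with LOD = 3σ(blank)/slope; the sketch below uses invented calibration points, not the paper's data.

```python
import numpy as np

conc     = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # analyte, ppm (hypothetical)
response = np.array([0.00, 0.06, 0.11, 0.24, 0.46, 0.93])  # quenching ratio F0/F - 1
blank_sd = 0.004                                           # std. dev. of blank response

slope, intercept = np.polyfit(conc, response, 1)           # linear calibration fit
lod = 3 * blank_sd / slope
print(f"slope = {slope:.3f} per ppm, LOD ~ {lod:.2f} ppm")
```

A linear F0/F - 1 versus concentration plot is the classic Stern-Volmer form for quenching sensors, which is why it is a natural choice here, though the cited paper may have calibrated differently.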
abstract_id: PUBMED:26608504 Homogeneity of coral reef communities across 8 degrees of latitude in the Saudi Arabian Red Sea. Coral reef communities between 26.8 °N and 18.6 °N latitude in the Saudi Arabian Red Sea were surveyed to provide baseline data and an assessment of fine-scale biogeography of communities in this region. Forty reefs along 1100 km of coastline were surveyed using depth-stratified visual transects of fish and benthic communities. Fish abundance and benthic cover data were analyzed using multivariate approaches to investigate whether coral reef communities differed with latitude. A total of 215 fish species and 90 benthic categories were recorded on the surveys. There were no significant differences among locations in fish abundance, species richness, or among several diversity indices. Despite known environmental gradients within the Red Sea, the communities remained surprisingly similar. The communities do, however, exhibit subtle changes across this span of reefs that likely reflect the constrained distributions of several species of reef fish and benthic fauna. abstract_id: PUBMED:34699835 Reef accumulation is decoupled from recent degradation in the central and southern Red Sea. Reefs are biogenic structures that result in three-dimensional accumulations of calcium carbonate. Over geological timescales, a positive balance between the production and accumulation of calcium carbonate versus erosional and off-reef transport processes maintains positive net accretion on reefs. Yet, how ecological processes occurring over decadal timescales translate to the accumulation of geological structures is poorly understood, in part due to a lack of studies with detailed time-constrained chronologies of reef accretion over decades to centuries. Here, we combined ecological surveys of living reefs with palaeoecological reconstructions and high-precision radiometric (U-Th) age-dating of fossil reefs represented in both reef sediment cores and surficial dead in situ corals, to reconstruct the history of community composition and carbonate accumulation across the central and southern Saudi Arabian Red Sea throughout the late Holocene. We found that reefs were primarily comprised of thermally tolerant massive Porites colonies, creating a consolidated coral framework, with unconsolidated branching coral rubble accumulating among massive corals on shallow (5-8 m depth) exposed (windward), and gently sloping reef slopes. These unconsolidated reef rubble fields were formed primarily from ex situ Acropora and Pocillopora coral fragments, infilled post deposition within a sedimentary matrix. Bayesian age-depth models revealed a process of punctuated deposition of post-mortem coral fragments transported from adjacent reef environments. That a large portion of Saudi Arabian Red Sea reef slopes is driven by allochthonous deposition (transportation) has important implications for modeling carbonate budgets and reef growth. In addition, a multi-decadal lag exists between the time of death for branching in situ coral and incorporation into the unconsolidated reef rubble. This indicates that recent climate related degradation in the 21st century has not had an immediately negative effect on reef building processes affecting a large portion of the reef area in the Saudi Arabian Red Sea. abstract_id: PUBMED:25030989 Red fluorescence increases with depth in reef fishes, supporting a visual function, not UV protection. Why do some marine fishes exhibit striking patterns of natural red fluorescence? 
In this study, we contrast two non-exclusive hypotheses: (i) that UV absorption by fluorescent pigments offers significant photoprotection in shallow water, where UV irradiance is strongest; and (ii) that red fluorescence enhances visual contrast at depths below -10 m, where most light in the 'red' 600-700 nm range has been absorbed. Whereas the photoprotection hypothesis predicts fluorescence to be stronger near the surface and weaker in deeper water, the visual contrast hypothesis predicts the opposite. We used fluorometry to measure red fluorescence brightness in vivo in individuals belonging to eight common small reef fish species with conspicuously red fluorescent eyes. Fluorescence was significantly brighter in specimens from the -20 m sites than in those from -5 m sites in six out of eight species. No difference was found in the remaining two. Our results support the visual contrast hypothesis. We discuss the possible roles fluorescence may play in fish visual ecology and highlight the possibility that fluorescent light emission from the eyes in particular may be used to detect cryptic prey. abstract_id: PUBMED:35007045 N,S-Codoped Carbon Dots with Red Fluorescence and Their Cellular Imaging. The emergence of carbon dots (C-dots) has aroused increasing attention owing to their excellent chemical and physical properties, such as favorable biocompatibility and an outstanding fluorescence (FL) property. Most reported C-dots show blue emission, which hinders their applications in the biomedical field due to the strong FL background of biosamples. Therefore, strategies for the achievement of long-wavelength fluorescent C-dots are urgently needed. Herein, red emissive biocompatible C-dots were prepared from polythiophene (PTh) through chemical cutting by nitric acid. Various methods were utilized to shed light on the luminescence mechanism of the C-dots. The results demonstrated that FL emission of the prepared C-dots was mainly dominated by sp2 domains. The C-dots were well-applied in in vitro imaging. This work prepared red fluorescent C-dots from the perspective of the structure of C-dots, which would benefit the regulation of the FL of C-dots. Answer: Red fluorescence in reef fish has been observed and is considered a novel signaling mechanism. At depths below 10 meters, where red light from the sun is mostly absorbed by seawater, reef fish have been found to exhibit pronounced red fluorescence under natural, daytime conditions. This fluorescence is often associated with guanine crystals and is typically found around the eyes or head, and on fins that are involved in intraspecific signaling. The fluorescence patterns vary substantially even between species of the same genus, suggesting a role in intraspecific communication. The goby Eviota pellucida, for example, has been shown to have a long-wave sensitivity that overlaps with its own red fluorescence, indicating that it can see its own fluorescence (PUBMED:18796150). Further supporting the signaling hypothesis, the fairy wrasse Cirrhilabrus solorensis has been shown to perceive and respond to its deep red fluorescent coloration. When the red fluorescence of these fish was masked, males exhibited significantly fewer agonistic responses, demonstrating that the fluorescence affects male-male interactions and is perceived by the fish (PUBMED:24870049). The expression of long-wavelength-sensitive (lws) opsin genes in coral reef fishes has been linked to foraging and visual communication. 
Fishes with high lws opsin expression, such as algivorous species and those with orange/red colors, may have enhanced abilities to detect algae and orange/red-colored conspecifics, which could facilitate social signaling (PUBMED:36560895). These findings challenge the previous assumption that red light is irrelevant for reef fish and suggest that red fluorescence plays a significant role in fish visual ecology, particularly in subsurface marine environments where it may function as a private communication mechanism (PUBMED:18796150).
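The Eviota pellucida finding above hinges on overlap between the fish's long-wave visual sensitivity and its own ~600 nm emission. The toy calculation below estimates a sensitivity-weighted fraction of emitted light using idealized Gaussian curves; the real measured spectra from the studies are not reproduced here, and the peak positions and widths are assumptions for illustration.

```python
import numpy as np

wl = np.arange(450, 751)                              # wavelength grid, nm
emission    = np.exp(-0.5 * ((wl - 600) / 25) ** 2)   # idealized fluorescence emission
sensitivity = np.exp(-0.5 * ((wl - 580) / 40) ** 2)   # idealized long-wave sensitivity

# Fraction of emitted light, weighted by the receiver's spectral sensitivity.
overlap = (emission * sensitivity).sum() / emission.sum()
print(f"sensitivity-weighted overlap: {overlap:.2f}")
```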
Instruction: Underrepresented minorities in academic medicine: is Florida better off? Abstracts: abstract_id: PUBMED:32025561 Supporting underrepresented minority women in academic dermatology. The lack of underrepresented minorities in medicine (UIM) in dermatology has been well established, but the challenges faced by UIM women in dermatology have not yet been explored. UIM women belong simultaneously to more than one underrepresented group and therefore face complex challenges that are common to women and underrepresented minorities. Yet, the literature that focuses explicitly on UIM women as a group is scarce. This commentary provides insight into some of the challenges faced by UIM women in academic dermatology and provides specific recommendations to support these women through mentorship, professional development, and legitimization of their contributions to institutional diversity and service efforts. abstract_id: PUBMED:32336395 Women and Underrepresented Minorities in Academic Anesthesiology. The demographics of the United States are changing, with 51% of the population being female and 32% of the population identifying as an underrepresented minority (URM, ie, African American/black, Hispanic/Latino, American Indian/Alaska Native, Native Hawaiian/Pacific Islander). Women and URMs have been historically underrepresented in medicine and in academic anesthesiology. This article provides an overview of the current status of women and URM faculty in academic anesthesiology and provides a framework for academic advancement. Throughout the text, the terms woman/women are used, as opposed to female, as the terms woman/women refer to gender, and female refers to biological sex. abstract_id: PUBMED:30241421 Academic Career Progression of Chinese-Origin Pharmacy Faculty Members in Western Countries. Background: The field of pharmacy education is experiencing a paucity of underrepresented minority (URM) faculty worldwide. The aim of this study is to investigate the current professional status of Chinese-origin pharmacy faculty members, who are considered a good model of URMs in pharmacy academia in western countries, and to identify the factors influencing their academic career progression. Methods: An online questionnaire was sent to Chinese-origin academic staff at pharmacy schools in the US, UK, Canada, Australia, and New Zealand. The survey comprised demographic information, educational background, and the factors influencing academic career progression. Results: The vast majority of Chinese faculty members who worked in the US were male. Individuals with junior academic titles comprised the largest proportion. Over 75% of Chinese-origin pharmacy academics were involved in scientific disciplines (e.g., pharmaceutics, pharmacology, and medicinal chemistry). Usually, Chinese-origin academic members spent 4 years obtaining their first academic jobs after finishing their PhD degrees, and needed 5-6 years to get academic promotion. The contributing factors for academic promotion were high-quality publications and external funding. Conclusion: Our research offers a deep insight into academic career progression for URMs and gives some valuable advice for their pharmacy academic paths. abstract_id: PUBMED:11411975 Female and underrepresented minority faculty in academic departments of family medicine: are women and minorities better off in family medicine?
Background: Several studies have shown that the percentage of women represented in senior academic positions at US medical schools is lower than the percentage of men in senior positions. Similarly, the percentage of minority faculty members represented in senior academic positions is lower than that of their majority counterparts. This study assessed whether these findings were also present in departments of family medicine and identified any factors related to the institution or department that favored academic success for women and minorities. Methods: Data regarding faculty workforce composition, including faculty rank and rank for women and underrepresented minorities, were extracted from a comprehensive survey of departments of family medicine at US allopathic medical schools. The data are based on faculty workforce in 1997 and include responses from 58 (51%) of all schools with a department of family medicine. Results: Faculty in departments of family medicine were more likely to be female (41% versus 25%) and an underrepresented minority (9% versus 4%), compared with all academic medicine disciplines. However, women in full-time positions were less likely than men, and minorities were less likely than nonminorities, to be either an associate or full professor. We could find no institutional or departmental characteristics that were associated with academic success for women or minority faculty members. Conclusions: While women and underrepresented minorities are more common to the faculty workforce in family medicine, members of both of these groups are not well represented in senior faculty ranks. abstract_id: PUBMED:35691036 How well do urology residency program webpages recruit underrepresented minorities? Introduction: Historically, the field of medicine has suffered from a lack of diversity. This project examined whether urology residency program websites were actively attempting to recruit underrepresented minority applicants, with the hypothesis that while some programs would attempt to attract such applicants on their website, the majority would not. Materials And Methods: A cross-sectional analysis of program webpages for information regarding underrepresented minorities was performed. The Electronic Residency Application Service residency database was used to identify 140 urology residency programs. Three were no longer accepting residents and were not included. The publicly available webpages of the 137 urology residency training programs identified were reviewed. Results: Only 26.3% (36) of programs included any information regarding diversity or inclusion on their webpage. The most common references to diversity were a link to a Department of Diversity and Inclusion (28, 20.4%) and information regarding a "commitment to diversity" (28, 20.4%). Only two programs included all seven categories searched for. Conclusions: Residency program websites may be an important tool to recruit underrepresented minorities and currently there is significant room for improvement. Given that urology is already behind other fields in terms of representation, it is especially important to make an active, visible attempt to recruit underrepresented minorities. abstract_id: PUBMED:25396113 Minority Underrepresentation in Academia: Factors Impacting Careers of Surgery Residents. Background: Underrepresentation of minorities within academic surgery is an ever-present problem with a profound impact on healthcare.
The factors influencing surgery residents to pursue an academic career have yet to be formally investigated. We sought to elucidate these factors, with a focus on minority status. Methods: A web-based questionnaire was administered to all ACGME-accredited general surgery programs in the United States. The main outcome was the decision to pursue a fully academic versus non-academic career. Multivariable logistic regression was used to identify characteristics impacting career choice. Results: Of the 3,726 residents who received the survey, a total of 1,217 residents completed it, for a response rate of 33%. Forty-seven percent planned to pursue non-academic careers, 35% academic careers, and 18% were undecided. There was no association between underrepresented minority status and academic career choice (Odds Ratio = 1.0, 95% Confidence Interval 0.6-1.6). Among all residents, research during training (OR=4.0, 95% CI 2.7-5.9), mentorship (OR=2.1, 95% CI 1.6-2.9), and attending a residency program requiring research (OR=2.3, 95% CI 1.5-3.4) were factors associated with choosing an academic career. When the analysis was performed among only senior residents (i.e., 4th and 5th year residents), a debt burden >$150,000 was associated with choosing a non-academic career (OR=0.4, 95% CI 0.1-0.9). Conclusions: Underrepresented minority status is not associated with career choice. Intentional recruitment of minorities into research-oriented training programs, increased mentorship and research support among current minority residents, and improved financial options for minorities may increase the number choosing an academic surgical career. abstract_id: PUBMED:33015287 Gender and race trends in academic rank of dermatologists at top U.S. institutions: A cross-sectional study. Background: Despite growth in the proportion of women and underrepresented minorities in the dermatology workforce, gender and racial differences persist. Objective: This study sought to analyze the differences in academic rank between genders and races while controlling for achievement indicators. Methods: Data from 15 institutions, selected based on residency program rankings, were analyzed. Results: Women were well-represented as assistant professors (60.7% vs. 37.6% of men; p < .001) and underrepresented as full professors (17.0% vs. 37.6%; p < .001). However, in a multivariable analysis controlling for career duration, publications per year, National Institutes of Health funding, and PhD degrees, neither gender (adjusted odds ratio [aOR]: 1.01; confidence interval [CI], 0.55-2.73) nor race (aOR: 1.24; CI, 0.53-2.92) was associated with a higher academic rank. Career duration (aOR: 1.24; CI, 1.18-1.30), publications per year (aOR: 1.48; CI, 1.28-1.74), and National Institutes of Health research funding (aOR: 4.29; CI, 1.53-12.88) were predictive of higher academic rank. Conclusion: Our findings confirm that for equal levels of achievement, men and women are promoted similarly in dermatology, yet reasons for disparity in research output and funding for women and minority dermatologists need further study. abstract_id: PUBMED:31478091 Strategies for Attracting Women and Underrepresented Minorities in Urology. Purpose Of Review: This article summarizes the current state of underrepresented minorities (URM) and women within urology. Specific strategies to promote the recruitment and retention of URM and women within urology are discussed.
Recent Findings: Minorities and women remain underrepresented within urology, a disparity that has roots as early as medical school and persists throughout residency and practice. This is likely due to implicit and explicit bias, lack of role models, and persistent disparities in compensation and promotion. Strategies to improve recruitment and retention of these individuals should focus on increasing early exposure to urology, opportunities for mentorship, and support in professional development. Creation of a supportive institutional culture, consideration of alternative career advancement paths, and intentional diversity-focused recruitment efforts are also critical. Successful recruitment and retention of URM and women in urology requires specific, directed efforts to increase opportunities for exposure, mentorship, and career promotion at a programmatic, institutional, and national level. abstract_id: PUBMED:34804325 Understanding Differences in Underrepresented Minorities and First-Generation Student Perceptions in the Introductory Biology Classroom. We used quantitative methods to better understand the perceptions of students in an introductory biology course (Biology 101) at a small, liberal arts college (SLAC) that is also a primarily white institution (PWI). In pre/post surveys, we asked students questions related to their attitudes and beliefs about their professor, classmates, and Biology 101. We were especially interested in the responses and outcomes of underrepresented minorities (URM) and first-generation (FG) students. Our findings suggest URM and FG students have a decreased sense of belonging and increased perceptions of exclusion and differential treatment due to race. These findings can explain, in part, the disparity in Biology 101 grade and STEM (science, technology, engineering, and math) attrition. abstract_id: PUBMED:33431653 Equity for women and underrepresented minorities in STEM: Graduate experiences and career plans in chemistry. Recent events prompted scientists in the United States and throughout the world to consider how systematic racism affects the scientific enterprise. This paper provides evidence of inequities related to race-ethnicity and gender in graduate school experiences and career plans of PhD students in the top 100 ranked departments in one science, technology, engineering, and math (STEM) discipline, chemistry. Mixed-model regression analyses were used to examine factors that might moderate these differences. The results show that graduate students who identified as a member of a racial/ethnic group traditionally underrepresented in chemistry (underrepresented minorities, URM) were significantly less likely than other students to report that their financial support was sufficient to meet their needs. They were also less likely to report having supportive relationships with peers and postdocs. Women, and especially URM women, were significantly less likely to report supportive relationships with advisors. Despite their more negative experiences in graduate school, students who identified as URM expressed greater commitment to finishing their degree and staying in the field. When there was at least one faculty member within their departments who also identified as URM they were also more likely than other students to aspire to a university professorship with an emphasis on research. 
Men were significantly more likely than women to express strong commitment to finishing the PhD and remaining in chemistry, but this difference was stronger in top-ranked departments. Men were also more likely than women to aspire to a professorship with an emphasis on research, and this difference remained when individual and departmental-level variables were controlled. Answer: The provided abstracts do not specifically address the status of underrepresented minorities in academic medicine in Florida. They discuss the challenges and representation of underrepresented minorities and women in various medical and academic fields, such as dermatology, anesthesiology, pharmacy, family medicine, urology, and surgery, as well as in STEM disciplines more broadly. These abstracts highlight the underrepresentation of these groups in senior academic positions, the importance of mentorship and professional development, and the need for intentional recruitment and support strategies to improve diversity in these fields (PUBMED:32025561, PUBMED:32336395, PUBMED:30241421, PUBMED:11411975, PUBMED:35691036, PUBMED:25396113, PUBMED:33015287, PUBMED:31478091, PUBMED:34804325, PUBMED:33431653). However, none of the abstracts provide data or analysis specific to the state of Florida or compare Florida to other states or regions regarding the representation of underrepresented minorities in academic medicine. Therefore, based on the provided abstracts, it is not possible to determine whether Florida is better or worse off in terms of underrepresented minority representation in academic medicine.
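Several abstracts above report adjusted odds ratios from multivariable logistic regression (e.g., PUBMED:33015287, PUBMED:25396113). As a sketch of how such aORs are obtained, the code below fits a logistic model on simulated data and exponentiates the coefficients to get aORs with 95% CIs; the variable names mirror the dermatology study's covariates, but the data are fabricated and the results will not match the published estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "female":        rng.integers(0, 2, n),
    "career_years":  rng.uniform(1, 35, n),
    "pubs_per_year": rng.gamma(2.0, 1.5, n),
    "nih_funding":   rng.integers(0, 2, n),
})
# Simulated outcome: probability of full-professor rank driven by achievement only.
logit = -4 + 0.12 * df.career_years + 0.3 * df.pubs_per_year + 1.2 * df.nih_funding
df["full_prof"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["female", "career_years", "pubs_per_year", "nih_funding"]])
fit = sm.Logit(df["full_prof"], X).fit(disp=0)
print(np.exp(fit.params))       # adjusted odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals for the aORs
```

In this simulation the gender aOR hovers near 1.0 because the outcome was generated from achievement variables alone, which is the same interpretation the dermatology study draws from its adjusted model.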
Instruction: Does nonylphenol promote the growth of uterine fibroids? Abstracts: abstract_id: PUBMED:24835858 Does nonylphenol promote the growth of uterine fibroids? Objective: To study the effect and mechanism of action of nonylphenol (NP), an environmental oestrogen, on uterine leiomyoma (UL) cells. Methods: Primary culture and subculture of human UL cells, identified as smooth muscle cells by immunocytochemical staining with a monoclonal anti-α-smooth muscle actin antibody, were performed. The viability of cells treated with various concentrations of NP for 24, 48 and 72 h was determined by CCK-8 assay. mRNA expression of oestrogen receptor α (ERα), insulin-like growth factor 1 (IGF-1) and vascular endothelial growth factor (VEGF) was detected using real-time quantitative polymerase chain reaction, and protein expression was detected using Western blot analysis for all groups. Results: NP promoted the growth of UL cells and expression of ERα, IGF-1 and VEGF; this was positively correlated with the concentration and duration of NP treatment. Conclusion: NP promotes the growth of UL cells. The mechanism of action appears to be over-expression of IGF-1 and VEGF, up-regulated by ERα, resulting in the growth of UL cells. abstract_id: PUBMED:29970676 The effect of TGF-β signaling on regulating proliferation of uterine leiomyoma cell via ERα signaling activated by bisphenol A, octylphenol and nonylphenol in vitro. Objectives: To study the transforming growth factor beta (TGF-β) signaling pathway in its interactions with the estrogen receptor alpha (ERα) signaling pathway mediating the growth of human uterine leiomyoma (UL) activated by phenolic environmental estrogens (EEs). Methods: The subcultured UL cells were used to validate the effect of TGF-β3 on the viability of human UL cells using the CCK-8 assay, mRNA expressions of ERα and c-fos by the quantitative reverse transcription polymerase chain reaction method, and expressions of p-Smad3, SnoN, and c-fos proteins by Western blot assay in each treatment group. Results: Compared with treatment with each EE or TGF-β3 alone, a slight decrease in the proliferation rate of UL cells was detected in the coexistence of each EE with TGF-β3. Interestingly, mRNA expressions of ERα and c-fos were reduced in the setting of coexistence of TGF-β3 and EEs. Notably, the expression of p-Smad3 and c-fos proteins significantly decreased in each of the E2, bisphenol A (BPA), nonylphenol (NP), and octylphenol (OP) groups, and the expression of SnoN protein was significantly reduced only in the BPA and NP groups, followed by TGF-β3 treatment. With the overlaid action of ICI 182,780, the expression of p-Smad3 protein significantly increased in the OP group, but only slightly increased in the E2, BPA, NP, and OP groups. However, compared with the control group, the expression of SnoN and c-fos proteins significantly decreased in the same setting. Conclusion: Both the ERα signaling pathway and the TGF-β signaling pathway have different roles in governing UL cell proliferation. The phenolic EEs can be a promoter of the proliferation of UL cells, which is mediated by the ERα signaling pathway and its cross-talk with the TGF-β signaling pathway. abstract_id: PUBMED:32590215 Determination of urinary carnitine levels as a potential indicator of uterine fibroids caused by nonylphenol exposure.
Our previous studies have shown that uterine fibroids are associated with nonylphenol (NP) exposure, and the changes of carnitines in critical reproductive tissues and body fluids could be used to indicate the female reproductive toxicity caused by NP exposure. In this work, on the basis of further clarifying the correlation between NP exposure level and uterine fibroids, the possibility of the urinary carnitine levels as a potential indicator of uterine fibroids caused by NP exposure was discussed. The urine samples were collected from 84 female volunteers: the control group of 34 healthy women without gynecological disease and 50 uterine fibroids patients, respectively. Methods were respectively established for the determination of NP and eight carnitines in human urine samples by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). The results showed that the NP level of uterine fibroids group was significantly higher than that of control group (P = 0.002), indicating that NP exposure was an important environmental factor in the occurrence of uterine fibroids. It was further found that in urine samples of the uterine fibroids group, the levels of L-Carnitine (C0), L-Acetyl-carnitine (C2), L-Octanoyl-carnitine (C8), Tetradecanoyl-carnitine (C14), Oleoyl-carnitine (C18:1) and Linoleoyl-carnitine (C18:2) had obviously increased compared with those in the control group (P &lt; 0.001; &lt; 0.001; &lt; 0.001; = 0.003; &lt; 0.001; = 0.010). The concentrations of L-Hexanoyl-carnitine (C6) and L-Palmitoyl-carnitine (C16) in the uterine fibroids group were also higher than those in the control group, although the difference was not statistically significant (P &gt; 0.05). The results suggested that the changes in urinary carnitine levels might be a potential indicator to help to warn of the risk of uterine fibroids caused by NP exposure at the early stage. abstract_id: PUBMED:26224478 Transforming growth factor-β signaling pathway cross-talking with ERα signaling pathway on regulating the growth of uterine leiomyoma activated by phenolic environmental estrogens in vitro. The aim of this paper is to study the participation of transforming growth factor-β (TGF-β) signaling pathway in mediating the growth of human uterine leiomyoma (UL) activated by phenolic environmental estrogens (EEs), via the interaction between TGF-β and ER signaling pathways. The UL cells were prepared by primary culture and subculture methods. To validate the role of TGF-β3 (5 ng/ml) for the viability of human uterine leiomyoma cells, CCK-8 assay was performed in each of five treatment groups including E2 group (E2 10(9) mol/l), BPA group (bisphenol A 10 μmol/l), NP group (nonylphenol 32 μmol/l), OP group (octylphenol 8 μmol/l), or control group (DMSO only). Subsequently, qRT-PCR was applied to detect mRNA expressions of ERα and c-fos, while western blot assay was used to test the expressions of p-Smad3, SnoN, and c-fos proteins in all settings mentioned above; the expressions were compared among different groups, and also in settings with and without synchronous treatment of ICI 182,780. Primarily cultured UL cells were successfully established. Compared with the control group, there were statistically significant increases in the proliferation rate of the UL cells in all EE groups or treated with TGF-β3 only (p &lt; 0.05). Nevertheless, a slight decrease in proliferation rate of UL was detected in coexistence with TGF-β3 in all EE groups (p &gt; 0.05). 
Interestingly, mRNA expressions of ERα and c-fos reduced in the setting of coexistence of TGF-β3 and EEs compared to isolated EE treatment (p &lt; 0.05). Compared with the control group, the expression of p-Smad3 and c-fos proteins significantly decreased (p &lt; 0.05) in each of E2, BPA, NP, and OP group, and the expression of SnoN protein also significantly reduced only in BPA and NP groups (p &lt; 0.05), followed by TGF-β3 treatment. When adding ICI 182,780, the expression of p-Smad3 protein significantly increased in OP group (p &lt; 0.05), but slightly increased in E2, BPA, NP, and OP groups (p &gt; 0.05). However, compared with the control group, the expressions of SnoN and c-fos proteins significantly decreased (p &lt; 0.05) after adding ICI182,780. Moreover, there was a significant statistical difference in the expression of p-Smad3, SnoN, and c-fos proteins between pre- and post-treatment of ICI 182,780 in all groups (p &lt; 0.05). The ERα signaling pathway and TGF-β signaling pathway have different roles in the control of UL cell proliferation. The phenolic EEs can be a promoter of UL cell proliferation, which is mediated by ERα signaling pathway and its cross-talking with TGF-β signaling pathway. Both less exposure to EEs and blockade of TGF signaling pathway are necessary strategies to prevent UL. abstract_id: PUBMED:26842765 Phenolic environmental estrogens in urine and blood plasma from women with uterine leiomyoma: Epidemiological survey. Aim: To explore the effect of phenolic environmental estrogens (EE) on women with uterine leiomyoma (UL). Methods: Urine and blood plasma samples were collected from 300 patients diagnosed with UL at the Affiliated Zhongda Hospital of Southeast University between December 2013 and December 2014. Control urine and blood plasma samples were collected from 300 women who are either patients without UL or healthy volunteers presenting to the same hospital for physical examination during the same period. Bisphenol A (BPA), nonylphenol (NP) and octylphenol (OP) concentration in these samples was measured using solid phase extraction (SPE) coupled with liquid chromatography-tandem mass spectrometry. Results: The OP concentration in urine and blood plasma was significantly higher in the UL group compared with the control group (r = 0.224, P = 0.001). Urine BPA concentration was not significantly different between the UL group and the control group (r = 0.009, P = 0.896). There was also no statistically significant difference in urine NP concentration between the two groups (r = 0.057, P = 0.419). On logistic regression, exposure concentration of urine BPA (OR, 1.129; 95%CI: 1.081-1.179) and NP (OR, 1.165; 95%CI: 1.025-1.324) was associated with UL genesis (P &lt; 0.05). Nevertheless, there was no significant difference in blood plasma concentration of BPA, OP and NP between the two groups (P &gt; 0.05). Conclusion: Urine and blood plasma EE exposure levels in women, especially the urine level, was related to the incidence of UL. abstract_id: PUBMED:24060595 Measurement of phenolic environmental estrogens in human urine samples by HPLC-MS/MS and primary discussion the possible linkage with uterine leiomyoma. A method was established for the determination of three phenolic environmental estrogens, namely bisphenol A (BPA), nonylphenol (NP) and octylphenol (OP), in urine from women of uterine leiomyoma group (n=49) and control group (n=29), by using solid-phase extraction (SPE) coupled with liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). 
Urine samples were spiked with 2,4,6-tribromophenyl-terminated tetrabromobisphenol-A carbonate oligomer (TBBPA) and nonylphenol D8 (NP-D8) as internal standard (I.S.) and de-conjugated by adding β-glucuronidase and sulfatase before the SPE. The extraction recoveries of BPA, NP and OP were more than 73.3%; the standard curve was linear over the validated concentrations in the range of 1.0-100.0ng/mL and the limits of detection (LOD) of BPA, NP and OP were 0.32ng/mL, 0.18ng/mL and 0.15ng/mL, respectively. Moreover, by analysing quality control urine samples in 5 days, the results showed that the method was precise and accurate, for the intra- and inter-day CV% within 15.2%. Except that OP was not found (&lt;LOQ) in any of the control urine samples, the three phenolic environmental estrogens were detected in all urine samples. For the uterine leiomyoma women, the mean concentrations of BPA, NP and OP were 13.9±12.7ng/mL, 2.77±2.22ng/mL and 4.09±5.51ng/mL (mean±SD), respectively. For the control group, the mean concentrations of BPA and NP were 8.50±12.2ng/mL and 3.84±3.90ng/mL (mean±SD), respectively. The Wilcoxon rank sum test was employed for the comparison of BPA and NP between and control in 2 subgroups defined by the number of gravidity (≤3 and &gt;3). NP levels were significantly higher in uterine leiomyoma patients than control group in low gravidity subgroup. Though BPA levels in experimental and control groups were not significantly different, the mean levels and concentration distribution were different. The study suggested that there is certain relationship between exposure concentrations of phenolic environmental estrogens and uterine leiomyoma disease. abstract_id: PUBMED:33516137 The influence of phenolic environmental estrogen on the transcriptome of uterine leiomyoma cells: A whole transcriptome profiling-based analysis. Objective: The study aimed to recognize potential molecular targets and signal pathways whereby phenolic environmental estrogen promotes the proliferation of uterine leiomyoma cells. Methods: Primary cultured cell lines of uterine leiomyoma were treated with 0.1% DMSO, 10.0μmol/L Bisphenol A (BPA), and 32.0μmol/L Nonylphenol (NP) for 48 h before RNA-seq was performed. Those genes affected by BPA and NP were identified. Then, Gene Ontology (GO) enrichment, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment, and Protein-protein Interaction (PPI) analysis were performed. Quantitative real-time polymerase chain reaction (q-PCR) and western blot were used to verify the differentially expressed gene and protein. Results: Compared to with the control group, 739 differentially expressed genes were identified in both the BPA group and the NP group. GO enrichment analysis showed that the most enriched GO terms were connective tissue development and G1/S transition of mitotic cell cycle, and extracellular matrix. The results of KEGG enrichment analysis showed that differentially expressed mRNA were enriched mainly in three primary pathways, including environmental information processing, human diseases, and cellular processes. The cell cycle, PI3K-Akt signaling pathway are significantly enriched. The q-PCR and western blot verified the cell cycle associated genes and proteins were upregulated in both BPA group and NP group. Both BPA and NP activated the PI3K-AKT signaling pathway. 
Conclusion: Phenolic environmental estrogens may promote the proliferation and cell cycle progression of uterine leiomyoma cells through rapid non-genomic ER signaling, which leads to disordered cell cycle regulation and accelerates the transition of the cell cycle from G0/G1 phase to S phase. In addition, as an external stimulant, phenolic estrogen promotes the upregulation of inflammatory factors in uterine leiomyomas. abstract_id: PUBMED:17505012 Tumor-specific efficacy of transforming growth factor-beta RI inhibition in Eker rats. Purpose: Transforming growth factor beta (TGF-beta), which generally stimulates the growth of mesenchymally derived cells but inhibits the growth of epithelial cells, has been proposed as a possible target for cancer therapy. However, concerns have been raised that whereas inhibition of TGF-beta signaling could be efficacious for lesions in which TGF-beta promotes tumor development and/or progression, systemic pharmacologic blockade of this signaling pathway could also promote the growth of epithelial lesions. Experimental Design: We examined the effect of a TGF-beta inhibitor on mesenchymal (leiomyoma) and epithelial (renal cell carcinoma) tumors in Eker rats, which are genetically predisposed to develop these tumors with a high frequency. Results: Blockade of TGF-beta signaling with the ALK5/type I TGF-beta R kinase inhibitor, SB-525334, was efficacious for uterine leiomyoma; significantly decreasing tumor incidence and multiplicity, and reducing the size of these mesenchymal tumors. However, SB-525334 was also mitogenic and antiapoptotic for epithelial cells in the kidney and exacerbated the growth of epithelial lesions present in the kidneys of these animals. Conclusion: Although pharmacologic inhibition of TGF-beta signaling with SB-525334 may be efficacious for mesenchymal tumors, inhibition of this signaling pathway seems to promote the development of epithelial tumors. abstract_id: PUBMED:11938733 Affect of insulin-like growth factor I and estradiol on the growth of uterine leiomyoma To study roles of insulun-like growth factor 1(IGF-I) and estradiol(E2) in the growth of leiomyoma. Protein expressions of IGF-I receptor(IGF-I-R), estrogen receptor(ER), and cell proliferation associated nuclear antigen(ki-67) were analysed by streptavidin-peroxide method in 40 cases. Serum level and local concentration of IGF-I were determined by immunoradiometric assay(IRMA) in 20 cases. Serum level of estradiol was determined by radioimmunoassay in 20 cases. Twenty normal fertile women were studied as control. Protein expressions of IGF-I-R and ER in uterine leiomyoma were significantly higher than in myometrium(P &lt; 0.05); and local concentration of IGF-I positively correlated with ki-67 (r = 0.6513, P &lt; 0.05); and expression of estrogen receptor. The serum levels of IGF-I and E2 had no significant difference between leiomyoma patients and normal women. IGF-I and E2 promote the growth of leiomyoma and IGF-I may act as a mediator of estrogen. abstract_id: PUBMED:24255718 Measurement of phenolic environmental estrogens in women with uterine leiomyoma. Objectives: To investigate the effect of phenolic environmental estrogens on uterine leiomyoma from the perspective of clinical epidemiology. Methods: Urine and blood samples were collected from Han women with uterine leiomyoma and women without uterine leiomyoma, living in Nanjing, China, between September 2011 and February 2013. 
A total of 156 urine samples and 214 blood samples were collected from the uterine leiomyoma group and 106 urine samples and 126 blood plasma samples from the control group. Bisphenol A (BPA), nonylphenol (NP) and octylphenol (OP) concentrations were determined by solid-phase extraction (SPE) coupled with liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). Results: Phenolic environmental estrogens in the uterine leiomyoma and control groups were compared based on: gravida&gt;3 and gravida ≤ 3. In participants with gravida&gt;3, urine OP concentration was significantly (P&lt;0.05) higher in the uterine leiomyoma group than in the control group. In participants with gravida ≤ 3, urine NP concentration was significantly (P&lt;0.05) higher in the uterine leiomyoma group compared to controls. Despite obstetric history, urine BPA mean exposure concentration was significantly (P&lt;0.05) different between uterine leiomyoma group and control group. The urine BPA concentration was not significantly (P&gt;0.05) different between gravida&gt;3 and gravida ≤ 3 patients. There was no significant (P&gt;0.05) difference in plasma concentrations of BPA, OP and NP between the leiomyoma group and control group. Mean exposure concentration and range of distribution of BPA, OP and NP plasma concentration differed between the uterine leiomyoma and control group. Conclusion: Exposure level of phenolic environmental estrogens in human was related with leiomyoma tumorigenesis. Answer: Yes, nonylphenol (NP) has been shown to promote the growth of uterine fibroids, also known as uterine leiomyoma (UL) cells. Studies have demonstrated that NP, an environmental estrogen, can enhance the viability and proliferation of UL cells. The mechanism of action appears to involve the over-expression of insulin-like growth factor 1 (IGF-1) and vascular endothelial growth factor (VEGF), which are up-regulated by estrogen receptor alpha (ERα), resulting in the growth of UL cells (PUBMED:24835858). Additionally, NP exposure has been associated with increased urinary levels of carnitines, which could potentially serve as indicators of uterine fibroids caused by NP exposure (PUBMED:32590215). Furthermore, the ERα signaling pathway, which can be activated by NP, has been shown to cross-talk with the transforming growth factor-beta (TGF-β) signaling pathway, influencing the proliferation of UL cells (PUBMED:29970676; PUBMED:26224478). Epidemiological surveys have also found a relationship between the exposure levels of NP and the incidence of UL, suggesting that NP is an important environmental factor in the occurrence of uterine fibroids (PUBMED:26842765; PUBMED:24060595; PUBMED:24255718).
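Several of the comparisons cited in this entry (for example, the case-control contrast of urinary NP in PUBMED:24060595) rest on the Wilcoxon rank sum test. As an illustration only, the following Python sketch shows how such a two-group comparison can be run with SciPy; the concentration values are invented for demonstration and are not data from any of the cited studies.

    # Illustrative Wilcoxon rank-sum comparison of urinary nonylphenol (ng/mL)
    # between a hypothetical fibroid group and a hypothetical control group,
    # mirroring the analysis style of PUBMED:24060595. Values are invented.
    from scipy.stats import ranksums

    fibroid_ng_ml = [2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.6, 4.8]
    control_ng_ml = [1.2, 2.0, 1.8, 2.5, 1.6, 2.2, 1.9, 2.4]

    stat, p_value = ranksums(fibroid_ng_ml, control_ng_ml)
    print(f"rank-sum statistic = {stat:.3f}, p = {p_value:.4f}")

A p-value below 0.05 in such a comparison would be reported as a significant between-group difference, as in the low-gravidity subgroup result described above.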
Instruction: Is there an anti-androgen withdrawal syndrome for enzalutamide? Abstracts: abstract_id: PUBMED:24691670 Is there an anti-androgen withdrawal syndrome for enzalutamide? Background: The anti-androgen withdrawal syndrome (AAWS) can be seen in one-third of patients after discontinuation of first-generation non-steroidal anti-androgen therapy. With the introduction of new agents for anti-androgen therapy as well as alternate mechanisms of action, new therapeutic options before and after docetaxel chemotherapy have arisen (Ohlmann et al. in World J Urol 30(4):495-503, 2012). The question regarding the occurrence of an enzalutamide withdrawal syndrome (EWS) has not been evaluated yet. In this study, we assess prostate-specific antigen (PSA) response after discontinuation of enzalutamide. Methods: In total, 31 patients with metastatic castration-resistant prostate cancer (mCRPC) underwent enzalutamide withdrawal and were evaluated. Data were gathered from 6 centres in Germany. Patients with continuous oral administration of enzalutamide with rising serum PSA levels were evaluated, starting from enzalutamide withdrawal until subsequent therapy was initiated, follow-up ended or death of the patient occurred. Statistical evaluation was performed applying one-sided binomial testing using R-statistical software, version 3.0.1. Results: Mean withdrawal follow-up was 6.5 weeks (range 1-26.1 weeks). None of the 31 patients showed a PSA decline. Mean relative PSA rise over all patients was 73.9 % (range 0.5-440.7 %) with a median of 44.9 %. Conclusions: If existent, an AAWS is at least very rare for enzalutamide in patients with mCRPC after taxane-based chemotherapy and does not play a clinical role in this setting. This may be attributed to the different pharmacodynamics of enzalutamide. Longer duration of therapy or a longer withdrawal interval may reveal a rare EWS in the future. abstract_id: PUBMED:26332122 Similarities and Distinctions in Actions of Surface-Directed and Classic Androgen Receptor Antagonists. The androgen receptor (AR) surface-directed antagonist MJC13 inhibits AR function and proliferation of prostate cancer (PC) cells. These effects are related to arrest of an AR/chaperone complex in the cytoplasm. Here, we compared MJC13 and classic AR antagonists such as flutamide and bicalutamide. Microarray analysis and confirmatory qRT-PCR reveal that MJC13 and flutamide inhibit dihydrotestosterone (DHT)-dependent genes in LNCaP PC cells. Both compounds are equally effective on a genome-wide basis and as effective as second-generation AR antagonists (MDV3100, ARN-509) at selected genes. MJC13 inhibits AR binding to the prostate-specific antigen (PSA) promoter more strongly than flutamide, consistent with different mechanisms of action. Examination of the efficacy of MJC13 in conditions that reflect aspects of castration-resistant prostate cancer (CRPC) reveals that it inhibits flutamide activation of an AR mutant (T877A) that emerges during flutamide withdrawal syndrome, but displays greatly restricted gene-specific activity in 22Rv1 cells that express a constitutively active truncated AR, and is inactive against the glucocorticoid receptor (GR), which can co-opt androgen-dependent signaling networks in CRPC. Importantly, MJC13 inhibits AR interactions with SRC2 and β-catenin in the nucleus and, unlike flutamide, strongly inhibits amplification of AR activity obtained with transfected SRC2 and β-catenin. MJC13 also inhibits DHT- and β-catenin-enhanced cell division in LNCaP cells.
Thus, a surface-directed antagonist can block AR activity in some conditions in which a classic antagonist fails, and may display utility in particular forms of CRPC. abstract_id: PUBMED:32929792 Impact of antiandrogen withdrawal syndrome in castration-resistant prostate cancer patients treated with abiraterone or enzalutamide. Objectives: To assess the impact of antiandrogen withdrawal syndrome after bicalutamide withdrawal in castration-resistant prostate cancer patients treated with androgen receptor-axis targeted agents. Methods: The study cohort comprised 94 patients treated with abiraterone (n = 34) or enzalutamide (n = 60) as a first-line androgen receptor-axis targeted agent for castration-resistant prostate cancer despite combined androgen blockade by castration with bicalutamide as the first-line therapy. The association between clinicopathological factors (including antiandrogen withdrawal syndrome) and therapeutic outcome after using abiraterone and enzalutamide was investigated. Results: The decline in the prostate-specific antigen level after use of abiraterone or enzalutamide was comparable between patients with and without antiandrogen withdrawal syndrome. Antiandrogen withdrawal syndrome (hazard ratio 3.84, 95% confidence interval 1.29-11.45; P = 0.016) was associated with a higher risk of progression on multivariate analysis, but not with all-cause death after abiraterone use. Progression-free survival and overall survival after enzalutamide use did not differ between patients with and without antiandrogen withdrawal syndrome. Conclusions: The present data suggest a modest therapeutic efficacy of abiraterone in castration-resistant prostate cancer patients with anti-androgen withdrawal syndrome after bicalutamide withdrawal. abstract_id: PUBMED:32029552 Modeling Acquired Resistance to the Second-Generation Androgen Receptor Antagonist Enzalutamide in the TRAMP Model of Prostate Cancer. Enzalutamide (MDV3100) is a potent second-generation androgen receptor antagonist approved for the treatment of castration-resistant prostate cancer (CRPC) in chemotherapy-naïve patients as well as in patients previously exposed to chemotherapy. However, resistance to enzalutamide and enzalutamide withdrawal syndrome have been reported. Thus, reliable and integrated preclinical models are required to elucidate the mechanisms of resistance and to assess therapeutic settings that may delay or prevent the onset of resistance. In this study, the prostate cancer multistage murine model TRAMP and TRAMP-derived cells have been used to extensively characterize in vitro and in vivo the response and resistance to enzalutamide. The therapeutic profile as well as the resistance onset were characterized, and a multiscale stochastic mathematical model was proposed to link the in vitro and in vivo evolution of prostate cancer. The model showed that all therapeutic strategies that use enzalutamide result in the onset of resistance. The model also showed that combination therapies can delay the onset of resistance to enzalutamide and, in the best scenario, can eliminate the disease. These results set the basis for the exploitation of this "TRAMP-based platform" to test novel therapeutic approaches and build further mathematical models of combination therapies to treat prostate cancer and CRPC. Significance: Merging mathematical modeling with experimental data, this study presents the "TRAMP-based platform" as a novel experimental tool to study the in vitro and in vivo evolution of prostate cancer resistance to enzalutamide.
abstract_id: PUBMED:28971898 Antiandrogen withdrawal syndrome (AAWS) in the treatment of patients with prostate cancer. Antiandrogen withdrawal syndrome is an unpredictable event diagnosed in patients with hormone-sensitive prostate cancer treated with combined androgen blockade therapy. It is defined by prostate-specific antigen value reduction, occasionally associated with a radiological response, that occurs 4-6 weeks after first-generation antiandrogen therapy discontinuation. New-generation hormonal therapies, such as enzalutamide and abiraterone acetate, improved the overall survival in patients with metastatic castration-resistant prostate cancer, and recent trials have also shown the efficacy of abiraterone in hormone-sensitive disease. In the last few years, several case reports and retrospective studies suggested that the withdrawal syndrome may also occur with these new drugs. This review summarizes literature data and hypothesis about the biological rationale underlying the syndrome and its potential clinical relevance, focusing mainly on new-generation hormonal therapies. Several in vitro studies suggest that androgen receptor gain-of-function mutations are involved in this syndrome, shifting the antiandrogen activity from antagonist to agonist. Several different drug-specific point mutations have been reported. The association of the withdrawal syndrome for enzalutamide and abiraterone needs confirmation by additional investigations. However, new-generation hormonal therapies being increasingly used in all stages of disease, more patients may experience the syndrome when stopping the treatment at the time of disease progression, although the clinical relevance of this phenomenon in the management of metastatic castration-resistant prostate cancer remains to be defined. abstract_id: PUBMED:25563847 Switching and withdrawing hormonal agents for castration-resistant prostate cancer. The antiandrogen withdrawal syndrome (AAWS) is characterized by tumour regression and a decline in serum PSA on discontinuation of antiandrogen therapy in patients with prostate cancer. This phenomenon has been best described with the withdrawal of the nonsteroidal antiandrogens, bicalutamide and flutamide, but has also been reported with a wide range of hormonal agents. Mutations that occur in advanced prostate cancer and induce partial activation of the androgen receptor (AR) by hormonal agents have been suggested as the main causal mechanism of the AAWS. Corticosteroids, used singly or in conjunction with abiraterone, docetaxel and cabazitaxel might also be associated with the AAWS. The discovery of the Phe876Leu mutation in the AR, which is activated by enzalutamide, raises the possibility of withdrawal responses to novel hormonal agents. This Review focusses on the molecular mechanisms responsible for withdrawal responses, the role of AR mutations in the development of treatment resistance, and the evidence for the sequential use of antiandrogens in prostate cancer therapy. The implications of AR mutations for the development of novel drugs that target the AR are discussed, as are the challenges associated with redefining the utility of older treatments in the current therapeutic landscape. abstract_id: PUBMED:24094574 No androgen withdrawal syndrome for enzalutamide: a report of disease dynamics in the postchemotherapy setting. N/A abstract_id: PUBMED:24210913 Re: Christoph A. Von Klot, Markus A. Kuczyk, Axel S. Merseburger. 
No androgen withdrawal syndrome for enzalutamide: a report of disease dynamics in the postchemotherapy setting. Eur Urol 2014;65:258-9: Enzalutamide withdrawal syndrome: fact or fiction? N/A abstract_id: PUBMED:24906049 Is there an antiandrogen withdrawal syndrome with enzalutamide? Objective: To examine prostate-specific antigen (PSA) levels after enzalutamide discontinuation to assess whether an antiandrogen withdrawal syndrome (AAWS) exists with enzalutamide. Methods: We retrospectively identified 30 consecutive patients with metastatic prostate cancer who were treated with enzalutamide after docetaxel. Post-discontinuation PSA results were available for all patients and were determined at 2-weekly intervals until starting further anticancer systemic therapy. PSA withdrawal response was defined as a PSA decline by ≥50% from the last on-treatment PSA, with a confirmed decrease ≥3 weeks later. Patient characteristics were evaluated in relation to the AAWS using univariate logistic regression analysis. Results: The median (range) patient age was 70.5 (56-86) years and the median (range) follow-up was 9.0 (0.5-16) months. The most common metastatic sites were the bone (86.7%) and lymph nodes (66.7%). Most patients (70%) had previously received abiraterone and 12 patients (40%) had also received cabazitaxel. The median (range) treatment duration with enzalutamide was 3.68 (1.12-21.39) months. PSA levels after enzalutamide withdrawal were monitored for a median (range) time of 35 (10-120) days. Only one patient (3.3%) had a confirmed PSA response ≥50% after enzalutamide discontinuation. One patient (3.3%) had a confirmed PSA response of between 30 and 50% and another patient (3.3%) had an unconfirmed PSA response of between 30 and 50%. The median overall survival was 15.5 months (95% CI 8.1-24.7). None of the factors analysed in the univariate analysis were significant predictors of PSA decline after enzalutamide discontinuation. Conclusions: This retrospective study provides the first evidence that enzalutamide may have an AAWS in a minority of patients with metastatic castration-resistant prostate cancer. Further studies are needed to confirm the existence of an enzalutamide AAWS and to assess its relevance in prostate cancer management. abstract_id: PUBMED:25683877 Enzalutamide withdrawal syndrome: is there a rationale? N/A Answer: The existence of an anti-androgen withdrawal syndrome (AAWS) for enzalutamide is a topic of investigation, and the evidence appears to be mixed. One study involving 31 patients with metastatic castration-resistant prostate cancer (mCRPC) who underwent enzalutamide withdrawal did not show any prostate-specific antigen (PSA) decline after discontinuation of enzalutamide, suggesting that if an AAWS exists for enzalutamide, it is at least very rare in patients with mCRPC after taxane-based chemotherapy and does not play a clinical role in this setting (PUBMED:24691670). Another retrospective study identified 30 consecutive patients with metastatic prostate cancer treated with enzalutamide after docetaxel and found that only one patient (3.3%) had a confirmed PSA response ≥50% after enzalutamide discontinuation, indicating that there may be an AAWS in a minority of patients with metastatic castration-resistant prostate cancer (PUBMED:24906049). However, the evidence for an AAWS with enzalutamide is not conclusive, and further studies are needed to confirm its existence and assess its relevance in prostate cancer management. 
It is also important to note that the clinical relevance of this phenomenon in the management of metastatic castration-resistant prostate cancer remains to be defined (PUBMED:28971898). In summary, while there is some indication that an AAWS for enzalutamide may occur in a small subset of patients, the phenomenon appears to be rare and its clinical significance is still uncertain.
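A point worth making explicit is that the studies above depend on a precise operational definition of a withdrawal response: a PSA decline of at least 50% from the last on-treatment value, confirmed by a second qualifying measurement at least 3 weeks later (PUBMED:24906049). The minimal Python sketch below encodes that rule; the function name and the (days, PSA) data layout are illustrative assumptions, not taken from any cited study.

    # Classify an antiandrogen-withdrawal PSA response per PUBMED:24906049:
    # a >=50% decline from the last on-treatment PSA, confirmed by a second
    # qualifying value at least 3 weeks (21 days) later. Illustrative only.
    def has_withdrawal_response(last_on_treatment_psa, measurements):
        """measurements: list of (days_since_withdrawal, psa_ng_ml) tuples."""
        threshold = 0.5 * last_on_treatment_psa
        qualifying = [(day, psa) for day, psa in sorted(measurements)
                      if psa <= threshold]
        # Require a confirmatory qualifying value >= 21 days after an earlier one.
        for i, (first_day, _) in enumerate(qualifying):
            if any(day - first_day >= 21 for day, _ in qualifying[i + 1:]):
                return True
        return False

    # Hypothetical patients: PSA 40 ng/mL at withdrawal.
    print(has_withdrawal_response(40.0, [(14, 22.0), (28, 18.0), (56, 19.5)]))  # True
    print(has_withdrawal_response(40.0, [(14, 30.0), (28, 35.0)]))              # False

Under this definition, only 1 of 30 patients in PUBMED:24906049 qualified, which is why the reported withdrawal effect for enzalutamide is described as rare.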
Instruction: Can MRI rule out bladder and rectal invasion in cervical cancer to help select patients for limited EUA? Abstracts: abstract_id: PUBMED:16310245 Can MRI rule out bladder and rectal invasion in cervical cancer to help select patients for limited EUA? Objectives: Although invasion of the bladder or rectum is rare in cervical carcinoma, endoscopic assessment of both organs is part of the standard FIGO clinical staging system, with an associated increase in cost and risk of complications. Our objective was to evaluate whether MRI could be used to select patients who did not require invasive staging of the bladder or rectum. Methods: Two observers, blinded to the results of cystoscopy and endoscopic examination of the rectum, retrospectively reviewed the MR images of 112 patients with cervical carcinoma. A 5-point invasion score was used to determine bladder and rectal invasion (1 = no invasion, 5 = definite invasion). A confidence score of 3 or above was used to identify patients with possible bladder or rectal involvement. The results of cystoscopy and endoscopic examination of the rectum were recorded and correlated with the MR findings. Results: MRI was negative for both bladder and rectal invasion in 94/112 patients. Cystoscopy and endoscopic examination of the rectum were confirmed to be normal in all 94 cases. MRI identified 12 patients with possible rectal invasion, 2 confirmed at endoscopy. MRI identified 14 patients with possible bladder invasion, one confirmed at cystoscopy. Using a low threshold cut-off score of >3 to predict invasion resulted in a 100% negative predictive value (NPV) in detection of bladder and rectal invasion. Conclusion: The absence of bladder or rectal invasion can be diagnosed with sufficient confidence using an MRI scoring system to safely obviate the need for invasive cystoscopic or endoscopic staging in the majority of patients with cervical cancer. This could potentially lead to a reduction in staging costs and morbidity. abstract_id: PUBMED:22195331 Reliability of magnetic resonance imaging for bladder or rectum invasion in cervical cancer. Objective: To find out whether a negative finding on magnetic resonance imaging (MRI) can rule out bladder or rectum mucosal invasion safely without cystoscopy or sigmoidoscopy. Study Design: We retrospectively reviewed the medical records of patients with cervical cancer between January 2005 and December 2009. MRI findings on the bladder or rectum were classified as follows: (1) definite evidence of mucosal invasion, (2) suggestive of mucosal invasion or (3) no evidence of mucosal invasion. We correlated MRI findings with the cystoscopy and sigmoidoscopy findings. Results: A total of 257 patients were enrolled in this study. Sixteen patients had at least suspicious bladder invasion, and 1 patient had definite rectal invasion on MRI. Of these 17 patients, 6 patients had actual bladder mucosa invasion, and 1 patient had rectal mucosa invasion. The remaining 240 had negative MRI, cystoscopy and sigmoidoscopy findings. The MRI sensitivity, specificity, positive predictive value, negative predictive value and accuracy for bladder or rectum mucosal invasion were 100%, 96.0%, 41.25%, 100% and 96.1%, respectively. Conclusion: Skipping cystoscopy or sigmoidoscopy based on the absence of invasion on MRI is safe enough without concern for understaging. abstract_id: PUBMED:27008430 The prognostic value of rectal invasion for stage IVA uterine cervical cancer treated with radiation therapy.
Background: The prognostic value of rectal invasion is still unclear in stage IVA cervical cancer. The objective of this study is to evaluate patient outcome and prognostic factors in stage IVA cervical cancer treated with radiation therapy. Methods: A retrospective review of the medical records of patients treated with definitive photon radiation therapy for pathologically proven stage IVA cervical cancer between 1980 and 2010 was performed. Eligible patients for the present study were diagnosed with clinical stage IVA cervical cancer by cystoscopy and/or proctoscopy, and they received definitive radiation therapy consisting of a combination of external beam radiotherapy and high-dose-rate brachytherapy. All patients underwent CT scans of the abdomen and pelvis. Results: Among the 67 stage IVA patients studied, 53 patients were stage IVA on the basis of bladder invasion, 7 according to rectal mucosal invasion, and 7 because of both bladder and rectal mucosal invasion. Median follow-up of all patients and of surviving patients was 19 months (range, 2-235 months) and 114 months (range, 14-223 months), respectively. The 5-year local control (LC), disease-free survival (DFS), and overall survival (OS) rates were 55%, 17%, and 24%, respectively. Rectal invasion had a significant impact on DFS, but bladder invasion had the opposite effect (p = 0.00006 and 0.005, respectively). There were significant differences in LC, DFS and OS rates between patients with and without rectal invasion (p = 0.006, 0.00006 and 0.05, respectively). Conclusions: Patients with stage IVA cervical cancer had a poor prognosis, with 5-year survival of only 24%. Furthermore, in stage IVA, rectal invasion might be a worse prognostic factor than bladder invasion. abstract_id: PUBMED:20846984 Prognostic significance of MRI-detected bladder muscle and/or serosal invasion in patients with cervical cancer treated with radiotherapy. In cervical cancer, the prognostic significance of bladder wall invasion on MRI without pathological evidence of mucosal invasion is not known. From 454 consecutive patients with cervical cancer who were treated with radiation, we reviewed images and analysed the outcome of 92 patients with International Federation of Gynecology and Obstetrics (FIGO) stage IIIB-IVA disease. We analysed the patients in three groups, normal, wall (muscle and/or serosal) invasion and mucosal invasion, according to the findings on the MRI. Kaplan-Meier life table analysis and the log-rank test were used to assess the survival rates and differences according to prognostic factors. MRI detected abnormalities in the bladder wall in 42 patients (45.6%): wall invasion in 24 and mucosal invasion in 18. Five of the 18 patients suspected on MRI to have mucosal invasion showed no pathological evidence of mucosal invasion. The median follow-up period was 34 months. Three-year cause-specific survival (CSS) in the normal group compared with the wall invasion group was 76.2% vs 71.4% (p = 0.48). Three-year CSS for the wall invasion group compared with the mucosal invasion group was 71.4% vs 54.3% (p = 0.04). Mucosal invasion on MRI (p = 0.03) and concurrent chemoradiotherapy (p = 0.01) were significant for CSS. The prognosis for patients with cervical cancer with evidence of muscle and/or serosal invasion of the bladder on MRI may not differ from that for patients without abnormality on MRI.
In patients with the MRI finding of bladder mucosal invasion, further studies should be conducted regarding the role of cystoscopy to determine the need for pathological confirmation. abstract_id: PUBMED:29507691 Locally advanced cervical cancer with bladder invasion: clinical outcomes and predictive factors for vesicovaginal fistulae. Objective: We report outcomes of cervical cancer patients with bladder invasion (CCBI) at diagnosis, with a focus on the incidence and predictive factors of vesicovaginal fistula (VVF). Results: Seventy-one patients were identified. Twenty-one (30%) had para-aortic nodal involvement. Eight had VVF at diagnosis. With a mean follow-up time of 34.2 months (range: 1.9 months-14.8 years), among 63 patients without VVF at diagnosis, 15 (24%) developed VVF. A VVF occurred in 19% of patients without local relapse (9/48) and 40% of patients with local relapse (6/15). Two-year overall survival (OS), disease-free survival (DFS) and local control rates were 56.4% (95% CI: 44.1-67.9%), 39.1% (95% CI: 28.1-51.4%) and 63.8% (95% CI: 50.4-75.4%), respectively. Para-aortic nodes were associated with poorer OS (adjusted HR = 3.78, P-value = 0.001). In multivariate analysis, anterior tumor necrosis on baseline MRI was associated with VVF formation (63% vs 0% at 1 year, adjusted HR = 34.13, 95% CI: 4.07-286, P-value = 0.001), as was a height of bladder wall involvement of >26 mm (adjusted HR = 5.08, 95% CI: 1.38-18.64, P-value = 0.014). Conclusions: A curative-intent strategy including brachytherapy is feasible in patients with CCBI, with VVF occurrence in 24% of the patients. MRI patterns help predict VVF occurrence. Methods: Patients with locally advanced CCBI treated with (chemo)radiation ± brachytherapy in our institute from 1989 to 2015 were analyzed. Reviews of baseline magnetic resonance imaging (MRI) scans were carried out blind to clinical data, retrieving potential parameters correlated to VVF formation (including necrosis and tumor volume). abstract_id: PUBMED:28757404 Intermediate dose-volume parameters and the development of late rectal toxicity after MRI-guided brachytherapy for locally advanced cervix cancer. Purpose: The dose delivered to the most exposed 2 cm3 (D2cm3) of the rectum and bladder is predictive of late rectal and bladder toxicity in cervix cancer patients. We investigated the predictive value of intermediate doses to the rectum and bladder for late rectal/bladder toxicity after MRI-guided brachytherapy for patients with locally advanced cervix cancer. Methods And Materials: Toxicity was prospectively graded using Common Toxicity Criteria for Adverse Events v4.0 and retrospectively verified for women with Stage IB-IVA cervix cancer treated consecutively with MRI-guided brachytherapy between 2008 and 2013. The minimum equivalent dose in 2 Gy fractions delivered to the most exposed 0.1, 1, 2, 5, and 10 cm3 of the rectum and bladder and the absolute volume of the rectum receiving 55, 60, 65, 70, and 75 Gy3 (V55-75) were collected. The association between dose-volume parameters and Grade 2+ rectal/bladder toxicity was examined using logistic regression. Results: With a median follow-up of 44 months, cumulative incidences of Grade 2+ rectal and bladder toxicity among the 106 patients were 29% and 15% at 3 years, respectively. All the dose-volume parameters were significantly associated with late Grade 2+ rectal and bladder toxicity (p < 0.05), except for bladder D2cm3.
On multivariable logistic regression, both D2cm3 > 70 Gy3 and V55 > 11 cm3 (p < 0.05) were predictive of late Grade 2+ rectal toxicity, with improved model fitting and a higher area under the curve compared with the model with D2cm3 > 70 Gy3 alone. Conclusions: In this study, V55 was predictive of late Grade 2+ rectal toxicity. Adding V55 to D2cm3 improved prediction accuracy. abstract_id: PUBMED:30028181 Contrast-enhanced ultrasonography vs MRI for evaluation of local invasion by cervical cancer. Objective: The purpose of this study is to compare contrast-enhanced ultrasound (CEUS) to MRI for evaluating local invasion of cervical cancer. Methods: A total of 108 patients with cervical cancer were included in this study. All the enrolled patients were Stage IIA2-IVB according to the International Federation of Gynecology and Obstetrics (FIGO) and were treated with volumetric modulated arc therapy. Tumour size in different dimensions was compared between MRI and CEUS. The correlation coefficients (r) between MRI and CEUS for diagnosing local invasion, parametrial extension, and invasion of the vagina, uterine corpus and adjacent organs were assessed. Results: Measurements by MRI and CEUS were strongly correlated in the three dimensions: left-right r = 0.84, craniocaudal r = 0.86 and anteroposterior r = 0.88. Vaginal and parametrial invasion were detected by both MRI and CEUS with moderate concordance, and invasion of the uterine corpus, bladder and rectum with good concordance. Conclusion: CEUS is comparable to MRI for measuring tumour size, with good concordance for evaluating invasion of cervical cancer. Advances In Knowledge: CEUS is a less expensive non-invasive modality for assessment of tumour size and invasion of cervical cancer. abstract_id: PUBMED:8290183 Transvaginal ultrasonographic diagnosis of bladder-wall invasion in patients with cervical cancer. Objective: To evaluate the use of transvaginal ultrasonography for diagnosing invasion of the bladder by cervical cancer. Methods: Twenty-one women with stages Ib-IIIb cervical cancer underwent radical hysterectomy or staging laparotomy. All had computed tomography (CT) scans and cystoscopic examinations, and five also underwent magnetic resonance imaging (MRI). During transvaginal ultrasonography, a transvaginal transducer was inserted into the anterior fornix of the vagina and the bladder wall was studied in the sagittal plane. The moveability of the bladder wall was assessed by the ability of the bladder to slide along the uterine cervix when the probe was pushed up against the bladder from the anterior fornix. Moveability was considered to indicate an intact bladder wall. Results: The accuracy of transvaginal ultrasonography was superior to that of the other methods for detecting bladder-wall invasion by cervical cancer. The accuracy was 95% for transvaginal ultrasonography, 76% for CT, 86% for cystoscopy, and 80% for MRI. Conclusion: Transvaginal ultrasonographic examination is useful for detecting invasion of the bladder wall by cervical cancer.
Background: The GZP6 high-dose-rate brachytherapy system has recently been introduced to a number of radiation therapy departments in Iran for the treatment of various tumor sites such as the cervix and vagina. Materials And Methods: Our analysis was based on dose measurements for 40 insertions in 28 patients treated by a GZP6 unit between June 2009 and November 2010. Treatments consisted of combined teletherapy and intracavitary brachytherapy. In vivo dosimetry was performed with TLD-400 chips and TLD-100 microcubes in the rectum and bladder. Results: The average maximum rectal and bladder dose values were found to be 7.62 Gy (range 1.72-18.55 Gy) and 5.17 Gy (range 0.72-15.85 Gy), respectively. It has been recommended by the ICRU that the maximum dose to the rectum and bladder in intracavitary treatment of vaginal or cervical cancer should be lower than 80% of the prescribed dose to point A in the Manchester system. In this study, of the total number of 40 insertions, the maximum rectal dose in 29 insertions (72.5% of treatment sessions) and the maximum bladder dose in 18 insertions (45% of treatment sessions) were higher than 80% of the prescribed dose to the point of dose prescription. Conclusion: In vivo dosimetry for patients undergoing treatment with the GZP6 brachytherapy system can be used to evaluate the quality of brachytherapy treatments delivered by this system. This information could be used as a basis for developing the treatment strategy for patients treated with the GZP6 system. abstract_id: PUBMED:25892290 A literature review of the association between diffusion-weighted MRI derived apparent diffusion coefficient and tumour aggressiveness in pelvic cancer. Diffusion-weighted magnetic resonance imaging (DW-MRI) is used extensively to improve tumour detection and localization because it offers excellent soft tissue contrast between malignant and non-malignant tissues. It also provides a quantitative biomarker; the apparent diffusion coefficient (ADC) can be derived from DW-MRI sequences using multiple diffusion weightings. ADC reflects the tumour microenvironment, e.g. cell membrane integrity and cellularity, and has potential for reporting on tumour aggressiveness. This review focuses on the use of the DW-MRI derived imaging biomarker ADC to reflect tumour aggressiveness and its potential impact in managing pelvic cancer patients. The clinical studies which evaluate the role of ADC in pelvic tumours (prostate, bladder, rectal, ovary, cervix and uterus) are summarized and the evidence linking ADC values with tumour aggressiveness is evaluated. Answer: Yes, MRI can be used to rule out bladder and rectal invasion in cervical cancer, which can help select patients for limited examination under anaesthesia (EUA). Studies have shown that MRI has a high negative predictive value (NPV) for detecting bladder and rectal invasion in patients with cervical cancer. For instance, one study demonstrated that using a low threshold cut-off score of >3 to predict invasion resulted in a 100% NPV for the detection of bladder and rectal invasion, suggesting that the absence of invasion can be diagnosed with sufficient confidence using an MRI scoring system (PUBMED:16310245). Another study reported that MRI had a sensitivity of 100%, specificity of 96.0%, and an NPV of 100% for bladder or rectum mucosal invasion, concluding that skipping cystoscopy or sigmoidoscopy based on the absence of invasion on MRI is safe without concern for understaging (PUBMED:22195331).
Furthermore, contrast-enhanced ultrasonography (CEUS) has been compared to MRI for evaluating local invasion of cervical cancer, and it was found that CEUS is comparable to MRI for measuring tumor size and evaluating invasion, with good concordance for assessing invasion of the uterine corpus, bladder, and rectum (PUBMED:30028181). Additionally, transvaginal ultrasonography has been shown to have a higher accuracy (95%) than other methods, including CT and MRI, for detecting bladder-wall invasion by cervical cancer (PUBMED:8290183). In summary, MRI can effectively rule out bladder and rectal invasion in cervical cancer, which can assist in selecting patients for limited EUA, potentially reducing staging costs and morbidity associated with invasive procedures.
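The headline figures in this answer follow directly from the 2x2 table implied by PUBMED:22195331: 7 true positives, 10 false positives, 240 true negatives and 0 false negatives among 257 patients. The short sketch below reproduces that arithmetic; note that the raw counts give a PPV of about 41.2%, essentially the 41.25% quoted in the abstract.

    # Diagnostic metrics from the 2x2 table implied by PUBMED:22195331
    # (MRI vs. endoscopy for bladder/rectum mucosal invasion, n = 257).
    tp, fp, tn, fn = 7, 10, 240, 0

    sensitivity = tp / (tp + fn)                   # 1.000 -> 100%
    specificity = tn / (tn + fp)                   # 0.960 -> 96.0%
    ppv = tp / (tp + fp)                           # ~0.412 -> ~41.2%
    npv = tn / (tn + fn)                           # 1.000 -> 100%
    accuracy = (tp + tn) / (tp + fp + tn + fn)     # ~0.961 -> 96.1%

    for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                        ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
        print(f"{name}: {100 * value:.1f}%")

The NPV of 100% (no false negatives) is the quantity that justifies skipping cystoscopy and sigmoidoscopy when MRI shows no invasion.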
Instruction: The Perceval S aortic valve has the potential of shortening surgical time: does it also result in improved outcome? Abstracts: abstract_id: PUBMED:23673064 The Perceval S aortic valve has the potential of shortening surgical time: does it also result in improved outcome? Background: Sutureless aortic valve prostheses have the potential of shortening surgical time. However, whether shorter operative times may also result in improved patient outcomes remains to be established. Methods: One hundred patients underwent minimally invasive isolated aortic valve replacement. Of these, 50 patients received a Perceval (Sorin Group, Saluggia, Italy) bioprosthesis (group P) and 50 patients received a non-Perceval valve (group NP). Results: The group P patients were older (77.5 ± 5.3 versus 71.7 ± 10 years, p = 0.001) and at higher risk (logistic European System for Cardiac Operative Risk Evaluation [EuroSCORE] 9.9 ± 6.5 versus 4.3 ± 1, p = 0.001) than group NP patients. One implant failure occurred in group P (p = 0.5), and conversion to full sternotomy was necessary in 1 patient from each group. Aortic cross-clamp and cardiopulmonary bypass times were 39.4% and 34% shorter in group P (both p < 0.001). Within 30 days, a total of 5 patients died (2 in group P and 3 in group NP, p = 0.5). No significant differences were observed between groups in postoperative arrhythmias and need for pacemaker implantation (p = 0.3 and p = 0.5, respectively). Despite the higher surgical risk, group P patients less frequently required blood transfusion (1.1 ± 1.1 units versus 2.3 ± 2.8 units, p = 0.007), and had a shorter intensive care unit stay (1.9 ± 0.7 versus 2.8 ± 1.9 days, p = 0.002) and a shorter intubation time (9.2 ± 3.6 hours versus 15 ± 13.8 hours, p = 0.01). Group NP patients had a mean prosthesis size significantly smaller than group P (23 ± 2 mm versus 23.9 ± 1.1 mm, p = 0.01). The Perceval valve provided hemodynamic performance comparable to that of non-Perceval valves (mean gradient 8.4 ± 6 mm Hg versus 10 ± 4.9 mm Hg, p = 0.24). Conclusions: Sutureless implantation of the Perceval valve is associated with shorter cross-clamp and cardiopulmonary bypass times, resulting in improved clinical outcome. In addition, it compares favorably with conventional valves in terms of mortality and outcome variables. abstract_id: PUBMED:34412989 Valve-in-Valve Transcatheter Aortic Valve Implantation for the Failing Surgical Perceval Bioprosthesis. Introduction: The Perceval valve has been increasingly used in surgical aortic valve replacement (SAVR) recently due to its ease of implantation. However, we have seen some cases of relatively early haemodynamic failure of the Perceval valve, and these patients may then present for valve-in-valve transcatheter aortic valve implantation (ViV-TAVI). Experience of ViV-TAVI in the Perceval valve is limited. Methods: We report our experience of ViV-TAVI in four cases of early-failing Perceval valves, two with stenosis and two with regurgitation. We also review the literature with regard to ViV-TAVI for this indication. Results: Four patients aged between 66 and 78 years presented with Perceval valve dysfunction an average of 4.6 years following SAVR. All cases underwent Heart Team discussion, and a ViV-TAVI procedure was planned thereafter. Strategies to ensure crossing through the centre of the valve and not outside any portion of the frame were found to be essential. Three patients had self-expanding valves implanted and one had a balloon-expandable prosthesis.
The average aortic valve area (AVA) improved from 0.8 cm2 pre-procedure to 1.5 cm2 post-procedure. The mean gradient (MG) improved from 35.5 mmHg (range 19.7-53 mmHg) pre-procedure to 14.8 mmHg (range 7-30 mmHg) post-procedure. In one patient, an MG of 30 mmHg persisted following valve deployment. There were no significant peri-procedural complications. Conclusions: ViV-TAVI is a useful option for failed Perceval prostheses and appears safe and effective in this small series. Crossing inside the whole frame of the Perceval valve is essential. abstract_id: PUBMED:32728358 Initial experience with the Perceval S sutureless aortic valve. Introduction: Surgical treatment of the aortic valve represents the gold standard, and thus aortic valve replacement (AVR) is one of the most commonly performed cardiac operations. Aim: To evaluate the early outcome of aortic valve replacement with the Perceval S sutureless aortic bioprosthesis. Material And Methods: This was a retrospective analysis of 24 patients (mean age: 71 ± 5 years) who underwent aortic valve replacement with a Perceval S valve. Concomitant coronary artery bypass grafting (CABG) was performed in 9 patients. Patients were evaluated preoperatively, at hospital discharge, and once during follow-up. Results: A total of 15 of 24 patients underwent isolated sutureless aortic valve replacement (mean aortic cross-clamp time: 60 ± 14 minutes; mean bypass time: 90 ± 23 minutes). Coronary bypass grafting was performed in 9 patients (mean aortic cross-clamp time: 78 ± 23 minutes; mean bypass time: 111 ± 31 minutes). Hospital mortality was nil. Mean and peak transvalvular pressure gradients were 10 ± 2 mm Hg and 21 ± 3 mm Hg at follow-up, respectively. Moderate or severe aortic regurgitation did not develop in any patient during the follow-up period. No valve thrombosis, thromboembolic events, or structural valve deterioration were observed. Conclusions: In our experience with sutureless aortic valve replacement, the surgical procedure has been shown to be safe. The early haemodynamic performance seems favourable. Shortening the aortic cross-clamp and bypass times offers advantages, especially in high-risk patients. Minimally invasive access seems to be facilitated. Larger studies are needed to confirm our data and determine the long-term durability of the Perceval S sutureless bioprosthesis. abstract_id: PUBMED:35414814 Perceval S, sutureless aortic valve: cost-consequence analysis. Introduction: Sutureless aortic valve prostheses have the potential of shortening ischemic time. Aim: We conducted the present study to assess the clinical and economic impact of the biological, sutureless, self-expanding Perceval S valve, since the effect of shortened operative times on hospital costs remains unclear. Material And Methods: This is a retrospective analysis. From January 2018 to January 2019, 29 patients underwent isolated aortic valve replacement with the Crown PRT bioprosthetic aortic valve, whereas 35 patients underwent aortic valve replacement with the Perceval S (a self-expanding, sutureless bioprosthesis). Preoperative data, hospital outcome, and health care resource consumption were compared using the χ2 test and t-test. Results: Aortic cross-clamp, cardiopulmonary bypass, and operation times were significantly shorter in the Perceval S group (p < 0.001). Patients in the sutureless group required blood transfusion less frequently (p = 0.03) and had a shorter intensive care unit (ICU) stay (p = 0.01).
Hospital stay (p = 0.57) and pacemaker implantation were similar between groups. The reduction of aortic cross-clamp and extracorporeal circulation times and of ICU stay resulted in reduced resource consumption in the sutureless group. Conclusions: The use of the Perceval S valve is clinically safe and effective. The shorter procedural time in the sutureless group is associated with better clinical outcomes and reduced hospital costs. abstract_id: PUBMED:33169414 Extended hockey stick aortotomy for Perceval valve implantation: How to reshape the aortic root. The Perceval valve is a true sutureless aortic bioprosthesis. Overall, excellent performance has been demonstrated in terms of hemodynamic outcomes, safety, and versatility of use; furthermore, as a sutureless valve option, it has been shown to reduce the surgical burden, shortening operative times and simplifying minimally invasive procedures. Since the valve has a high frame profile, the recommended implantation technique requires a high and transverse aortotomy. In case of unplanned Perceval valve implantation, when an extended aortotomy is required, we have come up with a simple technique to reshape the aortic root before the valve is delivered in place: symmetry is pivotal to prevent folding issues and to improve the annular sealing. Although we discuss an out-of-recommendation use, in our experience the technique has proven safe and effective. abstract_id: PUBMED:37346435 Proximal dislocation of a sutureless Perceval valve after the second aortic valve replacement. Sutureless aortic valve replacement is a widely used technique that facilitates short cross-clamping time and has excellent hemodynamic outcomes. However, it is associated with a risk of paravalvular leakage or migration. We report a surgical case of sutureless aortic valve replacement in a 74-year-old male patient with a history of previous aortic valve replacement. He underwent redo aortic valve replacement with the Perceval valve (Corcym Canada Corp, Vancouver, Canada), which dislocated with moderate-to-severe aortic paravalvular leakage at 3 months after implantation. Our observations suggested that redo aortic valve replacement using a sutureless aortic valve can cause valve dislocation, as it might be difficult to clearly remove calcification and excess tissue and to implant the valve so that it crimps precisely onto the annulus.
This report also highlights the challenges of performing repeat aortic valve replacement after explanting the Perceval valve, which include aortotomy height and annulus identification. Although the Perceval valve can help improve patient prognosis, careful implantation and thorough follow-up examinations are warranted. abstract_id: PUBMED:34917364 Perceval sutureless aortic valve replacement after ascending aortic replacement. In patients with a narrow sinotubular junction, small sinus of Valsalva, or extensibility loss in the aortic root, aortic valve replacement (AVR) with a standard valve is challenging due to the limited surgical field. Detailed preoperative measurements of the aortic root make performing AVR with the Perceval valve easier. abstract_id: PUBMED:38135591 Use of Sutureless Valve in Aortic Root Enlargement. Aim: The small aortic annulus is a surgical challenge in patients undergoing aortic valve replacement, as it may lead to patient-prosthesis mismatch. Management options include aortic root enlargement, aortic root replacement, and the use of sutureless valves. In this case series, we report our results with aortic root enlargement, sutureless valve implantation, and benchtop modelling of the radial forces exerted. Methods: Five patients underwent aortic root enlargement and insertion of the Perceval valve as part of the management strategy to enlarge their effective orifice area. We further investigated this strategy with a benchtop model to quantify the radial forces exerted by the Perceval valve on the aortic annulus. Radial and hoop forces on the aortic annulus and inner ring of the Perceval valve were recorded using a Mylar force tester. Results: Five female patients with native annuli between 18 mm and 20 mm underwent root enlargement and insertion of a Perceval S valve. The postoperative course was uncomplicated for all patients except for one who required permanent pacemaker insertion. Transvalvular pressure gradients remained low at up to 4 years of follow-up (12 mmHg-21 mmHg), with no evidence of paravalvular leak. Benchtop testing demonstrated that the radial forces exerted at the annulus by Perceval S valves of all sizes were within physiological ranges, whereas the compressive forces required to deform the valves were supraphysiological. Conclusions: The deployment of a sutureless valve within a surgically enlarged aortic root is a feasible solution in patients with a small aortic root. abstract_id: PUBMED:36517036 Conventional Prostheses versus Sutureless Perceval for Aortic Valve Replacement: A Meta-Analysis. Purpose: Perceval sutureless valves have gained popularity. Whether this implant performs better than the traditional sutured prosthesis remains unclear. This meta-analysis compared Perceval implants with conventional sutured valves for aortic valve replacement (AVR). Methods: This meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The following databases were accessed: PubMed, Google Scholar, Web of Science, Scopus, and EMBASE. All clinical investigations comparing Perceval versus the conventional prostheses for AVR were considered. Results: The Perceval group demonstrated a higher rate of pacemaker implantation (P < 0.00001). Aortic cross-clamp (ACC) time (P < 0.00001) and cardiopulmonary bypass (CPB) time (P < 0.00001) were shorter in the Perceval group.
Similarity was found in mean and peak pressure gradient (P = 0.8 and P = 0.2, respectively), mean aortic valve area (P = 0.3), length of intensive care unit stay (P = 0.4) and hospital stay (P = 0.2), rate of revision (P = 0.11), hemorrhages (P = 0.05), paravalvular leak (P = 0.3), cerebrovascular complication (P = 0.7), and early mortality (P = 0.06). Conclusion: Given the shorter ACC time and CPB time, Perceval AVR can be an alternative in high-risk patients. The higher rate of pacemaker implantation following Perceval may limit its routine implantation. Answer: The Perceval S aortic valve does have the potential to shorten surgical time, and there is evidence suggesting that it may also result in improved outcomes. A study comparing 50 patients who received a Perceval bioprosthesis with 50 patients who received a non-Perceval valve found that despite the Perceval group being older and at higher risk, they experienced shorter aortic cross-clamp and cardiopulmonary bypass times, required blood transfusions less frequently, and had shorter intensive care unit stays and intubation times. These factors contributed to improved clinical outcomes, and the Perceval valve provided hemodynamic performance comparable to that of non-Perceval valves (PUBMED:23673064). Additionally, a cost-consequence analysis showed that the use of the Perceval S valve was clinically safe and effective, with shorter procedural times associated with better clinical outcomes and reduced hospital costs (PUBMED:35414814). Another study on early experiences with the Perceval S valve indicated that the surgical procedure was safe, with favorable early hemodynamic performance, and that the shorter aortic cross-clamp and bypass times could be particularly advantageous in high-risk patients (PUBMED:32728358). However, it is important to note that while the Perceval valve can improve patient prognosis, careful implantation and thorough follow-up examinations are warranted due to potential risks such as paravalvular leakage or valve migration (PUBMED:37346435). Furthermore, a meta-analysis comparing Perceval sutureless valves with traditional sutured prostheses for aortic valve replacement found that while the Perceval group had shorter aortic cross-clamp and cardiopulmonary bypass times, it also demonstrated a higher rate of pacemaker implantation, which may limit routine use of the valve (PUBMED:36517036). In conclusion, the Perceval S aortic valve has been associated with shorter surgical times and improved clinical outcomes in some studies, but careful consideration of the potential risks and patient-specific factors is necessary.
Instruction: Female and underrepresented minority faculty in academic departments of family medicine: are women and minorities better off in family medicine? Abstracts: abstract_id: PUBMED:11411975 Female and underrepresented minority faculty in academic departments of family medicine: are women and minorities better off in family medicine? Background: Several studies have shown that the percentage of women represented in senior academic positions at US medical schools is lower than the percentage of men in senior positions. Similarly, the percentage of minority faculty members represented in senior academic positions is lower than that of their majority counterparts. This study assessed whether these findings were also present in departments of family medicine and identified any factors related to the institution or department that favored academic success for women and minorities. Methods: Data regarding faculty workforce composition, including faculty rank and rank for women and underrepresented minorities, were extracted from a comprehensive survey of departments of family medicine at US allopathic medical schools. The data are based on the faculty workforce in 1997 and include responses from 58 (51%) of all schools with a department of family medicine. Results: Faculty in departments of family medicine were more likely to be female (41% versus 25%) and an underrepresented minority (9% versus 4%), compared with all academic medicine disciplines. However, women in full-time positions were less likely than men, and minorities were less likely than nonminorities, to be either an associate or full professor. We could find no institutional or departmental characteristics that were associated with academic success for women or minority faculty members. Conclusions: While women and underrepresented minorities are more common in the faculty workforce in family medicine, members of both of these groups are not well represented in senior faculty ranks. abstract_id: PUBMED:11841983 Academic ranks and medical schools of underrepresented minority faculty in family medicine departments. Purpose: To examine academic rankings and educational backgrounds of underrepresented minority (URM) family medicine faculty and compare their academic ranks with national trends. The authors also determined the extent to which international and historically black educational institutions contributed URM faculty to family medicine. Method: In 1999, questionnaires were sent to 129 family medicine departments asking for academic ranks and educational institutions attended by their URM faculty. Comparisons were made between URM faculty's academic ranks and those of all family medicine faculty, medical school minority faculty, and medical school faculty. Results: A total of 80% of URM faculty were assistant professors or instructors, and 4.4% were professors. URM family medicine faculty had significantly lower rankings compared with medical school minority faculty and all family medicine faculty. URM family medicine faculty at historically black medical schools were more likely to have received their degrees from historically black undergraduate institutions and medical schools than were URM family medicine faculty at non-historically-black medical schools. Conclusions: URM family medicine faculty appear to experience a double disadvantage: being minority and working in family medicine departments.
Their academic ranks remain far below those of both minority medical school faculty and family medicine faculty, a discouraging finding considering the current shortage of URM faculty in family medicine departments. Historically black medical schools cannot address the shortage alone, so non-historically-black medical schools need to both recruit URM faculty and follow up with appropriate mentoring of those faculty. abstract_id: PUBMED:31953638 Releasing the Net to Promote Minority Faculty Success in Academic Medicine. Underrepresented minority faculty continue to be underrepresented in academic health centers across the country. Their underrepresentation impacts advancements in clinical care, education, and discovery, and slows our forward progress in the field. Underrepresented-in-medicine faculty include people who are black or African American, Hispanic or Latinx, or Native American. Barriers to underrepresented faculty recruitment, retention, and advancement include minority and gratitude taxes, imposter syndrome, and a lack of appreciation of power distance and distance traveled. This article reviews five barriers to progress in achieving appropriate diversity among faculty and leadership of academic health centers, focusing on the multiplying effects of these barriers and potential steps forward. abstract_id: PUBMED:26931546 Status of Tenure Among Black and Latino Faculty in Academic Medicine. Tenure has been used for years to recruit, promote, and retain faculty in higher education and has been associated with job security and academic freedom. Absence of tenure and not being in tenure-earning tracks are among the challenges faced by underrepresented minorities in academic medicine. Those challenges include being found at the assistant professor rank more often, having more clinical responsibilities, and not being in leadership positions as often as non-minority faculty. The role of tenure and tenure tracks is unclear as it relates to the presence of minority faculty. This article presents a look at the status of tenure among black and Latino faculty in academic medicine at US medical schools. abstract_id: PUBMED:31940426 Leadership Pathways in Academic Family Medicine: Focus on Underrepresented Minorities and Women. Background And Objectives: Leadership positions in academic medicine lack racial and gender diversity. In 2016, the Council of Academic Family Medicine (CAFM) established a Leadership Development Task Force to specifically address the lack of diversity among leadership in academic family medicine, particularly for underrepresented minorities and women. Approach: The task force was formed in August 2016 with members from each of the CAFM organizations representing diversity of race, gender, and academic position. The group met from August 2016 to December 2017. The task force reviewed available leadership development programming and, through consensus, identified common pathways toward key leadership positions in academic family medicine: department chairs, program directors, medical student education directors, and research directors. Consensus Development: The task force developed a model that describes possible pathways to several leadership positions within academic family medicine. Additionally, we identified the intentional use of a multidimensional mentoring team as critically important for successfully navigating the path to leadership.
Conclusions: There are ample opportunities available for leadership development, both within family medicine organizations and outside them. That said, individuals may require assistance in identifying and accessing appropriate opportunities. The path to leadership is not linear, and leaders will likely hold more than one position in each of the domains of family medicine. Development as a leader is greatly enhanced by forming a multidimensional team of mentors. abstract_id: PUBMED:32336395 Women and Underrepresented Minorities in Academic Anesthesiology. The demographics of the United States are changing, with 51% of the population being female and 32% of the population identifying as an underrepresented minority (URM, ie, African American/black, Hispanic/Latino, American Indian/Alaska Native, Native Hawaiian/Pacific Islander). Women and URMs have been historically underrepresented in medicine and in academic anesthesiology. This article provides an overview of the current status of women and URM faculty in academic anesthesiology and provides a framework for academic advancement. Throughout the text, the terms woman/women are used, as opposed to female, as the terms woman/women refer to gender, whereas female refers to biological sex. abstract_id: PUBMED:25472784 The experiences of underrepresented minority faculty in schools of medicine. Introduction: Faculty of color in schools of medicine play an essential role in addressing health disparities, increasing diversity in healthcare, and improving health professions education. Yet inadequate progress has been made in increasing the numbers of faculty of color in medical schools. The reasons for this gap, and ways to address it, are poorly understood. Methods: We conducted a grounded theory study of 25 faculty from groups historically underrepresented in academic medicine at 17 schools in the United States. Faculty were interviewed in person (n=4, 16%) and by telephone (n=21, 84%). Results: We identified two processes that contribute to a greater understanding of the experiences of faculty of color: patterns of exclusion and control, and surviving and thriving. We also identified one outcome: faculty of color having influence. Conclusions: Strong support from leaders, mentors, and peers to nurture and protect faculty of color in schools of medicine is needed to counteract the negative effects of racism and to promote the positive effects this group has on diversity and excellence in medical education. Specific strategies for survival and success are described. abstract_id: PUBMED:17019910 Improving the retention of underrepresented minority faculty in academic medicine. Background: Although several studies have outlined the need for and benefits of diversity in academia, the number of underrepresented minority (URM) faculty in academic health centers remains low, and minority faculty are primarily concentrated at the rank of assistant professor. In order to increase the diversity of the faculty of the University of California, San Diego (UCSD) School of Medicine, the UCSD National Center for Leadership in Academic Medicine, in collaboration with the UCSD Hispanic Center of Excellence, implemented a junior faculty development program designed in part to overcome the differential disadvantage of minority faculty and to increase the academic success rate of all faculty.
Methods: Junior faculty received counseling on career and research objectives; assistance with academic file preparation; introduction to the institutional culture; workshops on pedagogy and grant writing; and instrumental, proactive mentoring by senior faculty. Results: After implementation of the program, the retention rate of URM junior faculty in the school of medicine increased from 58% to 80%, and retention in academic medicine increased from 75% to 90%. Conclusion: A junior faculty development program that integrates professional skill development and focused academic career advising with instrumental mentoring is associated with an increase in the retention of URM faculty in a school of medicine. abstract_id: PUBMED:31348063 Trends in Tenure Status in Academic Family Medicine, 1977-2017: Implications for Recruitment, Retention, and the Academic Mission. Purpose: Tenure status has important implications for medical school faculty recruitment and retention and may affect educational quality, academic freedom, and collegiality. However, tenure trends in academic family medicine are unknown. This study aimed to describe trends in the tenure status of family medicine faculty overall and by gender and status of minorities underrepresented in medicine (URM) in Liaison Committee on Medical Education-accredited medical schools. Method: Association of American Medical Colleges Faculty Roster data were used to describe trends in the tenure status of full-time family medicine faculty, 1977 to 2017. Bivariate and trend analyses were conducted to assess associations and describe patterns between tenure status and gender, race, and ethnicity. Interdepartmental variations in tenure trends over the years were also examined. Results: Among family medicine faculty, the proportion of faculty tenured or on a tenure track dropped more than threefold from 1977 (46.6%; n = 507/1,089) to 2017 (12.7%; n = 729/5,752). Lower proportions of women and URM faculty were tenured or on a tenure track than male and non-URM faculty, respectively, but the gaps between them were converging. Compared with other clinical departments, family medicine had the highest proportion of faculty (74.6%; n = 4,291/5,752) not on a tenure track in 2017. Conclusions: The proportion of tenured positions decreased significantly among family medicine faculty in U.S. medical schools. While gaps between male and female faculty and among certain racial/ethnic groups remained for family medicine tenure status, they have decreased over time, mainly because of a substantial increase in nontenured positions. abstract_id: PUBMED:10693848 Specialty choices, compensation, and career satisfaction of underrepresented minority faculty in academic medicine. Purpose: Despite efforts to increase the numbers of underrepresented minorities (URMs), only 3.9% of medical school faculty are URMs. The authors compared the specialty choices, compensation, and career satisfaction of minority faculty with those of their majority counterparts to determine whether there were differences that might affect the recruitment and retention of minority faculty. Method: In 1995, the authors mailed a self-administered survey to a stratified random sample of 3,013 eligible full-time salaried faculty in 24 randomly selected medical schools. Those schools, which had at least 200 faculty, did not include the Puerto Rican or historically black medical schools.
Results: Of the eligible faculty surveyed, 1,807 (60%) responded; 1,463 were majority faculty, 195 were URM faculty, and 149 were other-minority faculty. Similar proportions of the three groups were in the primary care specialties. Only 11% of the URM respondents were in basic science departments. There was no significant difference in adjusted mean compensation between majority, URM, and other-minority faculty. However, URM faculty were significantly less satisfied with their careers (adjusted scores: 60 versus > 65; p = .001) and more often considered leaving academic medicine within five years (58% versus < 45%). Conclusion: Given the demographic changes of the U.S. population, these issues should be addressed by deans and department heads in order to enhance recruitment and facilitate retention of URM faculty in academic medicine. Answer: The data suggest that while women and underrepresented minority (URM) faculty are more prevalent in family medicine compared to other academic medicine disciplines, they are not necessarily "better off" in terms of senior academic positions. Women in full-time positions and minorities are less likely to be either an associate or full professor in departments of family medicine (PUBMED:11411975). URM faculty in family medicine have significantly lower academic rankings compared with medical school minority faculty and all family medicine faculty, indicating a double disadvantage for being minority and working in family medicine departments (PUBMED:11841983). Despite the presence of women and URMs in academic anesthesiology, they remain historically underrepresented, and their academic advancement is a concern (PUBMED:32336395). The tenure status among black and Latino faculty in academic medicine is also an issue, with these groups often found at the assistant professor rank, having more clinical responsibilities, and not being in leadership positions as frequently as their non-minority counterparts (PUBMED:26931546). Efforts to improve the retention of URM faculty in academic medicine, such as the junior faculty development program implemented by the University of California, San Diego School of Medicine, have shown positive results, increasing retention rates (PUBMED:17019910). However, the overall trend in tenure status for family medicine faculty has been declining, with a significant decrease in the proportion of faculty tenured or on a tenure track, which may affect recruitment and retention (PUBMED:31348063). In conclusion, while family medicine departments may have a higher representation of women and URMs compared to other disciplines, challenges in career advancement, tenure status, and leadership positions suggest that these groups are not necessarily in a better position within the field of family medicine.
Instruction: Can the biomass-ratio hypothesis predict mixed-species litter decomposition along a climatic gradient? Abstracts: abstract_id: PUBMED:24482152 Can the biomass-ratio hypothesis predict mixed-species litter decomposition along a climatic gradient? Background And Aims: The biomass-ratio hypothesis states that ecosystem properties are driven by the characteristics of dominant species in the community. In this study, the hypothesis was operationalized as community-weighted means (CWMs) of monoculture values and tested for predicting the decomposition of multispecies litter mixtures along an abiotic gradient in the field. Methods: Decomposition rates (mg g(-1) d(-1)) of litter from four herb species were measured using litter-bed experiments with the same soil at three sites in central France along a correlated climatic gradient of temperature and precipitation. All possible combinations from one to four species mixtures were tested over 28 weeks of incubation. Observed mixture decomposition rates were compared with those predicted by the biomass-ratio hypothesis. Variability of the prediction errors was compared with the species richness of the mixtures, across sites, and within sites over time. Key Results: Both positive and negative prediction errors occurred. Despite this, the biomass-ratio hypothesis was true as an average claim for all sites (r = 0.91) and for each site separately, except for the climatically intermediate site, which showed mainly synergistic deviations. Variability decreased with increasing species richness and in less favourable climatic conditions for decomposition. Conclusions: Community-weighted mean values provided good predictions of mixed-species litter decomposition, converging to the predicted values with increasing species richness and in climates less favourable to decomposition. In a context of climate change, abiotic variability would be important to take into account when predicting ecosystem processes. abstract_id: PUBMED:23131299 Using the biomass-ratio and idiosyncratic hypotheses to predict mixed-species litter decomposition. Background And Aims: A test is made of the acceptability of the biomass-ratio hypothesis (BMRH), operationalized as community-weighted means (CWMs), and of a new hypothesis (idiosyncratic annulment), for predicting the decomposition of multispecies litter mixtures. Specifically, (1) does the BMRH based on monoculture decomposition rates introduce systematic over- or underestimation of rates in mixtures? and (2) does the degree of variability of these rates decrease with increasing species richness (SR) beyond that expected from purely mathematical causes? Methods: Decomposition rates (mg g(-1) d(-1)) of litter from six tree species in microcosms were measured under controlled conditions during 18 weeks of incubation, alone and in all possible combinations of two, three, five and six species. Observed mixture decomposition rates were compared with those predicted by the BMRH using CWMs calculated from the monoculture rates, and the variability of the differences was compared with the SR of the mixture. Key Results: Both positive and negative deviations from expectation occurred at all levels of SR. The average differences between observed rates of mixtures and those predicted were approximately zero.
Although variability in the prediction errors was independent of the SR, this variability between different mixtures having the same number of species decreased with increasing SR, such that mixtures with the most species converged on the predicted values. This decrease in variance was not due to idiosyncratic annulment of higher-order interactions between species. Conclusions: The BMRH described the average response of litter mixtures. The decrease in variance and the convergence to the predicted values based on CWMs were not due to the 'idiosyncratic annulment' of species interactions but were a mathematical consequence of CWMs being sums of random variables. Since convergence occurs with increasing SR, and since SR increases with increasing spatial scale, the spatial scale will be a determinant in the prediction of ecosystem processes, such as litter decomposition rates. abstract_id: PUBMED:35605871 Tree species identity and mixing ratio affected the release of several metallic elements from mixed litter in coniferous-broadleaf plantations in subtropical China. Planting broadleaf trees in coniferous forests has been shown to promote biogeochemical cycling in plantations; however, how species mixing influences litter decomposition and the release of metallic elements from mixed coniferous-broadleaf litter remains unclear. An in situ litter decomposition experiment was conducted to examine the effect of 1) mixtures of coniferous litter (Pinus massoniana) with litter of different individual broadleaf species (Bretschneidera sinensis, Manglietia chingii, Cercidiphyllum japonicum, Michelia maudiae, Camellia oleifera) and 2) their mixing ratio (mass ratios of coniferous to broadleaf litter of 5:5, 6:4 and 7:3) on the release of metallic elements [calcium (Ca), magnesium (Mg), sodium (Na), potassium (K), manganese (Mn), iron (Fe), copper (Cu) and zinc (Zn)] during litter decomposition. We found that the identity of the broadleaf tree species in the mixed litter and the mixing ratio affected the release rates of metallic elements (p < 0.05). After one year of decomposition, K, Mg, Mn and Zn were released, while Na, Ca, Fe and Cu accumulated in the mixed litter. Mixing increased the release of K, Ca, Na, Mg, Fe, Mn, Cu and Zn in more than one-third of the samples, but inhibited the release of K, Fe and Mn in less than 14% of the samples. Increasing the mixing ratio of coniferous to broadleaf litter enhanced the release of Na, Fe, Mn and Zn but decreased the release of Ca and Mg. Overall, these results highlight that mixed litter, particularly tree species identity and mixing ratio, can alter the release and enrichment of metallic elements during litter decomposition, thereby affecting the cycling of metallic elements in plantations with different species compositions. abstract_id: PUBMED:22430507 Microbial functional diversity associated with plant litter decomposition along a climatic gradient. Predicted changes in climate associated with increased greenhouse gas emissions can cause increases in global mean temperature and changes in precipitation regimes. These changes may affect key soil processes, e.g., microbial CO(2) evolution and biomass, mineralization rates, primary productivity, biodiversity, and litter decomposition, which play an important role in carbon and nutrient cycling in terrestrial ecosystems. Our study examined the changes in litter microbial communities and decomposition along a climatic gradient, ranging from arid desert to humid Mediterranean regions in Israel.
Wheat straw litter bags were placed in arid, semi-arid, Mediterranean, and humid Mediterranean sites. Samples were collected seasonally over a 2-year period in order to evaluate mass loss, litter moisture, C/N ratio, bacterial colony-forming units (CFUs), microbial CO(2) evolution and biomass, microbial functional diversity, and catabolic profile. Decomposition rate was highest during the first year of the study at the Mediterranean and arid sites. Community-level physiological profile and microbial biomass were highest in summer, while bacterial CFUs were highest in winter. Microbial functional diversity was found to be highest at the humid Mediterranean site, whereas substrate utilization increased at the arid site. Our results support the assumption that climatic factors control litter degradation and regulate microbial activity. abstract_id: PUBMED:26162398 A traits-based test of the home-field advantage in mixed-species tree litter decomposition. Background And Aims: Litter often decomposes faster in its environment of origin (at 'home') than in a foreign environment ('away'), which has become known as the home-field advantage (HFA). However, many studies have highlighted the conditional nature of the HFA, suggesting that current understanding of this phenomenon is not yet sufficient to generalize across systems. Methods: The HFA hypothesis was tested for mono-specific and mixed-species litter using a tree-based experiment that manipulated the functional identity and diversity of the host tree community. Litter types of varying quality were transplanted between several host tree communities, and decomposition rates were measured using litterbags. Since the decomposer community should respond to traits of the litter input and not its taxonomic identity, a traits-based index of litter-tree similarity was developed. Key Results: Mono-specific litter exhibited HFA, but when the same litter was decomposed in mixture, this trend was not observed. Mixed-species litter decomposed on average no faster or slower than monoculture litter and exhibited both positive and negative species interactions. These non-additive interactions of decomposition rates in mixture were influenced by the degree of similarity between litter and tree traits. Both synergistic and antagonistic interactions decreased in magnitude with increasing litter-tree similarity, such that mixture rates were predictable from monocultures. Conclusions: The HFA occurred more strongly for mono-specific litter than for the litter types mixed together because interactions between species may have masked this effect. However, when expressed as a function of trait similarity between litters and tree communities, the HFA was not detected. abstract_id: PUBMED:36348661 Differences in leaf and root litter decomposition in tropical montane rainforests are mediated by soil microorganisms, not by decomposer microarthropods. Background: Plant litter decomposition is a key process in carbon and nutrient cycling. Among the factors determining litter decomposition rates, the role of soil biota in the decomposition of different plant litter types, and its modification by variations in climatic conditions, is not well understood. Methods: In this study, we used litterbags with different mesh sizes (45 µm, 1 mm and 4 mm) to investigate the effect of microorganisms and decomposer microarthropods on leaf and root litter decomposition along an altitudinal gradient of tropical montane rainforests in Ecuador.
We examined decomposition rates, litter C and N concentrations, microbial biomass and activity, as well as decomposer microarthropod abundance over one year of exposure at three different altitudes (1,000, 2,000 and 3,000 m). Results: Leaf litter mass loss did not differ between the 1,000 and 2,000 m sites, while root litter mass loss decreased with increasing altitude. Changes in microbial biomass and activity paralleled the changes in litter decomposition rates. Access of microarthropods to litterbags significantly increased root litter mass loss only at 3,000 m. The results suggest that the impacts of climatic conditions differentially affect the decomposition of leaf and root litter, and that these modifications are modulated by the quality of the local litter material. The findings also highlight litter quality as the dominant force structuring detritivore communities. Overall, the results support the view that microorganisms mostly drive decomposition processes in tropical montane rainforests, with soil microarthropods playing a more important role in decomposing low-quality litter material. abstract_id: PUBMED:26240437 Decomposition of beech (Fagus sylvatica) and pine (Pinus nigra) litter along an Alpine elevation gradient: Decay and nutrient release. Litter decomposition is an important process for the cycling of nutrients in terrestrial ecosystems. The objective of this study was to evaluate direct and indirect effects of climate on litter decomposition along an altitudinal gradient in a temperate Alpine region. Foliar litter of European beech (Fagus sylvatica) and Black pine (Pinus nigra) was incubated in litterbags during two years in the Hochschwab massif of the Northern Limestone Alps of Austria. Eight incubation sites were selected following an altitudinal/climatic transect from 1900 to 900 m asl. The average remaining mass after two years of decomposition amounted to 54% (beech) and 50% (pine). Net release of N, P, Na, Al, Fe and Mn was higher in pine than in beech litter due to high immobilization (retention) rates of beech litter. However, pine litter retained more Ca than beech litter. Altitude retarded decay (mass loss and associated C release) in beech litter during the first year only but had a longer-lasting effect on decaying pine litter. Altitude comprises a suite of highly auto-correlated characteristics (climate, vegetation, litter, soil chemistry, soil microbiology, snow cover) that influence litter decomposition. Hence, decay and nutrient release of incubated litter are difficult to predict by altitude, except during the early stage of decomposition, which seemed to be controlled by climate. Reciprocal litter transplant along the elevation gradient yielded relatively higher decay of pine litter on beech forest sites after a two-year adaptation period of the microbial community. abstract_id: PUBMED:36925466 Mycobiome structure does not affect field litter decomposition in Eucalyptus and Acacia plantations. Mixed tree plantations have been studied because of their potential to improve biomass production, ecosystem diversity, and soil quality. One example is a mixture of Eucalyptus and Acacia trees, which is a promising strategy to improve microbial diversity and nutrient cycling in soil. We examined how a mixture of these species may influence the biochemical attributes and fungal community associated with leaf litter, and the effects on litter decomposition.
We studied the litter from pure and mixed plantations, evaluating the effects of plant material and incubation site on the mycobiome and decomposition rate using litterbags incubated in situ. Our central hypothesis was that the litter fungal community would change according to incubation site and would in turn affect the litter decomposition rate. Both the plant material and the incubation locale significantly affected the litter decomposition. The origin of the litter was the main modulator of the mycobiome, with distinct communities from one plant species to another. The community changed with the incubation time, but the incubation site did not influence the mycobiome community. Our data showed that litter and soil did not share the main elements of the community. Contrary to our hypothesis, the microbial community structure and diversity lacked any association with the decomposition rate. The differences in the decomposition pattern are explained primarily by the exchange of nitrogen compounds between the litters. abstract_id: PUBMED:38382615 Decreased snow depth inhibits litter decomposition via changes in litter microbial biomass and enzyme activity. Decreased snow depth resulting from global warming has the potential to significantly impact biogeochemical cycles in cold forests. However, the specific mechanisms of how snow reduction affects litter decomposition and the underlying microbial processes remain unclear; this knowledge gap limits our ability to precisely predict ecological processes within cold forest ecosystems under climate change. Hence, a field experiment was conducted in a subalpine forest in southwestern China, involving a gradient of snow reduction levels (control, 50%, 100%), to investigate the effects of decreased snow on litter decomposition, as well as microbial biomass and activity, focusing specifically on two common species: red birch (Betula albosinensis) and masters larch (Larix mastersiana). After one year of incubation, the decomposition rate (k-value) of the two types of litter ranged from 0.12 to 0.24 across the three snow treatments. Significantly lower litter mass loss, microbial biomass, and enzyme activity were observed under decreased snow depth in winter. Furthermore, a hysteresis inhibitory effect of snow reduction on hydrolase activity was observed in the following growing season. Additionally, the high initial quality (lower C/N ratio) of red birch litter facilitated colonization by a greater quantity of microorganisms, making it more susceptible to snow reduction compared to the low-quality masters larch litter. Structural equation models indicated that decreased snow depth hindered litter decomposition by altering the biological characterization of litter (e.g., microbial biomass and enzyme activity) and environmental variables (e.g., mean temperature and moisture content). The findings suggest that the potential decline in snow depth could inhibit litter decomposition by reducing microbial biomass and activity, implying that future climate change may alter the material cycling processes in subalpine forest ecosystems.
We conducted a 1-year litterbag experiment using leaf litters from the invasive species Flaveria bidentis (L.) and the dominant co-occurring native species, Setaria viridis (L.). Litters were allowed to decompose either separately or together at different ratios in a mothproof screen house. The mass loss of all litter mixtures was non-additive, and the direction and strength of effects varied with species ratio and decomposition stage. During the initial stages of decomposition, all mixtures had a neutral effect on the mass loss; however, at later stages of decomposition, mixtures containing more invasive litter had synergistic effects on mass loss. Importantly, an increase in F. bidentis litter with a lower C:N ratio in mixtures led to greater net release of N over time. These results highlight the importance of trait dissimilarity in determining the decomposition rates of litter mixtures and suggest that F. bidentis could further synchronize N release from litter as an invasion proceeds, potentially creating a positive feedback linked through invasion as the invader outcompetes the natives for nutrients. Our findings also demonstrate the importance of species composition as well as the identity of dominant species when considering how changes in plant community structure influence plant invasion. Answer: The biomass-ratio hypothesis (BMRH) has been tested to predict mixed-species litter decomposition along a climatic gradient with varying results. According to one study, the BMRH, operationalized as community-weighted means (CWMs) of monoculture values, provided good predictions of mixed-species litter decomposition rates along a climatic gradient in central France. The hypothesis held true as an average claim for all sites, with variability decreasing with increasing species richness and in less favorable climatic conditions for decomposition (PUBMED:24482152). Another study found that while both positive and negative deviations from the BMRH occurred at all levels of species richness, the average differences between observed rates of mixtures and those predicted were approximately zero. The variability in prediction errors decreased with increasing species richness, suggesting that mixtures with more species converged on the predicted values. This decrease in variance was not due to idiosyncratic annulment of higher-order interactions between species but was a mathematical consequence of CWMs being sums of random variables (PUBMED:23131299). However, the BMRH's predictive power may not always be consistent across different contexts. For instance, the identity of broadleaf tree species in mixed litter and the mixing ratio affected the release rates of metallic elements during litter decomposition in subtropical China, indicating that species identity and mixing ratio can alter the release and enrichment of metallic elements, thereby affecting the cycling of metallic elements in plantations with different species compositions (PUBMED:35605871). Additionally, microbial functional diversity associated with plant litter decomposition along a climatic gradient showed that climatic factors control litter degradation and regulate microbial activity (PUBMED:22430507). In summary, while the BMRH can provide a general framework for predicting mixed-species litter decomposition along a climatic gradient, its accuracy may vary depending on specific conditions such as species identity, mixing ratio, and local climatic factors. 
The hypothesis tends to hold more accurately when considering average claims across multiple sites and conditions, with the predictive power improving with increasing species richness and in climates less favorable to decomposition. However, specific interactions between species and local environmental factors can lead to deviations from the predictions made by the BMRH.
Instruction: Radiology by nonradiologists: is report documentation adequate? Abstracts: abstract_id: PUBMED:17465166 Improving communication and documentation concerning preliminary and final radiology reports. The communication and documentation of radiology preliminary and final interpretations for the identification of discrepancies failed to evolve with radiology's transition away from film. Processes should facilitate documentation and distribution of preliminary findings to multiple practitioners in real time and should provide timely communication and documentation of additional findings. The evolution of the electronic medical record and digital radiology images provided an opportunity to establish effective, safe, and efficient methods of communication and documentation. Froedtert Hospital (Milwaukee, WI) used the Six Sigma process to improve communication and documentation of radiologic findings. Root causes of problems were determined, hypotheses were tested, and new processes were piloted and implemented. Froedtert Hospital improved its process for documenting and communicating preliminary and final interpretations and identifying discrepancies. abstract_id: PUBMED:16336062 Radiology by nonradiologists: is report documentation adequate? Objective: To determine if the quality of medical imaging reports differs significantly between radiologists and nonradiologists. Study Design: A retrospective nonblinded review of randomly selected chest and long bone x-ray reports by orthopedists and primary care physicians compared with randomly selected imaging reports generated by radiologists. Methods: We randomly selected 1 report from each of 50 high self-referring physicians privileged by 2 metropolitan New York area health plans for both bone and joint studies and chest x-rays, for a total of 200 reports (50 bone and joint x-rays from each plan and 50 chest x-rays from each plan). We compared them with 50 randomly selected radiologist-generated reports. The reports were evaluated for quality based on the American College of Radiology's Guideline for Communication: Diagnostic Radiology. The data were analyzed by the 2-sample t-test between proportions at the 95% confidence interval. Results: Radiologists consistently provided higher-quality medical imaging reports than nonradiologists. Conclusions: To improve imaging service quality, all providers should be held to the same standards for reporting and communication of results. abstract_id: PUBMED:20889108 Documentation of nonroutine communications of critical or significant radiology results: a multiyear experience at a tertiary hospital. Purpose: The aim of this study was to determine the frequency of radiology reports that contain nonroutine communications of results and to categorize the urgency of such communications. Methods: A rule-based text-query algorithm was applied to a database of 2.3 million radiology reports; the algorithm has an accuracy of 98% for classifying reports containing documentation of communications. The frequency of such communications by year, modality, and study type was then determined. Finally, 200 random reports selected by the algorithm were analyzed, and reports containing critical results were categorized according to ascending levels of urgency. Results: Documented communications of critical or noncritical results to health care providers were present in 5.09% of radiology reports (116,184 of 2,282,923).
For common modalities, documentation of communications was most frequent in CT (14.34% [57,537 of 402,060]), followed by ultrasound (9.55% [17,814 of 186,626]), MRI (5.50% [13,697 of 248,833]), and chest radiography (1.57% [19,840 of 1,262,925]). From 1997 to 2005, there was an increase in reports containing such communications (3.04% in 1997, 6.82% in 2005). More reports contained nonroutine communications in single-view chest radiography (1.29% [5,533 of 428,377]) than in frontal/lateral chest radiography (0.80% [1,815 of 226,837]), in diagnostic mammography (9.42% [3,662 of 38,877]) than in screening mammography (0.47% [289 of 61,114]), and in head CT (26.21% [20,963 of 79,985]) than in abdominal CT (15.05% [19,871 of 132,034]) or chest CT (5.33% [3,017 of 56,613]). All of these results were statistically significant (P < .00001). Of 200 random radiology reports indicating nonroutine communications, 155 (78%) had critical and 45 (22%) had noncritical results. Regarding level of urgency, 94 of 155 reports (60.6%) with critical results were categorized as high urgency, 31 (20.0%) as low urgency, 26 (16.8%) as medium urgency, and 4 (2.6%) as discrepant. Conclusions: From 1997 to 2005, there was a significant increase in documentation of nonroutine communications, which may be due to increasing compliance with ACR guidelines. Most reports with nonroutine communications contain critical findings. abstract_id: PUBMED:20677241 Communication and documentation of preliminary and final radiology reports. The "wet-read" consultation has been defined as a rapid response to a clinical question posed by a physician to a radiologist. These preliminary interpretations are often not well documented, have poor fidelity, and are subject to modifications and revisions. Moreover, preliminary interpretations may be subject to reinterpretation through a variety of scenarios. Recent technological advances in radiology have further hindered the ability to harmonize differences between preliminary and final interpretations and to communicate these differences to treating physicians. High-fidelity simulation may represent a risk management strategy aimed at bridging the gap between radiology and communication technology. abstract_id: PUBMED:21351193 A survey of policies and procedures on the communication and documentation of radiologic interpretations. Research into emergency medicine (EM) diagnostic errors identified imaging as a contributing factor in 94% of cases. Discrepancies between the preliminary (trainee) and the final (attending) diagnostic imaging interpretation represent a system issue that is particularly prone to creating diagnostic errors. Understanding the types of systematic communication and documentation strategies developed by academic radiology departments to convey differences between preliminary and final radiology interpretations to clinicians is a threshold step toward minimizing this risk. This study investigates policies and practices associated with the communication and documentation of preliminary and final radiologic interpretations among U.S. academic radiology departments through a questionnaire directed at radiology department chairs. abstract_id: PUBMED:24777574 Review of outside studies by radiology residents: national survey of chief residents. The purpose of this study was to establish the practice patterns of radiology residents with regard to interpretation and reporting of outside studies for transferred patients.
We performed a national survey of radiology residency chief residents, administered by email through the Association of Program Directors in Radiology (APDR). There were 81 chief resident respondents, representing 42.8% of the 187 total Accreditation Council for Graduate Medical Education (ACGME)-approved radiology residency training programs in the USA. In 97.5% of programs, residents perform interpretations of outside studies. Up to 76.7% of respondents state that when outside studies are reviewed by residents, an original report is available in less than one quarter of cases. While 55.1% of respondents state that there is a mechanism for recording their findings and impressions for outside studies, only 32.1% are aware of a policy requiring documentation. Of the respondents, 42.3% report they have no means for documenting their findings and impressions on outside studies. Further, 65.4% state that there is no policy requiring an attending to review and document agreement with their interpretation of outside studies. There is wide institutional variation in both policy and practice regarding reinterpretation of outside studies for patients transferred to academic hospitals. While the majority of radiology residents are providing the service of reinterpreting outside studies, only a minority of residency programs have a policy requiring (1) documentation of their impressions or (2) attending oversight and documentation of discrepant opinions. abstract_id: PUBMED:28162863 The Radiologist Is in, but Was it Worth the Wait? Radiology Resident Note Quality in an Outpatient Interventional Radiology Clinic. Preprocedural evaluation of patients in an interventional radiology (IR) clinic is a complex synthesis of physical examination and imaging findings, and as IR transitions to an independent clinical specialty, such evaluations will become an increasingly critical component of a successful IR practice and quality patient care. Prior research suggests that preprocedural evaluations increased patients' perceived quality of care and may improve procedural technical success rates. Appropriate documentation of a preprocedural evaluation in the medical record is also paramount for an interventional radiologist to add value and function as an effective member of a larger IR service and multidisciplinary health care team. The purpose of this study is to examine the quality of radiology resident notes for patients seen in an outpatient IR clinic at a single academic medical center before and after the adoption of a clinic note template with reminders to include platelet count, international normalized ratio, glomerular filtration rate, and plan for periprocedural coagulation status. Before adoption of the template, platelet count, international normalized ratio, glomerular filtration rate, and an appropriate plan for periprocedural coagulation status were documented in 72%, 82%, 42%, and 33% of patients, respectively. After adoption of the template, appropriate documentation of platelet count, international normalized ratio, and glomerular filtration rate increased to 96%, and an appropriate plan for periprocedural coagulation status was documented in 83% of patients. Patient evaluation and clinical documentation skills may not be adequately practiced during radiology residency, and tools such as templates may help increase documentation quality by radiology residents. abstract_id: PUBMED:23986936 Online documentation of equipment maintenance and performance.
A radiology department or imaging center relies on the proper operation of its equipment. This is accomplished through appropriate maintenance and independent medical physics monitoring. These activities represent an expensive but necessary effort to protect the original investment. Effective management requires accurate, timely, and useful documentation. The Georgia Regents Medical Center Department of Diagnostic, Therapeutic, and Interventional Radiology developed a comprehensive database encompassing both maintenance and monitoring activities associated with imaging equipment. This database is accessible throughout the facility via a website, providing a central hub for information distribution and access. The radiology engineering supervisor and quality control technologist make extensive use of this system and have been key to its implementation and ongoing improvement. abstract_id: PUBMED:37032290 Increasing Utilization and Improving Documentation in a Radiology Critical Alert System. Effective communication of critical imaging findings is an important patient safety issue. Despite an increase in exam volumes, our institution saw a decrease in the number of alerts sent through our critical alert system, indicating that critical findings were not being communicated. The purpose of our interventions was to increase the number of critical alerts while also improving documentation and improving our provider database. We used a program of education for our radiologists and systematic reinforcement to increase the usage of our critical alert system. We also implemented a new time-stamp macro in our dictation system to improve documentation of emergency alerts, and engaged with other departments to improve the contact information in our provider database. Our interventions led to an increase in the monthly number of critical alerts, most notably for findings that require clinical or imaging follow-up (17 alerts per month). There was also a steady improvement in documentation (96.9% compliance), along with an increase in the number of alerts to providers with current contact information (0.5% per month). Our efforts show that educational and collaborative efforts can result in improved communication of radiologic critical results. abstract_id: PUBMED:23595099 Structured reporting in radiology. The radiology report plays a key role in the radiological procedure and is used both for documentation of internal procedures and in a variety of communication processes within and outside the radiology department. Because of these various communication processes, the question arises whether and how the report should be structured. The first stage of such a structured report implies a thematic structure only. This corresponds to the current practice in Germany, whereby reports are structured according to the X-ray act. In the second stage, a structure of the contents is added, which is usually implemented by selecting predefined text modules. In the third stage, the wording of the text modules is selected from controlled vocabularies, such as RadLex, which allows complete encoding of the report. Templates like this are provided by the Radiological Society of North America (RSNA) reporting initiative and are available to the general public. Structured reports of the second and third stages are expected to provide better structure and greater clarity of the findings for the referring physician.
By using predefined blocks of text, a higher degree of completeness and a shorter reporting time are expected. These benefits have been tested in several studies, but the studies have not yet yielded clear results. Within a radiology department, structured reports are communicated according to the Integrating the Healthcare Enterprise (IHE) controlled reporting workflow as DICOM structured reports, and outside radiology departments the reports are exchanged as CDA objects using HL7 standards. Answer: The quality of medical imaging reports by nonradiologists has been found to be consistently lower than that of reports provided by radiologists. A study designed to determine if there is a significant difference in the quality of reports between these two groups found that radiologists indeed provided higher-quality medical imaging reports (PUBMED:16336062). This suggests that to improve the quality of imaging service, all providers should adhere to the same standards for reporting and communication of results. The study used the American College of Radiology's Guideline for Communication: Diagnostic Radiology as a benchmark for quality and analyzed reports from orthopedists, primary care physicians, and radiologists. The data indicated that radiologists' reports were superior, leading to the conclusion that nonradiologists' report documentation may not be adequate and that there is a need for uniform standards across all providers.
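To make the three stages of structured reporting described in PUBMED:23595099 concrete, here is a minimal sketch of what a stage-2/3 report might look like. The section names and the RadLex-style code are hypothetical placeholders, not an actual RSNA template, and real systems would encode such a report as a DICOM SR or HL7 CDA object rather than as a Python dictionary.

```python
# Hypothetical stage-2/3 structured report. All field names and codes are
# illustrative placeholders; real templates come from the RSNA reporting
# initiative and are exchanged as DICOM SR / HL7 CDA objects in practice.
report = {
    "exam": "CT abdomen",
    "sections": {
        "clinical_information": "Suspected pancreatic mass.",
        "findings": [
            {
                "text": "Hypodense lesion in the pancreatic head.",
                # Stage 3: wording drawn from a controlled vocabulary
                # ("RID0000" is a placeholder, not a real RadLex identifier)
                "coded_terms": [{"vocabulary": "RadLex", "id": "RID0000", "label": "pancreas"}],
            }
        ],
        "impression": "Findings suspicious for a pancreatic neoplasm.",
    },
}

for name, content in report["sections"].items():
    print(f"{name.upper()}: {content}")
```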
Instruction: Are Hong Kong babies getting bigger? Abstracts: abstract_id: PUBMED:9215489 Are Hong Kong babies getting bigger? Objective: To establish recent birth weight trends in Hong Kong. Method: A total of 10,512 confinements for the years 1985-86, and 7857 for the years 1995-96 were analyzed. Result: There was a significant increase in maternal height, weight at booking, and maternal age, whereas the body-mass index was reduced slightly (P < 0.0001). Parity increased significantly, with the percentage of parous women rising from 44.1% to 55.6% (P < 0.0001). The percentage of female infants decreased from 49.5% to 47.9%. Despite these changes there was no significant difference in mean birth weights between the two groups. When birth weight was controlled for sex, parity, maternal height and weight there was a trivial increase of 15 g over time (P = 0.01). Conclusion: Birth weight has reached a plateau in Hong Kong despite a continuing increase in the region's socioeconomic status, and evidence of improved nutritional well-being. abstract_id: PUBMED:20121569 Recent dramatic increase in the male-to-female sex ratio of babies born in Hong Kong. Aims: There is a rapid rise in the male-to-female sex ratio at birth in Hong Kong, which coincides with the influx of Mainland Chinese mothers crossing the border to give birth in Hong Kong. Our objective is to explore the sex ratio patterns among Hong Kong Chinese and Mainland Chinese. Methods: Analysis of the statistics from Hong Kong public hospitals from 2003 to 2007. Results: For the 194,602 babies studied, 140,962 (72.4%) were eligible (Hong Kong Chinese) and 52,741 (27.1%) were non-eligible (Mainland Chinese). The overall sex ratio at birth (defined as males per 1000 females) was 1088; 1078 for eligible and 1116 for non-eligible. For eligible persons, the sex ratios were 1060 for parity 0, 1073 for parity 1 and 1184 for parity 2 or above. For non-eligible persons, the sex ratios were 1047 for parity 0, 1149 for parity 1 and 1745 for parity 2 or above. Non-eligible persons of parity 1 or above were more likely to have sons than eligible persons of the same parity (P < 0.001). Conclusion: We believe that the rapid rise in the sex ratio in Hong Kong can be partly explained by the influx of Mainland Chinese who may have practiced sex selection more than Hong Kong Chinese. (See the short sketch after the next abstract for how this ratio is computed.) abstract_id: PUBMED:12627795 Craniofacial anthropometry of Hong Kong Chinese babies: the eye. Objectives: To provide a database of the canthal measurements of Chinese infants born in Hong Kong. Design: Prospective cross-sectional study. Setting And Sample Population: A total of 2,371 healthy singletons, born consecutively at the Prince of Wales Hospital and the Union Hospital from June 1998 to June 2000, were included in the study. The range of gestation was 33-42 weeks. Measurements And Results: Canthal parameters were measured and included the outer canthal distance (OCD), intercanthal distance (ICD) and palpebral fissure length (PFL). The data generally show higher values for males in the parameters measured. The measurements were also compared with data published overseas. PFL and OCD were significantly larger in Hong Kong Chinese newborn babies, whereas ICD was smaller. Conclusion: This study establishes the first set of gestational age-specific standards for such craniofacial parameters for Chinese newborn babies, potentially enabling early syndromal diagnosis. There are significant inter-racial differences in these canthal parameters.
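For reference, the sex ratio used in PUBMED:20121569 above is simply the number of male births per 1000 female births. A minimal sketch, using hypothetical counts rather than the study's exact figures:

```python
def sex_ratio(males: int, females: int) -> float:
    """Sex ratio at birth: male births per 1000 female births."""
    return males / females * 1000

# Hypothetical counts chosen for illustration: a ratio of 1088 means
# about 1088 boys were born for every 1000 girls.
print(round(sex_ratio(101_373, 93_179)))  # -> 1088
```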
abstract_id: PUBMED:14989749 Auricular anthropometry of Hong Kong Chinese babies. Objectives: To provide a database of the auricular measurements of Chinese infants born in Hong Kong. Design: Prospective cross-sectional study. Setting And Sample Population: A total of 2384 healthy singletons, born consecutively at the Prince of Wales Hospital and the Union Hospital from October 1998 to September 2000, were included in the study. The range of gestation was 33-42 weeks. Measurements And Results: Measurements included ear width (EW), ear length (EL) and ear position (EP). The data show generally higher values for males in the parameters measured. When compared with previously published data for Caucasian and Jordanian term babies, Chinese babies have shorter EL. The ears were in a normal position in nearly all our infants. Conclusion: The human ear appears to grow in a remarkably constant fashion. This study establishes the first set of gestational age-specific standards for the ear parameters of Chinese newborns, potentially enabling early syndromal diagnosis. There are significant inter-racial differences in these ear parameters. abstract_id: PUBMED:687693 Intrauterine growth in Hong Kong Chinese. The recorded data on birth weight and gestational age of 16,836 babies born at Tsan Yuk Hospital in Hong Kong for the years 1970 to 1972 were analysed by computer. The means, standard deviations, and percentiles were separately obtained for nine sub-groups between 28 and 42 weeks and they were compared with similar series from Taiwan, Singapore, America and England. Chinese babies in Hong Kong were significantly heavier compared to most series between 32 and 35 weeks (p less than or equal to 0.001), and consistently lighter compared to all series studied between 38 and 42 weeks (p less than or equal to 0.001). Reasons for these findings are discussed. abstract_id: PUBMED:12962199 Facial anthropometry of Hong Kong Chinese babies. Objectives: To provide a database of the craniofacial measurements of Chinese infants born in Hong Kong. Design: Prospective cross-sectional study. Setting And Sample Population: A total of 2371 healthy singletons, born consecutively at the Prince of Wales Hospital and the Union Hospital from June 1998 to June 2000, were included in the study. The range of gestation was 33-42 weeks. Measurements And Results: Measurements included facial width (FW), facial height (FH), nasal length (NL), nasal width (NW), and length of the philtrum (PhilL). The facial, nasal, nasofacial and nasozygomatic indices were derived. The data show generally higher values for males in the parameters measured. The various indices remained remarkably constant and did not vary significantly between the two genders or with gestation. When compared with previously published data for white term babies, Chinese babies have similar NW but a shorter philtrum length. Conclusion: The human face appears to grow in a remarkably constant fashion as defined by the various indices of facial proportions. This study establishes the first set of gestational age-specific standards for such craniofacial parameters for Chinese newborns, potentially enabling early syndromal diagnosis. There are significant inter-racial differences in these craniofacial parameters. abstract_id: PUBMED:32220824 An anatomy of waste generation flows in construction projects using passive bigger data.
Understanding waste generation flow is vital to any evidence-based effort by policy-makers and practitioners to successfully manage construction project waste. Previous research has found that cumulative waste generation in construction projects follows an S-curve, but improving our understanding of waste generation requires its investigation at a higher level of granularity. Such efforts, however, are often constrained by a lack of quality "bigger" data, i.e., data at a larger scale than conventional small datasets. This research aims to provide an anatomy of waste generation flow in building projects by making use of a large set of data on waste generation in 19 demolition, 59 foundation, and 54 new building projects undertaken in Hong Kong between 2011 and 2019. Waste is generated in anything but a steady stream, as it is always affected by contingent factors. However, we do find that peaks of waste generation appear at 50-85% of total project time in foundation projects and at 40-70% in new building projects. Our research provides useful information for waste managers in developing their waste management plans, arranging waste hauling logistics, and benchmarking waste management performance. (An illustrative S-curve sketch follows this group of abstracts.) abstract_id: PUBMED:30486216 A contribution to the Mutillidae (Hymenoptera) of Hong Kong. A list of 12 species in nine genera occurring in Hong Kong is given. The hitherto unknown male of Bischoffitilla lamellata (Mickel, 1933) is described and illustrated. Six species are newly recorded from Hong Kong. The current number of known mutillid species of Hong Kong is doubled from six to 12. abstract_id: PUBMED:9656955 Emergency medicine in Hong Kong. Emergency medicine in Hong Kong is a relatively new specialty. Substantial progress toward recognition of emergency medicine in the territory has occurred during the past 2 years, and the Hong Kong College of Emergency Medicine is now formally recognized by the Royal College of Surgeons in the United Kingdom. With the transfer of the territory to Chinese rule, the emergency medical system in Hong Kong may serve as a model for the development of emergency medicine in mainland China. abstract_id: PUBMED:15008559 The epidemiology of myopia in Hong Kong. Introduction: The prevalence of myopia amongst the Chinese has escalated in recent decades. While this refractive error was previously little more than an inconvenience and a source of unwanted expense to the affected individuals, it is now sufficiently prevalent to warrant national concern. Myopia is also a major cause of low vision. This review aims to provide information on the prevalence of myopia in the Chinese community in Hong Kong. Materials And Methods: Based on a number of studies carried out mainly at the Centre for Myopia Research in The Hong Kong Polytechnic University, myopia in the Chinese community in Hong Kong is described. Results: Infants in Hong Kong appear to have refractive errors similar to infants of European descent; however, by the age of 7 years myopia is already prevalent. The annual incidence of myopia thereafter averages 11% to 12%; by the age of 17 years more than 70% are myopic, and this prevalence is greater than in people of European descent. In Hong Kong, myopia is much less prevalent in the older generation. Conclusion: The increase in prevalence of myopia over a short period of time is postulated to be due to environmental factors. Children of Chinese descent have a susceptibility to environmental factors which cause myopia.
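As a purely illustrative aside, the cumulative waste-generation S-curve described in PUBMED:32220824 above is often modeled with a logistic function. The sketch below is a toy model under stated assumptions; the parameter values are hypothetical and are not estimated from the study's data.

```python
import numpy as np

def cumulative_waste(t, total=1000.0, midpoint=0.6, steepness=12.0):
    """Logistic S-curve of cumulative waste over normalized project time t in [0, 1].

    total     -- total waste generated over the project (hypothetical units)
    midpoint  -- fraction of project time at which the generation rate peaks
    steepness -- how sharply generation rises around the midpoint
    """
    return total / (1.0 + np.exp(-steepness * (t - midpoint)))

t = np.linspace(0.0, 1.0, 6)
print(np.round(cumulative_waste(t), 1))
# The generation rate (the curve's derivative) peaks at t = midpoint,
# loosely analogous to the reported peaks at 40-85% of project time.
```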
Answer: Based on the abstracts provided, it appears that while there have been some changes in the characteristics of Hong Kong babies over time, there is no significant increase in their birth weight. A study analyzing confinements from 1985-86 and 1995-96 found that despite a significant increase in maternal height, weight at booking, and maternal age, as well as a significant increase in parity, there was no significant difference in mean birth weights between the two groups. After controlling for sex, parity, maternal height, and weight, there was only a trivial increase of 15 g over time (PUBMED:9215489). This suggests that birth weight in Hong Kong has reached a plateau despite improvements in socioeconomic status and nutritional well-being. Other studies provided data on the craniofacial and auricular measurements of Hong Kong Chinese babies, indicating that there are significant inter-racial differences in these parameters (PUBMED:12627795, PUBMED:14989749, PUBMED:12962199). However, these studies do not directly address the question of whether Hong Kong babies are getting bigger in terms of overall size or weight. In conclusion, based on the available abstracts, there is no evidence to suggest a significant increase in the overall size of Hong Kong babies in terms of birth weight.
Instruction: Is adequate parenting compatible with maternal drug use? Abstracts: abstract_id: PUBMED:34009508 Maternal Parenting Behaviors Amplify Environmental Influences on Developmental Trajectories of Alcohol Use During Adolescence. Compared to peer alcohol use, less is known about how parenting practices may modify genetic and environmental contributions to alcohol use longitudinally across adolescence. This study examined whether two maternal parenting behaviors, supervision and harsh parenting, may suppress or amplify genetic and environmental influences on three distinct developmental trajectories of adolescent alcohol use: normative increasing, early-onset, and low trajectories. Participants were drawn from a longitudinal study of a population-based twin sample (N = 842, 84% European descent, 52.7% female). Adolescents self-reported their past year alcohol use at ages 13, 14, 15, and 17 years, and their mothers reported their supervision and harsh parenting when twins were 13, 15, and 17 years old. Maternal supervision amplified non-shared environmental influence on the normative increasing and early-onset trajectories, whereas maternal harsh parenting amplified shared environmental influence on the early-onset trajectory and non-shared environmental influence on the low trajectory, respectively. The findings point to maternal parenting practices as a potent developmental context that modulates the environmental influences of other proximal processes on adolescent alcohol use, and suggest that family-based, parenting-focused intervention could be especially beneficial for adolescents following the early-onset trajectory. abstract_id: PUBMED:35877545 Maternal Parenting Stress, Disengagement Strategies for Managing Parenting Stress, and Hazardous Alcohol Use among Mothers of Adolescents. Objective: According to the reward-stress dysregulation model of addiction and parenting, parenting stress may play a critical role in the etiology and maintenance of substance misuse. To date, limited work has examined how parenting stress may relate to hazardous alcohol use among mothers of adolescents. The present study examined whether parenting stress interacted with maternal stress responses (i.e., disengagement coping, involuntary disengagement stress responses) in relation to maternal hazardous alcohol use. Method: Two hundred and fifty-three mothers (n = 147 hazardous alcohol users; 58% of the sample) of adolescents between the ages of 12 and 16 years participated in the current study. Mothers were recruited using Qualtrics panel services. Mothers reported on their parenting stress, strategies for managing parenting stress, and use of alcohol. Mothers also reported basic demographic information (Mage = 41.10 years, 86.6% identified as white, 81.1% of mothers were married). Interaction analyses were used to test hypotheses. Results: Maternal parenting stress interacted with maternal involuntary disengagement stress responses in relation to maternal hazardous alcohol use, such that at average and high levels of disengagement stress responses, parenting stress was associated with hazardous alcohol use. These results were robust to covariates (i.e., family income, maternal anxiety symptoms, maternal depressive symptoms, general levels of maternal stress, maternal cigarette use). Maternal disengagement coping strategies were not related to maternal hazardous alcohol use.
Conclusion: Results suggest that parenting stress and involuntary disengagement stress responses may be associated with hazardous alcohol use among mothers of adolescents. abstract_id: PUBMED:37457390 Maternal COVID-19 Distress and Chinese Preschool Children's Problematic Media Use: A Moderated Serial Mediation Model. Introduction: Maternal distress increased during the COVID-19 pandemic, significantly impacting children's media use. The purpose of this study was to explore the mechanism by which maternal COVID-19 distress influences preschoolers' problematic media use through a moderated mediation model; specifically, we examined the possible mediating roles of parenting stress and negative instrumental use of media in parenting and the moderating role of supportive co-parenting. Methods: An online survey was conducted in a sample of 1357 children (Mage = 4.01, SD = 1.06; 47.4% boys) and their parents from six public kindergartens in Shanghai, China. The mothers provided information by completing measures on their levels of distress related to COVID-19, parenting stress levels, digital parenting practices, and perception of supportive co-parenting from their partners. Additionally, both parents rated their children's problematic media use. Results: (1) Maternal COVID-19 distress was significantly and positively related to children's problematic media use; (2) this relationship was sequentially mediated by parenting stress and parents' negative instrumental use of media in parenting; and (3) supportive co-parenting moderated the serial mediation path by reducing the effect of maternal COVID-19 distress on parenting stress. Conclusion: The findings provide some support and guidance for preventing children's problematic media use and enhancing parental adaptation during the COVID-19 pandemic or in potentially adverse situations. abstract_id: PUBMED:18257792 Is adequate parenting compatible with maternal drug use? A 5-year follow-up. Introduction: This prospective cohort study compares child protection outcomes over the first 5 years of life in a group of children born to self-declared drug-using mothers recruited during pregnancy (cases) and a group of children matched for gestational age, chronological age, maternal neighbourhood and place of delivery whose mothers made no such declaration of problematic drug use (controls). Methodology: We monitored local child protection registers to identify cohort members who came to the attention of the local authority. Results: Of the 71 original cases and 142 original controls, 55 (77%) and 96 (68%) remained in the area and were enrolled in local schools at 5 years of age. In total, 26 (47.3%) of the case children were subject to child protection procedures compared with 18 (18.8%) of the control children. This risk difference of 28.5% (95% CI 13.2% to 43.9%) has increased marginally since our previous report in this journal of child protection outcomes at 18 months of age (32% vs. 7%). However, the level of intervention deemed necessary to protect the child has increased significantly, with six cases (compared with one control child) taken into the care of the local authority. Conclusions: Despite early maternal intentions and multiple supportive interventions, 27% of children born to women with significant substance abuse problems in our area required child protection during the pre-school years. Child protection risk assessment procedures need to weigh problematic maternal drug use heavily.
Intervention studies with child welfare outcomes are needed to identify the most effective harm reduction strategies and inform public debate on how we can minimize child abuse related to substance misuse. abstract_id: PUBMED:25500130 The role of general parenting and cannabis-specific parenting practices in adolescent cannabis and other illicit drug use. Aims: To investigate general and cannabis-specific parenting practices in relation to adolescent cannabis and other illicit drug use. Methods: Data were derived from the Dutch National School Survey on Substance Use among students (N=3209; aged 12-16 years) and one of their parents in 2011. Results: Logistic regression analyses revealed that 1) parental cannabis use was significantly related to more adolescent lifetime and recent cannabis use, and 2) restrictive cannabis-specific parental rules were associated with less adolescent recent cannabis use and less lifetime use of other illicit drugs, even when controlling for sociodemographic factors, general parenting, adolescent tobacco use, and tobacco-specific parenting. In addition, no significant interaction was observed between parental cannabis use and cannabis-specific rules in their relation to adolescent cannabis and other illicit drug use, indicating that cannabis rules are equally associated with adolescent drug use for families with and without parental cannabis experience. Conclusions: In addition to general parenting practices, restrictive cannabis-specific rules are related to lower rates of adolescent cannabis and other illicit drug use. Parents who ever used cannabis have children with a higher prevalence of cannabis use. However, their restrictive cannabis-specific rules are equally related to a lower chance of adolescent cannabis use. abstract_id: PUBMED:35805740 Cannabis Use and Parenting Practices among Young People: The Impact of Parenting Styles, Parental Cannabis-Specific Rules, and Parental Cannabis Use. Cannabis is the most commonly used illicit drug. Its use typically starts in adolescence, and parents play a key role in young people's cannabis use. Our study aimed to examine and compare the effects of parenting styles, parental cannabis-specific rules, and parental cannabis use on young people's cannabis use. The research sample consisted of 839 students from various secondary education programs in Slovenia, aged 14 to 21. Associations between young people's lifetime cannabis use and their experience of parenting practices were assessed using logistic regression, with demographic, socioeconomic, educational, health, and risk behaviors controlled for in a multivariate model. Maternal authoritative parenting (in comparison with permissive parenting), strict maternal cannabis-specific rules, and parental cannabis non-use statistically significantly reduced the likelihood of young people's cannabis use. Its strongest predictor was parental cannabis use, followed by the mother's specific cannabis-use rules and maternal parenting style. The findings of our study can contribute to the development of public health policies to more effectively prevent cannabis use among adolescents and emerging adults, including by designing prevention programs aimed at strengthening parents' general and cannabis-specific practices and competences. abstract_id: PUBMED:10624734 The role of psychopathology in the parenting of drug-dependent women. This paper explores the parenting of drug-dependent women and the contributions of comorbid psychopathology to their parenting.
A sample of 32 children whose mothers were dependent on opioid drugs during pregnancy and 37 children whose mothers were not drug users were followed from birth to middle childhood. Multivariate regression analyses were conducted contrasting whether maternal substance abuse or psychopathology was more closely linked to parenting behaviors and continuity in parenting over time. Maternal drug dependence was related to whether mothers were able to remain primary caregivers for their children over time, even after controlling for psychopathology. Maternal drug use was related to unresponsive and negative parenting behavior during mother-infant interaction, but this relation was largely accounted for by the effects of comorbid maternal psychopathology on parenting, particularly symptoms of antisocial and related personality disorders. For those children whose mothers continued to care for them into middle childhood, perceptions of their mothers as rejecting were related to maternal antisocial personality and maternal depression. Substance-abuse treatment for women should be integrated with interventions addressing their mental health and parenting needs. abstract_id: PUBMED:38458100 Relationship between maternal anxiety and infants' temperament: The mediating role of mindful parenting. Research has indicated that maternal anxiety does have an effect on infant temperament. Therefore, it is important to study the variables that could play a role in this relationship. In this study, we propose that mindful parenting could act as a mediator in this relationship. Thus, the main objective was to evaluate the relationship between maternal anxiety and child temperament (i.e., negative affectivity, surgency, and effortful control) through the mindful parenting of mothers. Mothers (N = 225) self-reported their anxiety, mindful parenting use, and the temperament of their infants (aged 4-18 months). First, the reliability and validity results showed that the infant version of the Interpersonal Mindful Parenting questionnaire was a good tool for the assessment of mindful parenting among parents with infants. The five-factor structure of the questionnaire was confirmed; it involved self-regulation in the parenting relationship, listening with full attention, emotional awareness of the child, compassion for the child, and non-judgmental acceptance of parenting behavior. Correlational analyses showed that maternal anxiety was related to negative affectivity and effortful control in infants. Furthermore, mediational analyses indicated that the relation between maternal anxiety and infant negative affectivity was mediated by self-regulation in parenting and the emotional awareness of the child. In addition, the relation between maternal anxiety and infant effortful control was mediated by compassion for the child and listening with full attention. These results contribute to knowledge about the relation between maternal anxiety and child temperament, which may increase the risk of psychological symptoms. The results of this study suggest that promoting mindful parenting skills may be beneficial for affectivity and effortful control in infants. abstract_id: PUBMED:35084875 Maternal respiratory sinus arrhythmia contextualizes the relation between maternal anxiety and overprotective parenting. The current study examined maternal characteristics that predict the use of overprotective parenting in mothers of toddlers.
Maternal respiratory sinus arrhythmia (RSA) reactivity was tested as a moderator of the relation between maternal anxiety and overprotective parenting. Mothers (n = 151) and their 2-year-old toddlers participated in a laboratory visit and returned for a follow-up visit 1 year later. At child age 2, mothers reported their own anxiety. Mothers' RSA reactivity was measured between a resting baseline and a standardized laboratory task, and overprotective parenting was observed in that task. Toddler fearful temperament (FT) was observed in a separate standardized task as well as reported by mothers. At child age 3, mothers' overprotective parenting behaviors were observed according to the same procedures so change from age 2 could be measured. Results revealed that maternal anxiety and maternal RSA at age 2 interacted to predict relative increases in overprotective parenting behaviors at age 3. At low levels of RSA reactivity, reflecting RSA suppression, maternal anxiety predicted lower levels of overprotective parenting. At high levels of RSA reactivity, reflecting RSA augmentation, maternal anxiety predicted higher levels of overprotective parenting. Our results suggest that RSA suppression may protect mothers with anxiety symptoms from engaging in overprotective parenting, whereas RSA augmentation may put mothers with anxiety symptoms at risk for engaging in overprotective parenting. Findings indicate that the interaction of multiple parental traits should be considered when working with parents and families on parenting behavior. abstract_id: PUBMED:38250635 The Effect of Paternal Co-Parenting on Preschool Children's Problem Behaviors: The Chain-Mediating Role of Maternal Parenting Burnout and Psychological Aggression. Objective: With social change, a growing number of women have joined the workforce, leading to a shift in the traditional roles of child-rearing. There has been a growing focus on the significance of fathers' roles in child development, particularly the influence of fathers on children's problematic behaviors, making this an increasingly prominent issue. However, there is limited understanding regarding the potential mechanisms through which fathers may exert influence on children's problem behaviors. To address this gap, this study sought to investigate the link between paternal co-parenting and preschool children's problem behaviors, and the mediating effects of maternal parenting burnout and psychological aggression. Methods: This study used the Personal Information Form and four scales to administer questionnaires to 1164 mothers of preschool children (Mage = 4.26 ± 0.85) in Guangdong Province, China. The collected data underwent processing and analysis using SPSS 22.0. Results: Paternal co-parenting demonstrated a significantly positive correlation with problem behaviors among preschool children. The impact of paternal co-parenting on children's problem behaviors was mediated by maternal parenting burnout, maternal psychological aggression, and the combined effect of maternal parenting burnout and psychological aggression. Conclusion: Maternal parenting burnout and maternal psychological aggression play a sequential mediating role between paternal co-parenting and problem behaviors among preschool children. This study revealed the internal mechanism through which paternal co-parenting influenced problem behaviors exhibited by children.
It provides some evidence to support the important role of fathers in child development and offers a reference for policymakers and educators to develop interventions for children's problem behaviors. Answer: The question of whether adequate parenting is compatible with maternal drug use is complex and multifaceted. The abstract from PUBMED:18257792 provides insight into this issue by comparing child protection outcomes over the first 5 years of life between children born to drug-using mothers and those born to non-drug-using mothers. The study found that a significantly higher proportion of children born to drug-using mothers were subject to child protection procedures, with an increased risk difference since an earlier report. Additionally, the level of intervention necessary to protect the child increased significantly, with more children from the drug-using group taken into the care of the local authority. Despite early maternal intentions and supportive interventions, a substantial percentage of children born to women with significant substance abuse problems required child protection during the pre-school years. The study concludes that child protection risk assessment procedures need to heavily weigh problematic maternal drug use and that intervention studies with child welfare outcomes are needed to identify effective harm reduction strategies (PUBMED:18257792). Furthermore, the abstract from PUBMED:10624734 explores the parenting of drug-dependent women and the contributions of comorbid psychopathology to their parenting. It found that maternal drug dependence was related to whether mothers could remain primary caregivers over time and was associated with unresponsive and negative parenting behavior. However, this relation was largely accounted for by the effects of comorbid maternal psychopathology, particularly symptoms of antisocial and related personality disorders. The study suggests that substance abuse treatment for women should be integrated with interventions addressing their mental health and parenting needs (PUBMED:10624734). In summary, the evidence suggests that maternal drug use can have a significant negative impact on parenting adequacy and child welfare outcomes, particularly when comorbid psychopathology is present. Adequate parenting appears to be compromised in the context of maternal drug use, necessitating comprehensive interventions that address both substance abuse and mental health issues to improve parenting practices and child protection.
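As an arithmetic check on the figures from PUBMED:18257792 cited in the answer above, the reported risk difference and confidence interval can be reproduced from the counts given in the abstract. The Wald interval used below is a standard textbook choice and an assumption on our part; the authors do not state which method they applied.

```python
import math

# Counts reported in the abstract of PUBMED:18257792
cases_protected, cases_total = 26, 55        # children of drug-using mothers
controls_protected, controls_total = 18, 96  # matched control children

p1 = cases_protected / cases_total           # ~0.473
p2 = controls_protected / controls_total     # ~0.188
rd = p1 - p2                                 # risk difference, ~0.285

# Wald 95% confidence interval for a difference of two proportions
se = math.sqrt(p1 * (1 - p1) / cases_total + p2 * (1 - p2) / controls_total)
print(f"RD = {rd:.1%}, 95% CI {rd - 1.96 * se:.1%} to {rd + 1.96 * se:.1%}")
# -> RD = 28.5%, 95% CI 13.2% to 43.9%, matching the reported values
```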
Instruction: Do media portrayals of obesity influence support for weight-related medical policy? Abstracts: abstract_id: PUBMED:23668850 Do media portrayals of obesity influence support for weight-related medical policy? Objective: The purpose of this research was to examine the impact of nonstigmatizing visual portrayals of obesity on support for a discriminatory weight-based medical policy. Method: Participants read an online news story about a policy to deny fertility treatment to obese women that was accompanied by a nonstigmatizing image, a stigmatizing image, or no image of an obese couple. A balanced discussion of the policy was presented, with information both questioning the policy as discriminatory and supporting the policy because of weight-related medical complications. Results: Participants who viewed the nonstigmatizing image were less supportive of the policy to deny obese women fertility treatment and recommended the policy less strongly than participants who viewed the stigmatizing image. Furthermore, weaker perceptions of medical risk mediated the effect of the nonstigmatizing image on policy ratings. Conclusion: The findings indicate that simply eliminating stigmatizing media portrayals of obesity may help reduce bias. abstract_id: PUBMED:30133777 State Agency Support of Weight-Related School Policy Implementation. Background: In this study, we describe state agency strategies to support weight-related policy implementation in schools, and examine the association among state support, obesity prevalence, and strength of state policies governing school nutrition and physical education. Methods: The 2012 School Health Policies and Practices Study describes the prevalence of implementation support that state agencies provided to schools/districts. Implementation support items were analyzed by weight-related policy area (eg, advertising, wellness policy) and by type of support (eg, technical assistance). Results were summed to create a total weight-related policy support score. Linear regression was used to examine associations between policy support and state youth obesity prevalence (2011-2012 National Survey of Children's Health), overall and stratified by state policy strength (2012 Classification of Laws Associated with School Students). Results: States provided support most commonly for school meals and wellness policies (89% and 81%, respectively) and least often for after-school PE (26%). Most states (80%) provided technical assistance. The total weight-related policy support score had a significant positive association with state-level youth overweight/obesity prevalence (p = .03). Conclusion: State agencies appear to be responding to their youth obesity prevalence with technical support. Schools and state agencies should work in collaboration to provide a healthy school environment for all students. abstract_id: PUBMED:33646130 Interplay of Support, Comparison, and Surveillance in Social Media Weight Management Interventions: Qualitative Study. Background: There has been a significant increase in the trend of using social media as a platform to deliver weight management interventions. This illustrates a need to develop a holistic understanding of doctor-patient communication and peer-to-peer communication in social media interventions and to determine their influences on weight management for people with overweight or obesity.
Such studies will highlight how social media can be more effectively integrated into weight management programs to enhance individuals' short-term and long-term weight management behaviors. Objective: The aim of this study was to examine patients' experiences with doctor-patient communication and peer interactions in a social media-based (WeChat) weight management program, and to describe the interplay of three social influence factors (social support, social comparison, and surveillance) in their weight control practices. The program, designed and implemented by the research team located in a tertiary referral hospital in a southeastern province in China, included both diet and physical activity components that targeted people with overweight or obesity. Methods: We conducted in-depth interviews with 32 program participants of different ages (mean 35.6, SD 7.7 years), gender (18 women), duration of program membership (mean 1.4 years), and weight loss outcomes (54% weight loss to 9% weight gain). All interview data were audio-recorded, transcribed, and translated using the translation-backtranslation technique. NVivo software was used to facilitate the coding process. Results: Thematic analysis indicated the distinct functions of professionally led support and peer support. Professional support was presented in the form of knowledge infusion, efficacy enhancement, and provision of timely feedback. Peer support fostered empathy and a sense of belonging, and had a mutually reinforcing relationship with peer comparison and peer-based surveillance. Peer comparison enhanced motivation and positive competition. However, it also reinforced negative group norms, and resulted in downturns in reference standards and collective inactivity. Social media surveillance prompted participants' reactions to the gaze from medical professionals and peers that could be encouraging or inhibiting. Surveillance enhanced vigilance with weight control norms; however, its influence weakened when participants chose to fake weight data and turn off notifications. Findings from this study illustrated the interrelated and fluctuating influences of support, comparison, and surveillance. Conclusions: The interactive traits of social media eased the practices of social support and social comparison, and created new forms of surveillance. This study contributes to an in-depth understanding of social media influences on individuals' weight control behaviors. Practical implications of the study concern improved strategies for maintaining the positive dynamics of social media interactions and preventing negative resistance to surveillance technology. Trial Registration: Chinese Clinical Trial Registry ChiCTR1900025861; http://www.chictr.org.cn/showprojen.aspx?proj=42497. abstract_id: PUBMED:28718352 Narratives for Obesity: Effects of Weight Loss and Attribution on Empathy and Policy Support. Despite an urgent need to address the issue of obesity, little research has examined the psychological factors that influence support for obesity-related policy initiatives, which represent an important tool for addressing this complex health issue. In the present study, we measured the degree to which people supported obesity-related policy interventions and empathized with a person struggling with obesity after reading a personal account of his or her situation.
The narrative described an obese individual who was portrayed as either successfully losing weight or not, and as attributing his or her weight-loss outcome to personal or environmental factors. We found that protagonists who successfully lost weight and/or took personal responsibility for their situation elicited more empathy from participants, which was associated with support for societal policy interventions for obesity. These findings suggest that specific features of personal narratives influence support for obesity-related policies and highlight empathy as a mechanism through which such narratives affect obesity-related attitudes. abstract_id: PUBMED:34655859 Do media portrayals and social consensus information impact anti-fat attitudes and support for anti-weight discrimination laws and policies? Although weight stigma and discrimination are associated with increased body dissatisfaction and eating disorder risk, reduced opportunities, and poorer well-being, there are few legal protections for such discrimination in the U.S. We addressed one barrier to enacting protective legislation - public attitudes toward anti-weight discrimination laws - by assessing the impact of media representations of fatness and information about peer attitudes. Using a 2 × 2 experimental design, participants (N = 525) completed baseline assessments of political conservatism and weight bias and were randomly assigned to view fat-negative or fat-positive media content that was ostensibly supported or not supported by their peers, followed by questionnaires assessing fat phobia and legislative attitudes. Two-way ANCOVAs controlling for baseline weight bias and political conservatism indicated a significant effect for media framing, with greater fat phobia and less support for anti-discrimination laws and policies among those who viewed the fat-negative frame; however, effects for ostensible peer support and interaction effects were not significant. These preliminary findings suggest that efforts to shift media rhetoric may enhance support for anti-weight discrimination laws. Future research should investigate other barriers to anti-discrimination legislation and estimate their impact on body dissatisfaction, eating disorder risk, and other indicators of population health. abstract_id: PUBMED:28384631 Public Support for Weight-Related Antidiscrimination Laws and Policies. Objective: Weight-related discrimination is prevalent and associated with health impairments for those who are targeted, which underscores the need of antidiscrimination legislation. This study is the first to examine public support of weight-related antidiscrimination laws or policies in Germany, compared to the US and Iceland. Methods: In a representative German population sample (N = 2,513), public support for general and employment-specific weight-related antidiscrimination policies, weight-based victimization, and weight bias internalization were measured through established self-report questionnaires. Results: Half of the German population sample agreed with antidiscrimination policies. General antidiscrimination laws received lower support than employment-specific laws. Support for policies considering obesity a physical disability was greatest in Germany, whereas support for employment-specific antidiscrimination laws was lower in Germany than in the US and Iceland. 
Total support for weight-related antidiscrimination policies was significantly predicted by lower age, female gender, obese weight status, residence in West Germany, church membership, and readiness to vote in elections. Conclusion: German support for weight-related antidiscrimination policies is moderate. Increasing awareness about weight-related discrimination and laws prohibiting this behavior may help to promote policy acceptance. abstract_id: PUBMED:24754230 Beyond personal responsibility: effects of causal attributions for overweight and obesity on weight-related beliefs, stigma, and policy support. Objective: The objective of this research was to compare the effects of different causal attributions for overweight and obesity, among individuals with overweight and obesity, on weight-related beliefs, stigmatising attitudes and policy support. Design: In Study 1, an online sample of 95 US adults rated the extent to which they believed various factors caused their own weight status. In Study 2, 125 US adults read one of three randomly assigned online passages attributing obesity to personal responsibility, biology, or the 'food environment.' All participants in both studies were overweight or obese. Main Outcome Measures: All participants reported beliefs about weight loss, weight-stigmatising attitudes, and support for obesity-related policies. Results: In Study 1, biological attributions were associated with low weight-malleability beliefs and blame, high policy support, but high internalised weight bias. 'Food environment' attributions were not associated with any outcomes, while 'personal responsibility' attributions were associated with high prejudice and blame. In Study 2, participants who received information about the food environment reported greater support for food-related policies and greater self-efficacy to lose weight. Conclusion: Emphasising the role of the food environment in causing obesity may promote food policy support and health behaviours without imposing the negative consequences associated with other attributions. abstract_id: PUBMED:26395745 Effects of competing news media frames of weight on antifat stigma, beliefs about weight and support for obesity-related public policies. Objectives: In the popular news media, public health officials routinely emphasize the health risks of obesity and portray weight as under personal control. These messages may increase support for policies designed to reduce rates of obesity, but can also increase antifat stigma. Less often, the media cover 'Health at Every Size' or 'Fat Rights' perspectives that may have the opposite effects. We investigated how exposure to different 'fat frames' shifts attitudes about weight and support for obesity policies. Methods: Across four experiments (n=2187), people read constructed news articles framing fatness as negative (unhealthy, controllable, acceptable to stigmatize) or positive (healthy, uncontrollable, unacceptable to stigmatize). Results: Compared with people who read fat-positive frames, people who read fat-negative frames expressed more: belief in the health risks of being fat (d=0.95-1.22), belief that weight is controllable (d=0.38-0.55), support for charging obese people more for health insurance (d=0.26-0.77), antifat prejudice (in three out of four experiments, d=0.28-0.39), and willingness to discriminate against fat people (d=0.39-0.71); they also expressed less willingness to celebrate body size diversity (d=0.37-0.64).
They were also less willing to say that women at the lower end of the obese range could be healthy at their weights. Effects on support for public policies, however, were generally small and/or nonsignificant. Compared with a control condition, exposure to fat-positive frames generally shifted attitudes more than fat-negative frames. In experiment 4, adding a message about the unacceptability of weight-based discrimination to unhealthy/controllable news articles only reduced antifat stigma on one of three measures compared with articles adding a discrimination-acceptable message. Conclusions: Exposure to different news frames of fat can shift beliefs about weight-related health risks and weight-based stigma. Shifting policy attitudes, however, is more challenging. abstract_id: PUBMED:25353085 Parental support for policy actions to reduce weight stigma toward youth in schools and children's television programs: trends from 2011 to 2013. Background: Despite the pervasiveness and negative physical and psychosocial implications of weight-based victimization (WBV) in youth, antibullying policies in schools rarely address this issue. Additionally, children's media perpetuates weight stigma, but regulation of stigmatizing media content is nonexistent. Methods: In 2011-2013, a diverse national sample of 2185 parents (n=716 in 2011, 755 in 2012, and 714 in 2013) was analyzed to evaluate parental support for four proposed policies across the 3 years. Actions addressed (1) strengthening policies to reduce weight stigma, (2) media portrayals of children with diverse body sizes, (3) media portrayals of such children engaged in healthy behaviors, and (4) antibullying policies in schools. Chi-square tests with Bonferroni's corrections and multiple logistic regression analyses were conducted. Results: Across time, support for policies to address weight stigma remained consistent or increased, primarily in 2012-2013. At least 86% of participants consistently favored implementing antibullying policies in schools. Parents became increasingly supportive of policies regulating television content to positively portray children of diverse body sizes and show such children engaged in health behaviors, as well as establishing weight-related antibullying policies. Specific predictors of support included gender, race, and political affiliation. Conclusions: There is a consistent and high level of parental support for weight stigma-related policies, particularly for antibullying policies. Findings can inform development of policies to rectify health and social disparities faced by overweight and obese youth. abstract_id: PUBMED:35706492 Influence and effects of weight stigmatisation in media: A systematic review. Background: The lack of a comprehensive understanding of the role of mass media in perpetuating weight stigma hinders policy formulation. We reviewed the influence of mass media on weight stigmatisation and the effectiveness of media-based interventions designed to prevent or reduce stigma. Methods: We conducted a systematic review across seven databases from inception to December 2021. Included studies assessed exposure to or impact of weight stigma in mass media or examined interventions to reduce stigma through media in populations aged 12+ years. We synthesised data narratively, categorising studies based on similarity in focus to produce a set of integrated findings. The systematic review is registered in PROSPERO (No. CRD42020176306).
Findings: One hundred and thirteen records were eligible for inclusion from 2402 identified; 95 examined the prevalence of stigmatising content in mass media and its impact on stigma. Weight stigma was prevalent across media types, with the dominant discourse viewing overweight and obesity as an individual responsibility and overlooking systemic factors. Exposure to stigmatising content was found to negatively influence attitudes towards people with overweight or obesity. Few studies considered methods of reducing stigma in the media, with only two testing media-based interventions; their results were promising but limited. Interpretation: Weight stigma in media content is prevalent and harmful, but there is little guidance on reducing it. The focus of future research needs to shift from assessing prevalence and impacts to weight stigma interventions. Funding: None. Answer: Yes, media portrayals of obesity do influence support for weight-related medical policy. Research has shown that when participants viewed nonstigmatizing images of obesity, they were less supportive of policies to deny obese women fertility treatment compared to participants who viewed stigmatizing images. Weaker perceptions of medical risk mediated the effect of the nonstigmatizing image on policy ratings, indicating that eliminating stigmatizing media portrayals of obesity may help reduce bias (PUBMED:23668850). Additionally, exposure to different 'fat frames' in the media can shift beliefs about weight-related health risks and weight-based stigma, although shifting policy attitudes is more challenging. Exposure to fat-positive frames generally shifted attitudes more than fat-negative frames (PUBMED:26395745). Furthermore, narratives that describe an obese individual who successfully lost weight and/or took personal responsibility for their situation elicited more empathy from participants, which was associated with support for societal policy interventions for obesity (PUBMED:28718352). These findings suggest that specific features of media narratives influence support for obesity-related policies and highlight empathy as a mechanism through which such narratives affect obesity-related attitudes.
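A brief note on the effect sizes quoted in PUBMED:26395745 above: the d values are standardized mean differences (Cohen's d), conventionally computed as d = (M1 - M2) / s_pooled, where s_pooled = sqrt(((n1 - 1)s1^2 + (n2 - 1)s2^2) / (n1 + n2 - 2)) and M and s are the group means and standard deviations. By the usual rule of thumb, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, so the reported range of roughly 0.26-1.22 spans small to very large framing effects. (Reading these d values as Cohen's d is the standard interpretation, though the abstract does not name the statistic explicitly.)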
Instruction: Insulin metabolism is altered in migraineurs: a new pathogenic mechanism for migraine? Abstracts: abstract_id: PUBMED:18052953 Insulin metabolism is altered in migraineurs: a new pathogenic mechanism for migraine? Background: Migraine is a complex biochemical dysfunction attributed to a disorder of the trigeminal and hypothalamic pathways. Impairment of glucose metabolism has been reported in migraine, but data are scanty and inconsistent. Objective: The main aim was to verify whether migraineurs have abnormalities of glucose and insulin metabolism. We also studied correlations between blood glucose and insulin and between insulin levels and migraine severity. Patients And Methods: Patients with migraine or headache other than migraine, and healthy volunteers were included. All had general blood tests and a standard oral glucose tolerance test after a 12-hour fast, and glucose and insulin were measured. Results: Over a 6-month period, we recruited 84 migraineurs (73 women, 11 men), 25 patients with nonmigraine headache (20 women, 5 men), and 26 healthy controls (24 women, 2 men). Multivariate analysis confirmed a significant difference between groups for glucose levels (P < .0001), but no significant time interaction. The differences were mostly between migraine and healthy controls (P < .0001) and to a lesser extent between other headaches and healthy controls (P < .05). A significant difference between groups was also found for insulin (P < .0001), with a significant time interaction. The difference was confirmed for migraine compared to other headaches (P < .0001) and healthy controls (P < .0001). Conclusions: Blood glucose levels may be high in headache patients, but do not seem to be specific to migraineurs. Insulin levels were higher in migraineurs, and seemed specific to this group. These findings are in keeping with recent reports on the effects of insulin on brain functions and lend support to the possibility that insulin is involved in the pathogenesis of migraine. abstract_id: PUBMED:18377392 Insulin metabolism is altered in migraineurs. A new pathogenic mechanism for migraine? A comment. N/A abstract_id: PUBMED:35627115 Glucose-Related Traits and Risk of Migraine-A Potential Mechanism and Treatment Consideration. Migraine and glucose-related (glycaemic) traits (fasting glucose, fasting insulin, and type 2 diabetes) are common and complex comorbid disorders that cause major economic and social burdens on patients and their families. Studies on the relationship between migraine and glucose-related traits have yielded inconsistent results. The purpose of this review is to synthesise and discuss the information from the available literature on the relationship of fasting glucose, fasting insulin, and type 2 diabetes (T2D) with migraine. Publications on migraine and fasting glucose, migraine and fasting insulin, and migraine and T2D were identified from a PubMed and Google Scholar database search and reviewed for this article. Multiple publications have suggested that the comorbidity of migraine and glucose-related traits may have a similar complex pathogenic mechanism, including impaired glucose homeostasis, insulin resistance, reduced cerebrovascular reactivity, abnormal brain metabolism, shared genetic factors, neurotransmitters, and sex hormones. Furthermore, several studies have found a bi-directional link between migraine and both insulin resistance and T2D.
There is strong evidence for a biological association between migraine headache and glucose-related traits, and burgeoning evidence for shared genetic influences. Therefore, genetic research into these comorbid traits has the potential to identify new biomarkers and therapeutic targets and provide biological insight into their relationships. We encourage healthcare professionals to consider the co-occurrence of migraine with glucose-related traits in the evaluation and treatment of their patients. abstract_id: PUBMED:22435925 Reduced urinary glutamate levels are associated with the frequency of migraine attacks in females. Background And Purpose: Recent evidence indicates that glutamatergic homeostasis disorders are implicated in the pathogenesis of migraine. In particular, plasma and cerebrospinal fluid glutamate levels seem to be altered in migraine patients. However, the impact of glutamate on migraine, especially on aura symptoms and on alterations in the frequency of migraine attacks, as well as its role in migraine-related metabolic dysfunctions such as hyperinsulinaemia and an atherogenic lipid profile, remains elusive to date. The aim of the present study was to investigate the impact of glutamate on migraine and related metabolic dysfunctions. Methods: We investigated the urinary glutamate levels of female migraineurs (n = 48) in the interictal phase and healthy controls (n = 48). Parameters of insulin and lipid metabolism, inflammatory parameters and anthropometric parameters were additionally determined. Results: Urinary glutamate levels of female migraineurs were significantly decreased with respect to the control group. Logistic regression revealed an odds ratio of 4.04 for migraine. We found a significant correlation with the time since patients' last attack and a significant inverse correlation with the annual frequency of migraine attacks. Other parameters of insulin and lipid metabolism, anthropometric and inflammatory parameters showed no significant correlation with glutamate levels. Conclusion: We show here that female migraineurs exhibit decreased urinary glutamate levels, which are associated with a 4.04-fold higher risk for migraine and correlated with patients' frequency of migraine attacks. abstract_id: PUBMED:35296423 Migraine, Brain Glucose Metabolism and the "Neuroenergetic" Hypothesis: A Scoping Review. Increasing evidence suggests that migraine may be the result of impaired brain glucose metabolism. Several studies have reported brain mitochondrial dysfunction, impaired brain glucose metabolism and gray matter volume reduction in specific brain areas of migraineurs. Furthermore, peripheral insulin resistance, a condition demonstrated in several studies, may extend to the brain, leading to brain insulin resistance. This condition has been proven to downregulate insulin receptors, both in astrocytes and neurons, triggering a reduction in glucose uptake and glycogen synthesis, mainly during high metabolic demand. This scoping review examines the clinical, epidemiologic and pathophysiologic data supporting the hypothesis that abnormalities in brain glucose metabolism may generate a mismatch between the brain's energy reserve and metabolic expenditure, triggering migraine attacks. Moreover, alterations in glucose homeostasis could generate a chronic brain energy deficit promoting migraine chronification.
Lastly, insulin resistance may link migraine with its comorbidities, like obesity, depression, cognitive impairment and cerebrovascular diseases. PERSPECTIVE: Although additional experimental studies are needed to support this novel "neuroenergetic" hypothesis, brain insulin resistance in migraineurs may unravel the pathophysiological mechanisms of the disease, explaining migraine chronification and connecting migraine with its comorbidities. Therefore, this hypothesis could elucidate novel potential approaches for migraine treatment. abstract_id: PUBMED:29619630 Is Migraine Primarily a Metaboloendocrine Disorder? Purpose Of The Review: The goals of this review are to evaluate recent studies regarding comorbidity between migraine and different metabolic and endocrine disorders and to discuss the role of insulin resistance as a common pathogenetic mechanism of these diseases. Recent Findings: Recently, several studies showed that migraine is associated with insulin resistance, a condition in which a normal amount of insulin induces a suboptimal physiological response. All the clinical studies that used the oral glucose tolerance test to examine insulin sensitivity found that, after a glucose load, migraine patients show a significant increase in both plasma insulin and glucose concentrations in comparison with controls. On the contrary, no association was found between migraine and type 2 diabetes, while type 1 diabetes seems to have a protective effect in the disease. Obesity and hypertension were shown to be risk factors for both episodic and chronic migraine. Metabolic syndrome has recently been associated mainly with migraine with aura and is now considered a risk factor also for medication overuse headache. Finally, a bidirectional association between migraine and hypothyroidism has recently been demonstrated, suggesting that common genetic or autoimmune mechanisms underlie both diseases. Recent studies showed that insulin receptor signaling and the related physiological responses are altered in migraine and may have a relevant pathogenic role in the disease. Further studies are warranted in order to better elucidate the mechanisms underlying insulin resistance in migraine and to develop new therapeutic strategies for this debilitating disease. abstract_id: PUBMED:19740122 Hyperinsulinaemia in migraineurs is associated with nitric oxide stress. There is growing evidence that alterations in insulin and glucose metabolism may be involved in the pathogenesis of migraine. Nitric oxide (NO) stress has been associated with migraine. However, the role of NO in insulin and glucose metabolism in migraineurs has remained elusive to date. The aim of the present study was to investigate insulin and glucose metabolism in migraineurs and to determine possible interactions with the NO pathway. One hundred and twenty non-obese probands participated in this study, including 48 migraineurs and 72 healthy volunteers. Various parameters of the NO pathway and glucose metabolism, as well as body measurement parameters, were determined. We found highly significantly increased insulin levels and Homeostasis Model Assessment (HOMA) index values in migraine patients, whereas fasting glucose was decreased. Logistic regression revealed an odds ratio of 5.67 for migraine when comparing the lowest with the highest quartile of HOMA. Multivariate analysis showed that HOMA, waist-to-length ratio and nitrite, as parameters of NO stress, were highly significantly correlated.
We show here that hyperinsulinaemia is associated with migraine and, furthermore, is correlated with increased NO stress. These findings represent a new pathophysiological mechanism that may be of clinical relevance. abstract_id: PUBMED:16979414 The messenger RNA profiles in liver, hypothalamus, white adipose tissue, and skeletal muscle of female Zucker diabetic fatty rats after topiramate treatment. Topiramate (TPM) is a novel neurotherapeutic agent approved for the treatment of epilepsy and for migraine prophylaxis. It has been observed that in obese-associated, type 2 diabetic rodent models, TPM treatment reduced body weight gain, improved insulin sensitivity, and enhanced glucose-regulated insulin release. Long-term treatment with TPM thus ameliorated obesity and diabetic syndromes in female Zucker diabetic fatty rats and db/db mice. The molecular mechanisms of the antiobesity and antidiabetic effects of TPM remain unknown. We have applied DNA microarray technology to explore genes that might be involved in the mechanisms by which TPM improves insulin sensitivity and blood glucose handling, as well as body weight control. In female Zucker diabetic fatty rats, 7-day TPM treatment significantly reduced the plasma levels of glucose and triglyceride in a dose-dependent manner. The DNA microarray data revealed that TPM treatment altered messenger RNA profiles in liver, hypothalamus, white adipose tissue, and skeletal muscle. The most marked effect of TPM on gene expression occurred in the liver, among genes related to metabolic enzymes and signaling regulatory proteins involved in energy metabolism. TPM treatment decreased messenger RNA amounts for sterol regulatory element binding protein-1c, stearoyl-coenzyme A (CoA) desaturase-1, choline kinase, and fatty acid CoA ligase, long chain 4. TPM also up-regulated 3 cholesterol synthesis genes. In addition, the short-term effect of TPM on gene expression was examined at 16 hours after a single administration. TPM markedly reduced hepatic expression of genes related to fatty acid synthesis, e.g., stearoyl-CoA desaturase and acetyl-CoA carboxylase. TPM also changed genes related to fatty acid beta-oxidation, increasing 3-2-trans-enoyl-CoA isomerase and mitochondrial acyl-CoA thioesterase, and decreasing fatty acid CoA ligase (long chain 2 and long chain 5). These gene expression changes were independent of food intake, as shown by pair feeding. Our results suggest that TPM regulates hepatic expression of genes involved in lipid metabolism, which could be part of the mechanisms by which TPM reduces plasma triglyceride levels in obese diabetic rodents. abstract_id: PUBMED:21518147 Oxidative stress is associated with migraine and migraine-related metabolic risk in females. Background And Purpose: Oxidative stress has been proposed to be implicated in the pathophysiology of migraine. However, the data are partly conflicting and the possible underlying mechanisms remain elusive to date. The aim of this study was to investigate the oxidative stress status of female patients with migraine and its implications for migraine-related metabolic alterations. Methods: The oxidative stress markers malondialdehyde (MDA), 4-hydroxy-2-nonenal (HNE), and carbonylated proteins, together with parameters of associated nitric oxide stress, inflammation, and lipid and glucose metabolism, were determined in the interictal phase in female patients with migraine and controls. Results: We found significantly increased HNE levels in female migraineurs compared with controls.
Logistic regression analyses of HNE revealed an odds ratio for migraine of 4.55. HNE showed significant correlations with the nitric oxide pathway and with insulin and lipid metabolism. Conclusions: We show here that increased oxidative stress is associated with migraine and contributes to migraine-related metabolic risk factors such as nitrosative stress, an atherogenic lipid profile and hyperinsulinemia. Our data suggest that oxidative stress may represent a key event in the pathophysiology of migraine and a suitable therapeutic target. abstract_id: PUBMED:20825467 Increased matrix metalloproteinase activity is associated with migraine and migraine-related metabolic dysfunctions. Objective: Matrix metalloproteinases (MMPs) and their inhibitors (TIMPs) have been proposed to be involved in the pathophysiology of migraine. Moreover, MMPs may also be involved in migraine-related metabolic alterations like an atherogenic lipid profile and hyperinsulinemia. The aim of this study was to investigate the impact of MMPs and TIMPs on migraine with and without aura and related metabolic dysfunctions. Methods: MMP activity, six MMPs and three TIMPs, parameters of insulin and lipid metabolism as well as anthropometric parameters were determined in 124 non-obese subjects. Results: We found highly significantly increased MMP activity in migraine patients independent of aura symptoms, which was associated with migraine with an odds ratio of 7.57. Interestingly, none of the determined MMPs and TIMPs showed significantly different serum levels between migraine patients and healthy controls. We found significant correlations between MMP activity and parameters of insulin and lipid metabolism, such as the Homeostasis Model Assessment index (HOMA index), cholesterol, triglycerides, and oxidized LDL. Conclusion: We show here that increased MMP activity is tightly associated with migraine and migraine-related hyperinsulinemia and atherogenic lipid alterations. Our findings represent a new pathophysiological mechanism, which may be of clinical relevance, especially in regard to therapeutic approaches using MMP inhibitors. Answer: The research suggests that insulin metabolism is indeed altered in individuals with migraines, which could represent a new pathogenic mechanism for the condition. Studies have found that migraineurs exhibit abnormalities in glucose and insulin metabolism, with higher insulin levels being specific to this group, indicating a potential involvement of insulin in the pathogenesis of migraine (PUBMED:18052953). The relationship between migraine and glucose-related traits, such as fasting glucose, fasting insulin, and type 2 diabetes, has been inconsistent, but there is evidence for a biological association and shared genetic influences between these conditions and migraine (PUBMED:35627115). Additionally, hyperinsulinaemia has been associated with migraine and is correlated with increased nitric oxide stress, suggesting a new pathophysiological mechanism that may be clinically relevant (PUBMED:19740122). Insulin resistance, where a normal amount of insulin induces a suboptimal physiological response, has been shown to be associated with migraine, and insulin receptor signaling alterations may have a significant pathogenic role in the disease (PUBMED:29619630). Furthermore, oxidative stress, which is associated with migraine, contributes to migraine-related metabolic risks such as nitrosative stress, an atherogenic lipid profile, and hyperinsulinemia (PUBMED:21518147).
Increased matrix metalloproteinase activity, which is associated with migraine, also correlates with migraine-related metabolic dysfunctions like hyperinsulinemia and atherogenic lipid alterations (PUBMED:20825467). In summary, the alteration of insulin metabolism in migraineurs supports the hypothesis that insulin and related metabolic dysfunctions could play a role in the pathogenesis of migraine, offering potential new avenues for therapeutic strategies and a better understanding of the disease's comorbidities.
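Several of the abstracts above (PUBMED:19740122, PUBMED:21518147, PUBMED:20825467) quantify insulin resistance with the Homeostasis Model Assessment (HOMA) index, computed from a single fasting sample. The standard HOMA-IR formula is given below for orientation; it is an assumption here, since none of the abstracts writes the formula out:

\[
\text{HOMA-IR} = \frac{\text{fasting insulin } [\mu\text{U/mL}] \times \text{fasting glucose } [\text{mmol/L}]}{22.5}
\]

Higher values indicate greater insulin resistance; the odds ratio of 5.67 quoted above (PUBMED:19740122) compares the highest quartile of this index with the lowest.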
Instruction: Do surface treatments affect the optical properties of ceramic veneers? Abstracts: abstract_id: PUBMED:24840909 Do surface treatments affect the optical properties of ceramic veneers? Statement Of Problem: Surface treatments may affect the optical properties of ceramic veneers before cementation. Purpose: The purpose of this study was to evaluate whether various surface treatments affect the optical properties of different types of ceramic veneers. Material And Methods: Disk-shaped ceramic veneers (N=280) were prepared from the IPS e.max Press, e.max CAD, Empress Esthetic, e.max Ceram, and Inline ceramic systems with 0.5-mm and 1.0-mm thicknesses. The ceramics were divided into 4 groups: no surface treatment; etched with hydrofluoric acid; airborne-particle abraded with 30-μm Al2O3; and irradiated with an erbium:yttrium-aluminum-garnet laser. A translucent shade of resin was chosen for cementation. Color parameters were examined with a colorimeter. Statistical analyses were done with 3-way ANOVA and the Bonferroni test (P=.05). Results: Significant interactions were noted between the surface treatments, ceramic type, and thickness for ΔE values (P=.01), and no significant interactions were noted for L* (P=.773), a* (P=.984), and b* (P=.998). The greatest color change occurred after airborne-particle abrasion with 0.5-mm-thick e.max Press (2.9 ΔE). Significant differences in ΔE values were found among the hydrofluoric acid, airborne-particle abrasion, and laser groups for 0.5-mm-thick ceramics, except IPS Inline, and among the hydrofluoric acid, airborne-particle abrasion, and laser groups for 1.0-mm-thick ceramics, except Empress Esthetic ceramics. Conclusions: The color change of the ceramics increased after the surface treatments, particularly as the ceramics became thinner. abstract_id: PUBMED:28590094 Optical Properties and Failure Load of Thin CAD/CAM Ceramic Veneers. This study aimed to compare the optical properties and failure load of leucite (IPS Empress CAD LT) and lithium disilicate glass ceramic (IPS e.max CAD LT) materials in incisor veneers. Four groups of veneers were made on 36 bovine incisors with a Cerec 3D milling unit (n=9/group): 0.5 and 0.3 mm thick leucite glass and 0.5 and 0.3 mm lithium disilicate glass veneers. The optical features were measured with a CM-700d spectrophotometer using white and yellow try-in pastes. Differences were determined by means of ΔE value and luminance. The bovine teeth with veneers were mounted on acrylic resin blocks (Palapress, Heraeus Kulzer) and a static loading test was conducted (LR30K plus, Lloyd Instruments). The color difference of the veneers could be noticed with yellow and white cements when the material thickness increased from 0.3 to 0.5 mm (leucite ΔE yellow 4.4, white 6.0; lithium disilicate ΔE yellow 2.1, white 4.1). Both materials showed similar failure loads with 0.5 mm veneers (leucite 1906 ± 319 N; lithium disilicate 2098 ± 309 N). The failure load of 0.3 mm thick lithium disilicate veneers (2002 ± 427 N) was comparable with the 0.5 mm veneers. Ultrathin lithium disilicate glass ceramic veneers (0.3 mm) could be a potential option for clinical use. abstract_id: PUBMED:34041885 Effect of ceramic materials on the optical properties of porcelain veneers for tetracycline-stained teeth. Objectives: To analyze the chromatic properties and translucency of porcelain veneers made from different ceramic materials against the background of tetracycline-stained teeth.
Methods: Porcelain specimens (A1, A3, B2, B4) measuring 0.50 mm in thickness were prepared by heat-press casting and layering. The L*, a*, and b* values of the specimens against simulated tetracycline tooth and black-and-white backgrounds were measured by a spectrophotometer, and the color differences ΔE001 between specimens on simulated tetracycline backgrounds and the backgrounds themselves and ΔE002 between specimens on simulated tetracycline backgrounds and the white background were calculated. The translucency parameter (TP) was also evaluated. Results: The ΔE001 of feldspathic specimens (IPS d.SIGN) with the opaque layer was significantly greater than that of glass ceramic specimens (IPS e.max Press LT), and the ΔE001 of group B4 was consistently greater than those of the other color groups (P<0.05). The ΔE002 values of all feldspathic specimens with the opaque layer were less than 1.25, and the ΔE002 values of the glass ceramic specimens were greater than 2.23. However, no significant difference was observed among the different color groups (P>0.05). The TP values of feldspathic specimens with the opaque layer were significantly lower than those of glass ceramic specimens (P<0.05), but no significant difference was observed among different color groups (P>0.05). Conclusions: When changing the color of tetracycline-stained teeth, 0.50 mm-thick IPS d.SIGN feldspathic veneers with an opaque layer provide better chromatic properties than IPS e.max Press LT glass ceramic veneers. However, the translucency of feldspathic veneers is generally poorer than that of glass ceramic veneers. abstract_id: PUBMED:34627740 Comparison of feldspathic veneer surface treatments on ceramic bracket SBS, ARI and surface roughness after different debonding/polishing methods: An in vitro study. Objectives: To analyse, in vitro, the surface properties and shear bond strength after debonding and polishing procedures of ceramic brackets directly bonded to 0.3-0.5-mm thick feldspathic veneers. Materials And Methods: Fifty-six feldspathic ceramic veneer samples (0.3 to 0.5-mm thick) were allocated into groups according to the veneer surface treatment procedure: (S) glaze layer retained; (SHF) hydrofluoric (HF) acid etch; (SOXA) Al2O3 sandblasting; and (SB) diamond bur roughening. Specimens were treated with Monobond N® silane and ceramic brackets were bonded with Transbond XT®. Shear bond strength (SBS) was assessed with a universal testing machine and ARI was evaluated under a stereomicroscope coupled to a digital camera. Remaining bonding composite was removed using a porcelain polishing kit and surface roughness was assessed with a stylus profilometer. Results: No statistically significant differences were identified for SBS among the study groups (S, SHF, SOXA and SB) (P>0.05). The majority of the specimens presented ARI scores 3 and 2 (P>0.05). All of the study groups presented increased surface roughness after debonding and polishing procedures (P<0.05), with significantly greater values observed in the SB group (RaF: 1.27±0.41; RzF: 6.23±1.82) (P<0.05). Conclusions: Surface treatment with hydrofluoric acid etch, Al2O3 sandblasting and diamond bur did not enhance the SBS of orthodontic brackets bonded to ceramic veneers. Ceramic surfaces treated with diamond burs presented significantly increased roughness after adhesive removal. abstract_id: PUBMED:37648480 Ceramic partial laminate veneers in anterior teeth: A literature review.
Study Selection: Partial laminate veneers, defined as small ceramic restorations adhesively luted onto unprepared anterior teeth, are an interesting and conservative alternative to conventional ceramic and composite resin veneers in the anterior region. This literature review aimed to summarize the available laboratory and clinical data on ceramic partial laminate veneers. An electronic search of the MEDLINE/PubMed, EBSCO, and Web of Science databases was conducted. The keywords used were "partial veneer," "partial laminate veneer," "ceramic fragment," and "sectional veneer." The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed. The risk of bias in the included studies was assessed using the QUIN tool. Results: Of the 266 identified articles, only 16 met the inclusion criteria (ten clinical reports, five laboratory studies, and one retrospective clinical study). To date, no randomized controlled clinical trials have been conducted. Most laboratory studies displayed a low risk of bias, with partial laminate veneers rendering adequate strength and color stability. Clinical reports have shown large variability in material selection, luting, and finishing/polishing protocols. Conclusions: Low-quality evidence is available for ceramic partial laminate veneers. Available data from laboratory studies suggest good mechanical and optical performances comparable to those of conventional ceramic and composite resin veneers. Further clinical studies with longer follow-up periods are warranted. abstract_id: PUBMED:33288325 OCT evaluation of the internal adaptation of ceramic veneers depending on preparation design and ceramic thickness. Objectives: In-vitro evaluation of the influence of preparation design and thickness of ceramic veneers on the interfacial bond using optical coherence tomography (OCT). Methods: Sixty-four central incisors were randomly assigned to four preparation designs differing from no to complete dentine exposure (n = 16 each): non-prep (NP), minimal-invasive (MI, no dentine exposure), semi-invasive (SI, 50% dentine) and invasive (I, 100% dentine). Ceramic veneers (IPS InLine Veneer) of two thicknesses (0.2-0.5 mm (T1) and >0.5-1.2 mm (T2)) were etched, silanized, and adhesively luted (Optibond FL, Variolink Veneer). After water storage (37 °C, 21 d), thermocycling (2000 cycles, 5°-55 °C), and mechanical loading (2 + 1 million cycles, 50 + 100 N), specimens were imaged by spectral-domain OCT (Telesto II, Thorlabs). Adhesive defects at the ceramic-composite and tooth-composite interfaces were quantified on 35 equidistantly distributed OCT B-scans (length, %). Statistical differences were verified with the Wilcoxon and Mann-Whitney U tests (α = 0.05). Results: Adhesive defects appeared in all groups at both interfaces, albeit to differing extents (0.1-31.7%). NP and MI veneers showed no significant differences at the interfaces (p_i > 0.05). In groups SI and I, significantly more adhesive defects appeared at the tooth-composite compared to the veneer-composite interface (p_i ≤ 0.039). The following preparation designs and veneer thicknesses showed differences (p_i ≤ 0.021): Veneer-composite: NP-T1 < I-T1, MI-T1 < I-T1, I-T1 > I-T2; Tooth-composite: NP-T1 < SI-T1, NP-T1 < I-T1, NP-T2 > MI-T2, MI-T1 < SI-T1, MI-T1 < I-T1, SI-T1 < I-T1, MI-T2 < SI-T2, MI-T2 < I-T2. Significance: The interface adhesion of ceramic veneers was influenced by the preparation design and the veneer thickness.
A ceramic thickness of at least 0.5 mm and a preparation without exposing dentine is advantageous for the interfacial bond. abstract_id: PUBMED:28074386 The effect of ceramic thickness and resin cement shades on the color matching of ceramic veneers in discolored teeth. The objective of this study was to analyze the effects of ceramic material thickness and resin cement shade on the color matching of ceramic veneers against gray tooth structures. Seventy-two artificial maxillary right central incisor teeth (C2 shade) were prepared according to veneer tooth preparation in practice. Ceramic materials (LT, A2 shade, IPS e.max Press) were selected to fabricate the 0.50- and 0.75-mm thick veneers at the body region. The ceramic veneer specimens were bonded to the artificial teeth with 6 shades of resin cements (Variolink Veneer: shades HV+3, LV-2, LV-3; and RelyX™ Veneer: shades WO, TR, A3). A clinical spectrophotometer (Crystaleye, Olympus) was used to measure the color parameters. The color differences (ΔE values) between the ceramic veneers and the A2 shade tab (Vitapan Classical, Vita) and the C*ab values were calculated. The results of three-way ANOVA indicated that the ΔE values between ceramic veneer and A2 shade tab differed significantly with the thickness of the ceramic material, the shade of the resin cement, and the measuring region (p < 0.001). The 0.50-mm-thick ceramic veneers exhibited significantly higher ΔE values compared with veneers that were 0.75-mm thick. Tukey's HSD test showed that the average ΔE values in the body region were significantly smaller than those in the cervical and incisal regions. The color matching of ceramic veneers was significantly influenced not only by the ceramic thickness and the resin cement shades but also by the tooth region. abstract_id: PUBMED:2646440 Microleakage patterns of porcelain and castable ceramic laminate veneers. This investigation evaluated the microleakage of conventional porcelain and Dicor castable ceramic laminate veneers bonded either entirely on acid-etched enamel or with the cervical margins on dentin. Three brands of light-activated composite resin cement systems were used and compared. Laminate veneers were bonded to extracted human maxillary central incisors and the extent of the marginal microleakage was measured. The findings indicated minimal marginal microleakage under both types of ceramic veneers bonded to all-enamel preparations. A markedly greater leakage at the dentin-composite resin interface was observed in veneers with cervical margins placed on dentin. abstract_id: PUBMED:24605200 How will surface treatments affect the translucency of porcelain laminate veneers? Purpose: The purpose of this study was to evaluate whether surface treatments affect the translucency of laminate veneers with different shades and thicknesses. Materials And Methods: A total of 224 disc-shaped ceramic veneers were prepared from A1, A3, HT (High Translucent) and HO (High Opaque) shades of IPS e.max Press (Ivoclar Vivadent) with 0.5 mm and 1.0 mm thicknesses. The ceramics were divided into four groups for surface treatments. Group C: no surface treatment; Group HF: etched with hydrofluoric acid; Group SB: sandblasted with 30-µm Al2O3; and Group L: irradiated with an Er:YAG laser. A translucent shade of resin cement (Rely X Veneer, 3M ESPE) was chosen for cementation. The color values of the veneers were measured with a colorimeter and translucency parameter (TP) values were calculated.
A three-way ANOVA with interactions for TP values was performed and Bonferroni tests were used when appropriate (α=0.05). Results: There were significant interactions between the surface treatments, ceramic shades and thicknesses (P=.001). For the 0.5-mm-thick specimens there were significant differences after the SB and L treatments. There was no significant difference between the HF and C treatments for any shades or thicknesses (P>.05). For the 1-mm-thick ceramics, there was only a significant difference between the L and C treatments for the HT shade ceramics (P=.01). There were also significant differences between the SB and C treatments, except for the HO shades (P=.768). Conclusion: The SB and L treatments caused laminate veneers to become more opaque; however, HF treatment did not affect the TP values. When the laminate veneers were thinner, both the shade of the ceramic and the SB and laser treatments had a greater effect on the TP values. abstract_id: PUBMED:30213524 Influence of light-cured luting agents and associated factors on the color of ceramic laminate veneers: A systematic review of in vitro studies. Objective: The luting agent plays a significant role in the appearance of ceramic laminate veneers (CLV), allowing improved/stable shade matching with adjacent teeth. A systematic review was conducted to investigate the influence of light-cured luting agents and color-associated factors, aiming to draw guidelines for stable shade matching of CLV. The paper also gives an overview of the compositional characteristics of all luting agents and ceramic systems whose performance is presented herein. Methods: A search of in vitro studies that quantitatively investigated the influence of light-cured luting agents on the color of CLV was conducted. The PubMed/MedLine, Cochrane Library, SciVerse Scopus and Web of Science databases were explored until December 2017 with no year limit. Data regarding the effect of light-cured luting agent, ceramic system, effect of background, accelerated aging and surface treatments on color change were collected. A meta-analysis was not possible due to heterogeneity of the data. Results: After removal of duplicates, 3630 studies were identified, 48 were selected for full-text analysis, and the 21 remaining papers met the inclusion criteria and were included in this systematic review. Significance: Considering the shade aspects of luting agents, translucency and value showed the greatest visible color differences for CLVs. The effect of luting agent shade on the color of veneers is greatly affected by ceramic thickness and opacity. Answer: Yes, surface treatments do affect the optical properties of ceramic veneers. Various studies have investigated the impact of different surface treatments on the color and translucency of ceramic veneers. For instance, one study found that surface treatments such as etching with hydrofluoric acid, airborne-particle abrasion, and laser irradiation led to color changes in ceramics, especially as the ceramics became thinner (PUBMED:24840909). Another study reported that sandblasting and laser treatments caused laminate veneers to become more opaque, although hydrofluoric acid treatment did not significantly affect translucency parameter (TP) values (PUBMED:24605200).
The color difference of veneers could be noticeable with different cements when the material thickness increased from 0.3 to 0.5 mm, indicating that both the thickness of the ceramic material and the type of resin cement used can influence the optical properties of the veneers (PUBMED:28590094). Additionally, the optical properties of porcelain veneers for tetracycline-stained teeth were found to be affected by the type of ceramic material used, with feldspathic veneers with an opaque layer providing better chromatic properties than glass ceramic veneers, although the translucency of feldspathic veneers was generally poorer (PUBMED:34041885). Furthermore, the effect of ceramic thickness and resin cement shades on the color matching of ceramic veneers in discolored teeth was significant, with thicker ceramics and different resin cement shades influencing the final color outcome (PUBMED:28074386). Lastly, a systematic review of in vitro studies highlighted that the shade aspects of luting agents, such as translucency and value, showed the greatest visible color differences for ceramic laminate veneers, and these effects were greatly affected by ceramic thickness and opacity (PUBMED:30213524). In summary, surface treatments, along with factors such as ceramic thickness, resin cement shade, and the type of ceramic material, can significantly affect the optical properties of ceramic veneers.
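As context for the ΔE and TP figures quoted in this answer, color differences in these studies are expressed in CIELAB coordinates. The classic CIE76 formula is shown below for orientation; it is an assumption here, since the abstracts do not state which ΔE variant each study used (PUBMED:34041885, for instance, reports the newer ΔE00 metric):

\[
\Delta E = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}
\]

The translucency parameter (TP) reported in PUBMED:24605200 is conventionally the ΔE between readings of the same specimen over white and over black backgrounds, so a larger TP indicates a more translucent ceramic.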
Instruction: Does professional counseling improve infant home monitoring? Abstracts: abstract_id: PUBMED:12416288 Does professional counseling improve infant home monitoring? Evaluation of an intensive instruction program for families using home monitoring on their babies. Unlabelled: Home apnea/bradycardia monitoring (HM) was widely used in the 80s and 90s in the hope that Sudden Infant Death (SID) could be prevented. As no evidence could be found in favor of this hypothesis, HM today is restricted to symptomatic preterm infants, infants with cardiorespiratory problems and infants after an apparent life threatening event (ALTE). HM can impose substantial stress on families, especially mothers. We introduced an intensive counseling program (IC) for home monitoring and evaluated its effects using a questionnaire. The control group consisted of families who were using a home monitor before the IC program was instituted, and who were instructed according to the standard protocol given by the "Austrian SIDS-Consensus". The IC program consisted of standard monitor instruction as well as instruction in infant cardiopulmonary resuscitation, and was extended by providing intensive support at the beginning of and throughout the monitoring period, with special regard to the monitor weaning phase. Results: Fifty-eight percent of the 90 questionnaires of the IC families and 66% of the 70 questionnaires of the control families were returned. Home monitoring was considered to be reassuring by more than 60% of the families. We found the following differences between the two groups: parents taking part in the IC program liked the instruction better, were less stressed by the monitor and reacted less aggressively to monitor alarms. They used the monitor predominantly during sleeping periods and for a shorter period of time (6 vs. 7 1/2 months). IC could not reduce SID-related anxiety or change the feelings associated with the use of the home monitor. Conclusion: Intensive counseling leads to better use of home monitoring and reduces parents' stress. Even if home monitoring is used less frequently today, families should still be instructed and counselled intensively. abstract_id: PUBMED:12671135 Apnea, sudden infant death syndrome, and home monitoring. More than 25 years have elapsed since continuous cardiorespiratory monitoring at home was suggested to decrease the risk of sudden infant death syndrome (SIDS). In the ensuing interval, multiple studies have been unable to establish the alleged efficacy of its use. In this statement, the most recent research information concerning extreme limits for a prolonged course of apnea of prematurity is reviewed. Recommendations regarding the appropriate use of home cardiorespiratory monitoring after hospital discharge emphasize limiting use to specific clinical indications for a predetermined period, using only monitors equipped with an event recorder, and counseling parents that monitor use does not prevent sudden, unexpected death in all circumstances. The continued implementation of proven SIDS prevention measures is encouraged. abstract_id: PUBMED:2652374 Home monitoring of infants: a literature review (1972-1987). In the Netherlands, home-monitoring of infants has been used since 1978, although not as widely as in other countries. For years it has been discussed whether a cardio-respiratory monitor can prevent SIDS. Controversies still exist about the criteria for home-monitoring and the reliability of the device.
In this article, the history of debates on home-monitoring since 1972 is reviewed as well as current opinions and controversies. abstract_id: PUBMED:3397613 Home monitoring in children at risk of sudden infant death. Electronic home monitoring has been advocated as a means of preventing sudden infant death syndrome (SIDS). Special emphasis has been placed on apnoea monitoring. The evaluation of the preventive aspect of home monitoring is exceedingly difficult considering the ethical and statistical implications. Major problems arise as the monitoring is done by the parents in a situation different from the one in the intensive care unit. The relative advantages of pulse oximetry and inductance plethysmography are outlined in comparison with other techniques like impedance pneumography, apnoea mattress and apnoea capsule. abstract_id: PUBMED:1464198 Home monitoring. This article addresses the controversies surrounding home monitoring for the treatment of infant apnea and in the prevention of sudden infant death syndrome. A program for the use of monitoring is outlined for the practicing physician, and the benefits and deficiencies of home cardiorespiratory recording are discussed. abstract_id: PUBMED:31154995 The Healthy Baby Flipbook: piloting home-based counseling for refugee mothers to improve infant feeding and water, sanitation, and hygiene (WASH) practices. Problems in growth and undernutrition manifest in early infancy, with suboptimal breastfeeding and inadequate complementary feeding remaining strong risk factors for chronic undernutrition in infants. No published studies exist on educational interventions to improve infant feeding practices among refugees or displaced persons in low and middle-income (LMIC) settings. The objective of this study was to create and pilot educational materials for home-based counseling of refugee mothers along the Thailand-Myanmar border to improve appropriate infant feeding and water, sanitation, and hygiene (WASH) behaviors. Mothers of infants received counseling on appropriate infant feeding and WASH practices on a monthly basis for a total of six months from infant age three months until nine months. Educational materials were designed to feature a basic script for health workers and photos of locally available, appropriate foods. Of the 20 mothers participating in this pilot, infant feeding and WASH behaviors improved within 1 to 2 months of the first visit, including exclusive breastfeeding, minimum acceptable diet, and safe disposal of infant stool. This pilot demonstrates improvement in maternal infant feeding and WASH practices in a small set of refugee mothers, providing evidence for counseling measures to improve infant health in vulnerable populations. abstract_id: PUBMED:8552015 Recent advances in home infant apnea monitoring. Appropriate and effective nursing intervention is an essential element in determining how the family responds to the monitor in the home. Accurate assessment of the family system and dynamics provides the basis for a plan of care. The family's and infant's specific needs must be addressed. Careful implementation of the plan allows for changes and unexpected outcomes. Frequent evaluation of monitoring is necessary to determine if a change in the plan of care is needed. Recent changes in home apnea monitoring technology are rapidly altering the care of infants at risk for apnea and SIDS. 
The advent of the documented or recording monitor has the potential to demystify the events occurring while the infant is being monitored. Parents can get answers about their infant as quickly as a telephone call. The clinician can differentiate between a true and a false alarm and reassure the parents accordingly. Documenting false events and shallow-breathing alarms will potentially reduce the duration of monitoring, decreasing costs to the entire health care system. Documented monitoring is a valuable tool for nurses. For the staff nurse, clinical observation can be validated through trending, and printouts of events can be produced at the bedside. For the advanced practice nurse, management of care can become more efficient through remote monitoring via modem. Patient teaching can be followed with immediate feedback. Monitors may assist in allaying anxiety in families who have lost children to SIDS or had an unexpected death in a previous sibling. Families may feel less anxious about having an "at risk" child in the home if the events are continuously being recorded. Length of hospital stay may decrease initially, with fewer rehospitalizations. Nursing research in these areas is necessary. Evaluating events occurring in the home may also help shed light on the enigma of SIDS. Several SIDS deaths have been recorded on documented monitors. If we can pinpoint exactly what takes place prior to and immediately after a SIDS episode, the enigma that has had physicians puzzled for so long may finally begin to unravel. abstract_id: PUBMED:1295497 Role of home monitoring in the context of sudden infant death prevention. Current methods, indications, monitoring. Some infants are cared for with a home monitoring system during their first year of life. An international clinical consensus has been reached that proposes this technique mainly for infants who have presented an apparent life-threatening event or for ex-premature infants with bradycardia or apnea, rather than for siblings of sudden infant death syndrome victims or other infants. In any case, this monitoring should begin only after a complete clinical evaluation of the infant and thorough education of the parents in the use of the device. Many types of devices are used. The most effective is cardio-respiratory monitoring. Some devices include a processor and record the alarms. The need to see or call the medical team to decode them allows close collaboration between the family and the clinical team. Knowledge of the alarms and the circumstances in which they have occurred helps the medical team to propose the withdrawal of home monitoring. Thus, sometimes preventive and sometimes prophylactic, this device can provide optimal help. abstract_id: PUBMED:2378691 Effectiveness of individual counseling by professional and peer helpers for family caregivers of the elderly. This study of the effectiveness of individual counseling for daughters and daughters-in-law (N = 87) who were the primary caregivers for frail elderly parents revealed that participants who received professional counseling demonstrated significantly better outcomes than a no-treatment control group in regard to subjective well-being, level of psychiatric symptomatology, and perceived change in aspects of the caregiver-care-receiver relationship. Participants who received peer counseling demonstrated similar gains but did not improve significantly more than did control subjects in subjective well-being.
Both forms of counseling had no significant effect on caregivers' formal and informal social support networks. abstract_id: PUBMED:7658267 Assessment of compliance with home cardiorespiratory monitoring in infants at risk of sudden infant death syndrome. Collaborative Home Infant Monitoring Evaluation (CHIME). Objectives: Documented monitoring was used to evaluate prospectively (1) the level of compliance among infants in whom cardiorespiratory monitoring was clinically indicated and (2) factors that might influence compliance: diagnosis, socioeconomic status, maternal age and education, and alarms. Study Design: Sixty-seven infants (51% female, 49% term) were sequentially enrolled, and monitoring was prescribed for the following indications: siblings of sudden infant death syndrome victims (16%), apnea of prematurity (45%), and apparent life-threatening events or apnea of infancy (39%). Demographic data, alarm and event data, and a summary report of monitor use from the first monitor download were obtained. Results: Maternal age, education, and insurance status did not differ significantly by indication for monitoring. The median number of monitor alarms per 10 hours of use was 0.7 for apnea or bradycardia and 0.6 for loose lead alarms. Monitors were available for use in the home from 2 to 106 days (median, 11 days). Median hours of monitor use per full day in the home was 15.5 hours. Of 67 infants, 58 used the monitor for at least part of every day in the home. The number of hours of monitor use per day did not differ significantly by diagnostic category, chronologic age, alarms, maternal age, education, or insurance type. This study population of infants at increased risk of sudden infant death syndrome had excellent compliance; 75% of the infants were monitored more than 10.5 hours per day, and 25% were monitored more than 21 hours per day. Conclusions: Documented monitoring provides an objective measure of compliance. These data provide a potential goal for level of compliance with home cardiorespiratory monitoring. Answer: Yes, professional counseling does improve infant home monitoring. An intensive counseling program for families using home monitoring on their babies was introduced and evaluated through a questionnaire-based study. The results showed that parents who participated in the intensive counseling (IC) program reported liking the instruction better, feeling less stressed by the monitor, and reacting less aggressively to monitor alarms. They also used the monitor predominantly during sleeping periods and for a shorter period of time compared to the control group (6 vs. 7 1/2 months). However, the IC program did not reduce SID-related anxiety or change the feelings associated with the use of the home monitor. The conclusion drawn from this study was that intensive counseling leads to better use of home monitoring and reduces parents' stress, suggesting that families should still be instructed and counseled intensively even if home monitoring is used less frequently today (PUBMED:12416288).
Instruction: Can EBUS-TBNA provide an accurate diagnosis in patients found to have enlarged or FDG-avid lymph nodes during surveillance of previously treated lung cancer? Abstracts: abstract_id: PUBMED:25887006 Can EBUS-TBNA provide an accurate diagnosis in patients found to have enlarged or FDG-avid lymph nodes during surveillance of previously treated lung cancer? A retrospective study. Background: Reliable pathologic sampling methods are pivotal in the management of lung cancer patients who have previously undergone either curative-intent or palliative treatment. Early diagnosis of localized disease recurrence may facilitate further curative treatment, and rebiopsy at the point of disease progression during palliative treatment can inform further management. This study assessed the performance of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) lymph node sampling in a cohort of such patients. Methods: A retrospective analysis was performed of a prospectively maintained database of EBUS-TBNA procedures at the University Hospital of South Manchester from August 2010 to August 2013. All patients with previously treated lung cancer and suspected nodal metastases (defined as nodal enlargement on CT >10 mm in the short axis or abnormal FDG avidity on PET-CT) were included. Results: The sensitivity of EBUS-TBNA, on a per patient and per lymph node basis, was 91.4% and 91.8%, respectively (CI, 80.8%-96.5%). The corresponding NPV was 87.5% and 89.7%, respectively (CI, 76.4%-95.9%). There were no major complications and 3 (5.4%) minor complications. From the malignant EBUS-TBNA samples, the NSCLC-NOS rate was 3.2% and adequate tissue for molecular testing was provided in 100% of the cases (16/16). Conclusions: EBUS-TBNA is a safe and highly effective diagnostic procedure in suspected nodal metastases after previous treatment for lung cancer. The sensitivity and NPV are equivalent to those of EBUS-TBNA in the diagnosis of "new" lung cancer. abstract_id: PUBMED:29179511 The role of endobronchial ultrasound elastography in the diagnosis of mediastinal and hilar lymph nodes. Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has been widely used for diagnosis and mediastinal lymph node staging in patients with suspected lung cancer. Ultrasound elastography is a novel sonographic technique that can evaluate tissue compressibility. The aim of the present study was to investigate the diagnostic yield of elastography for differentiating malignant and benign mediastinal lymph nodes. Conventional EBUS B-mode features, including size, shape, border distinction, echogenicity, central hilar structure with central blood vessel, and coagulation necrosis, were also evaluated. The ultrasonic features were compared with the pathological results from EBUS-TBNA. 133 lymph nodes in 60 patients were assessed. Elastography displayed the highest area under the curve (AUC) (type 3 versus type 1: AUC, 0.825; 95% confidence interval [CI], 0.707-0.910) with an impressive sensitivity (100%) and an acceptable specificity (65%). The combined model covering the four positive criteria (elastography, heterogeneity, size, and shape) showed that the odds ratio for malignancy was 9.44 with a 95% CI of 3.99 to 22.32 (p < 0.0001). The combined model was superior to elastography alone (AUC, 0.851; sensitivity, 89.89%; specificity, 72.73%; p < 0.0001).
This prospective study showed that elastography is a feasible technique for classifying mediastinal lymph nodes, especially in combination with conventional EBUS imaging. abstract_id: PUBMED:27113372 Negative Predictive Value of Granulomas on EBUS-TBNA in Suspected Extrathoracic Malignancy. Purpose: Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a well-established diagnostic tool for lung cancer, sarcoidosis, and suspected metastatic extrathoracic malignancy. EBUS-TBNA carries a high diagnostic yield, but its negative predictive value (NPV) requires further clarification. Methods: We reviewed EBUS-TBNA at our cancer center from 2008 to 2015. We identified negative diagnostic samples for adenopathy suspected to represent metastatic disease from extrathoracic malignancy. Results: We reviewed 529 EBUS-TBNAs. Ninety patients underwent EBUS-TBNA sampling of the hilum and/or mediastinum (121 nodes, 14 masses) for suspected extrathoracic malignancy. Thirty-seven patients had negative samples (lymph node, granulomas or non-diagnostic specimens). The overall NPV was 98%. Granulomas (11 patients, 25 nodes) seen on histology had a 100% NPV, including those that were FDG-PET (fluorodeoxyglucose positron emission tomography) avid (n = 14 nodes). Conclusion: Negative EBUS-TBNA in patients with extrathoracic malignancy and suspected secondary hilar or mediastinal metastases confers a high NPV, especially if granulomas are seen on histology. Larger prospective investigations are needed to confirm the high NPV of EBUS-TBNA with granulomas in extrathoracic malignancies. abstract_id: PUBMED:33547554 Impact of EBUS-TBNA in addition to [18F]FDG-PET/CT imaging on target volume definition for radiochemotherapy in stage III NSCLC. PURPOSE/INTRODUCTION: [18F]FDG-PET/CT is the standard imaging technique for radiation treatment (RT) planning in locally advanced non-small cell lung cancer (NSCLC). The purpose of this study was to examine the additional value of endobronchial-ultrasound transbronchial needle aspiration (EBUS-TBNA) over standard PET/CT for mediastinal lymph-node (LN) staging and its impact on clinical target volume (CTV). Materials And Methods: All consecutive patients with primary stage III NSCLC who underwent [18F]FDG-PET/CT and EBUS-TBNA prior to RT were analyzed from 12/2011 to 06/2018. LN-stations were assessed by an expert radiologist and a nuclear medicine physician. CTV was evaluated by two independent radiation oncologists. LNs were grouped with increasing distance along the lymphatic chains from the primary tumor into echelon-1 (ipsilateral hilum), echelon-2 (LN-station 7 and ipsilateral 4), and echelon-3 (remaining mediastinum and contralateral hilum). Results: A total of 675 LN-stations, of which 291 were positive for tumor cells, were sampled by EBUS-TBNA in 180 patients. The rate of EBUS-positive LNs was 43% among all sampled LNs. EBUS-positivity in EBUS-probed LNs decreased from 85.8% in echelon-1 LNs to 42.4%/9.6% in echelon-2/-3 LNs, respectively (p < 0.0001, Fisher's exact test). The false discovery rate of PET in comparison with EBUS results rose from 5.3% in echelon-1 to 32.9%/69.1% in echelon-2/-3 LNs, respectively (p < 0.0001, Fisher's exact test). Sensitivity and specificity of FDG-PET/CT ranged from 85 to 99% and 67 to 80% for the different echelons. In 22.2% of patients, EBUS-TBNA findings triggered changes to the treated CTV, compared with contouring algorithms based on FDG-avidity as the sole criterion for inclusion.
CTV was enlarged in 6.7% of patients due to EBUS-positivity in a PET-negative LN-station and reduced in 15.5% by exclusion of an EBUS-negative but PET-positive LN-station. Conclusion: The false discovery rate of [18F]FDG-PET/CT increased markedly with distance from the primary tumor. Inclusion of systematic mediastinal LN mapping by EBUS-TBNA in addition to PET/CT has the potential to increase the accuracy of target volume definition, particularly in echelon-3 LNs. EBUS-TBNA is recommended as an integral part of staging for radiochemotherapy in stage III NSCLC. abstract_id: PUBMED:21364919 Diagnostic value of EBUS-TBNA for lung cancer with non-enlarged lymph nodes: a study in a tuberculosis-endemic country. Background: In tuberculosis (TB)-endemic areas, contrast-enhanced computed tomography (CT) and positron emission tomography (PET) findings of lung cancer patients with non-enlarged lymph nodes are frequently discrepant. Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) enables real-time nodal sampling and thereby improves nodal diagnosis accuracy. This study aimed to compare the accuracy of nodal diagnosis using EBUS-TBNA and PET. Methods: We studied 43 lung cancer patients with CT-defined non-enlarged mediastinal and hilar lymph nodes and examined 78 lymph nodes using EBUS-TBNA. Results: The sensitivity, specificity, positive predictive value, and negative predictive value of EBUS-TBNA were 80.6%, 100%, 100%, and 85.7%, respectively. PET had low specificity (18.9%) and a low positive predictive value (44.4%). The diagnostic accuracy of EBUS-TBNA was higher than that of PET (91% vs. 47.4%; p<0.001). Compared to CT-based nodal assessment, PET yielded a positive diagnostic impact in 36.9% of nodes, a negative diagnostic impact in 46.2% of nodes, and no diagnostic impact in 16.9% of nodes. Patients with lymph nodes showing a negative PET diagnostic impact had a high incidence of previous pulmonary TB. Multivariate analysis indicated that detection of hilar nodes on PET was an independent predictor of a negative diagnostic impact of PET. Conclusion: In a TB-endemic area, when lymph nodes are non-enlarged on CT, the negative diagnostic impact of PET limits its clinical usefulness for nodal staging; therefore, EBUS-TBNA, which facilitates direct diagnosis, is preferred. abstract_id: PUBMED:28469313 Evaluation of cytology in lung cancer diagnosis based on EBUS-TBNA. Aims: Endobronchial ultrasound (EBUS) is a relatively new modality that can be used to guide transbronchial needle aspiration (TBNA) of mediastinal lymph nodes. At present, research on the sensitivity and specificity of cytopathology based on EBUS-TBNA is scarce; we therefore evaluated the value of cytology based on EBUS-TBNA in this article. Materials And Methods: We reviewed the 379 cases that underwent EBUS-TBNA in Shanghai Pulmonary Hospital from April 2010 to May 2011. Discarding the 139 cases with insufficient cells, we analyzed the remaining 240 cases that had enough cells on the smears. Statistical Analysis Used: The Statistical Package for the Social Sciences version 15.0 (SPSS Inc., Chicago, IL) was used for data analysis. A P value of <0.05 was considered significant. Results: We found that the sensitivity and specificity of cytologic diagnosis reached 94.52% and 95.12%, respectively. The sensitivity for squamous cell carcinoma, adenocarcinoma, and small cell carcinoma was up to 88.24%, 100.00%, and 96.00%, respectively.
The specificity for squamous cell carcinoma, adenocarcinoma, and small cell carcinoma reached 100.00%, 100.00%, and 99.25%, respectively. Conclusion: Here, we report that cytological examination of EBUS-TBNA samples should be acknowledged as a simple, fast, and safe procedure that provides reasonable sensitivity and specificity for diagnosis in lung cancer. abstract_id: PUBMED:19096305 EBUS-TBNA for the clarification of PET positive intra-thoracic lymph nodes-an international multi-centre experience. Introduction: To determine the sensitivity and accuracy of endobronchial ultrasound guided transbronchial needle aspiration (EBUS-TBNA) for clarification of the nature of fluorodeoxyglucose-positron emission tomography (FDG) positive hilar and/or mediastinal lymph nodes in patients with (suspected) lung cancer. Methods: All consecutive patients who had undergone EBUS-TBNA alone for assessment of abnormal FDG uptake in hilar and/or mediastinal lymph nodes between January 2005 and August 2007 were reviewed. Results: One hundred and nine patients underwent EBUS-TBNA of 127 positron emission tomography positive lymph nodes. Hilar (station 10 or 11) nodes (N1 or N3) were aspirated in 26 patients and mediastinal (stations 2, 4, 7) nodes (N2 or N3) in 90 patients. In 7 patients both hilar and mediastinal nodes were sampled. There were no procedure-related complications. Malignancy was detected in 77 (71%) cases. Thirty-two patients were tumor negative by EBUS-TBNA; subsequent surgical biopsy in 19 showed malignancy in 7. In four cases the false negative result was due to sampling error and in three cases due to detection error. In 13 cases surgical staging was not performed, although long-term follow-up in 3 showed no evidence of malignancy. The sensitivity and accuracy of EBUS-TBNA for malignancy in patients with reference pathology were 91% and 92%, respectively. The negative predictive value was 60%. If the 10 cases for which confirmatory surgical staging was not performed are assumed to be false negative results, overall sensitivity and accuracy were 82% and 84%, respectively. Conclusions: EBUS-TBNA offers an effective, accurate, minimally invasive strategy for evaluating FDG-avid hilar and mediastinal lymph nodes. However, negative findings should be confirmed by surgical staging. abstract_id: PUBMED:23411844 The diagnostic utility of real-time EBUS-TBNA for hilar and mediastinal lymph nodes in conventional TBNA negative patients. Background: There are many causes of mediastinal and hilar lymphadenopathy, such as neoplasms, granulomatous diseases, infections and reactive hyperplasia. Nowadays, the popularity of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is increasing in the diagnosis of mediastinal and hilar lymphadenopathy. We aimed to investigate the diagnostic value of EBUS-TBNA in patients with mediastinal and/or hilar lymphadenopathy and previously conventional TBNA-negative or inadequate results. Methods: Retrospective analysis was performed in 64 patients with previously conventional TBNA-negative or inadequate results who consequently underwent EBUS-TBNA between July 2007 and August 2011. Results: One hundred and twenty-three lymph nodes were sampled by EBUS-TBNA in 64 patients with no complications. In the 63 (98.4%) cases with adequate results, the sensitivity, diagnostic accuracy, and NPV of EBUS-TBNA per patient were 90.5%, 90.6%, and 66.6%, respectively.
In a total of 122 (99.1%) adequately sampled lymph nodes, the diagnostic sensitivity, accuracy, and NPV of EBUS-TBNA per nodal station were 87.8%, 90.1%, and 65.7%, respectively. Non-small cell lung cancer (NSCLC) (n = 21, 33.3%) and sarcoidosis (n = 16, 25.3%) were the most common malignant and benign diseases in the patients with adequate samples by EBUS-TBNA. The relationships of diagnostic accuracy with the number of lymph nodes sampled, number of passes per node, or size of lymph nodes were all insignificant (p > 0.05). Conclusion: EBUS-TBNA is a sensitive and accurate method for the assessment of mediastinal and hilar lymph nodes in patients with conventional TBNA-negative results. abstract_id: PUBMED:24822097 TBNA with and without EBUS: a comparative efficacy study for the diagnosis and staging of lung cancer. Introduction: Conventional transbronchial needle aspiration (TBNA) has been around for over 30 years with sensitivities approaching 70-90%. Recent development of endobronchial ultrasound (EBUS) TBNA demonstrated even higher sensitivities among experts. However, EBUS-TBNA is more costly and less available worldwide than conventional TBNA. A comparison study to determine the efficacy of TBNA with and without EBUS in the diagnosis and staging of lung cancer is described. Methods: A total of 287 patients with mediastinal and hilar lymphadenopathy presenting for diagnosis and/or staging of lung cancer at enrolling institutions were included. Equal numbers of punctures were performed at the target lymph node stations using conventional TBNA techniques followed by EBUS-TBNA at the same sites. Patients and puncture sites that were biopsied by both methods and were positive for lung cancer were compared to establish the efficacy of each technique on the same patients. Results: In 253 patients at least one pair of specimens was obtained by conventional TBNA and EBUS-TBNA. In 83 of these patients malignancy was diagnosed. Among the 83 patients with a diagnosis of a malignancy there was no significant difference in the diagnostic yield of conventional TBNA versus EBUS-TBNA. When comparing diagnosis of malignancy for each lymph node sampled, there was a significantly greater number of positive (diagnostic for malignancy) lymph nodes sampled by EBUS-TBNA. Conclusions: Recommendations for current practice depend on individual centers and bronchoscopist comfort level with TBNA (with or without EBUS). In our study, no significant difference was seen between the techniques for the diagnosis and staging of individual patients. abstract_id: PUBMED:24267766 Diagnostic performance of EBUS-TBNA in patients with mediastinal lymphadenopathy and extrathoracic malignancy. Introduction: There are limited data about the diagnostic performance of EBUS-TBNA in patients with mediastinal lymphadenopathy and extrathoracic malignancy. Methods: From January 2007 to July 2011, EBUS-TBNA was performed in 68 patients with a history of extrathoracic malignancy (current or past) and suspected mediastinal lymph node metastases. Results: Thirty-one patients had a final diagnosis of cancer. In nineteen patients, the same histology was identified in the mediastinal nodes as in their prior extrathoracic cancer (colorectal cancer, esophageal cancer and lymphoma). In 12, the diagnosis was not "as expected" (ten lung cancers, one colorectal cancer, one unidentified cancer).
Among 37 patients without diagnosis, biopsies in 27 showed normal lymphoid material, two had non-specific inflammation and eight had no contributory results. Of note, procedures were reported to have been more difficult in these patients. Conclusions: Diagnostic performance of EBUS-TBNA in the context of extrathoracic malignancy is very variable depending on the origin of the cancer. Nevertheless, a diagnosis was reached in almost 50% of cases. These results underline the need to select the indications for EBUS-TBNA in extrathoracic cancer carefully. Answer: Yes, EBUS-TBNA (endobronchial ultrasound-guided transbronchial needle aspiration) can provide an accurate diagnosis in patients found to have enlarged or FDG-avid lymph nodes during surveillance of previously treated lung cancer. A retrospective study found that EBUS-TBNA had a sensitivity of 91.4% on a per-patient basis and 91.8% on a per-lymph-node basis, with a corresponding negative predictive value (NPV) of 87.5% and 89.7%, respectively. This indicates that EBUS-TBNA is a highly effective diagnostic procedure in suspected nodal metastases after previous treatment for lung cancer, with sensitivity and NPV equivalent to its use in the diagnosis of new lung cancer cases. Additionally, the study reported no major complications and a minor complication rate of 5.4%. Importantly, from the malignant EBUS-TBNA samples, adequate tissue for molecular testing was provided in 100% of the cases (16/16) (PUBMED:25887006).
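The sensitivity, specificity, predictive values, and accuracy quoted throughout these EBUS-TBNA abstracts all derive from the same 2x2 confusion-matrix arithmetic. The minimal Python sketch below shows that arithmetic; the cell counts are an assumption chosen so the output reproduces the headline figures of PUBMED:21364919, whose abstract reports only the derived percentages, not the underlying table.

```python
# Minimal sketch of the diagnostic metrics quoted in the EBUS-TBNA abstracts.
# The 2x2 cell counts below are assumed, back-solved from the reported
# percentages of PUBMED:21364919; they are not independently published data.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the standard 2x2-table metrics used for nodal staging tests."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among diseased nodes
        "specificity": tn / (tn + fp),  # true negatives among benign nodes
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

if __name__ == "__main__":
    # 78 sampled nodes: 29 true positives, 7 false negatives, 42 true
    # negatives, 0 false positives. EBUS-TBNA cytology rarely yields false
    # positives, which is why several abstracts report 100% specificity/PPV.
    metrics = diagnostic_metrics(tp=29, fp=0, tn=42, fn=7)
    for name, value in metrics.items():
        print(f"{name}: {value:.1%}")
    # Prints sensitivity 80.6%, specificity 100.0%, NPV 85.7%, accuracy 91.0%,
    # matching the figures reported in the abstract.
```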
Instruction: Are there higher pedestrian fatalities in larger cities? Abstracts: abstract_id: PUBMED:27027479 Are there higher pedestrian fatalities in larger cities?: A scaling analysis of 115 to 161 largest cities in the United States. Objective: In 2012, 4,743 pedestrians were killed in the United States, representing 14% of total traffic fatalities. The number of pedestrians injured was higher at 76,000. Therefore, 36 of the 52 largest cities in the United States have adopted a citywide target of reducing pedestrian fatalities. The number of cities adopting the reduction goal increased rapidly, with 8 more cities joining during 2011 and 2012. We examined the scaling relationship of pedestrian fatality counts as a function of the population size of 115 to 161 large U.S. cities during the period of 1994 to 2011. We also examined the scaling relationship of nonpedestrian and total traffic fatality counts as a function of the population size. Methods: As the data source for fatality measures, we used Traffic Safety Facts Fatality Analysis Reporting System/General Estimates System annual reports published each year from 1994 to 2011 by the NHTSA. Using this data source, we conducted both annual cross-sectional and panel data bivariate and multivariate regression models. In the construction of the estimated functional relationship between traffic fatality measures and various factors, we used the simple power function for urban scaling used by Bettencourt et al. (2007, 2010) and the refined STIRPAT (stochastic impacts by regression on population, affluence, and technology) model used in Dietz and Rosa (1994, 1997) and York et al. (2003). Results: We found that the scaling relationship displays diseconomies of scale, i.e., is sublinear, for pedestrian fatalities. However, the relationship is superlinear in the case of nonpedestrian fatalities. The scaling relationship for total traffic fatality counts displays a nearly linear pattern. When the relationship was examined by the 4 subgroups of cities with different population sizes, the most pronounced sublinear scaling relationships for all 3 types of fatality counts were discovered for the subgroup of megacities with a population of more than 1 million. Conclusions: The scaling patterns of traffic fatalities of subgroups of cities depend on the population sizes of the cities in the subgroups. In particular, 9 megacities with populations of more than 1 million are significantly different from the remaining cities and should be viewed as a totally separate group. Thus, analysis of the patterns of traffic fatalities needs to be conducted within the group of megacities separately from the other cities with smaller population sizes for devising prevention policies to reduce traffic fatalities in both megacities and smaller cities. abstract_id: PUBMED:25551356 Spatiotemporal approaches to analyzing pedestrian fatalities: the case of Cali, Colombia. Objective: Injuries among pedestrians are a major public health concern in Colombian cities such as Cali. This is one of the first studies in Latin America to apply Bayesian maximum entropy (BME) methods to visualize and produce fine-scale, highly accurate estimates of citywide pedestrian fatalities. The purpose of this study is to determine the BME method that best estimates pedestrian mortality rates and reduces statistical noise. We further utilized BME methods to identify and differentiate spatial patterns and persistent versus transient pedestrian mortality hotspots.
Methods: In this multiyear study, geocoded pedestrian mortality data from the Cali Injury Surveillance System (2008 to 2010) and census data were utilized to accurately visualize and estimate pedestrian fatalities. We investigated the effects of temporal and spatial scales, addressing issues arising from the rarity of pedestrian fatality events using 3 BME methods (simple kriging, Poisson kriging, and uniform model Bayesian maximum entropy). To reduce statistical noise while retaining a fine spatial and temporal scale, data were aggregated over 9-month incidence periods and census sectors. Based on a cross-validation of BME methods, Poisson kriging was selected as the best BME method. Finally, the spatiotemporal and urban built environment characteristics of Cali pedestrian mortality hotspots were linked to intervention measures provided in Mead et al.'s (2014) pedestrian mortality review. Results: The BME space-time analysis in Cali resulted in maps displaying hotspots of high pedestrian fatalities extending over small areas with radii of 0.25 to 1.1 km and temporal durations of 1 month to 3 years. Mapping the spatiotemporal distribution of pedestrian mortality rates identified high-priority areas for prevention strategies. The BME results allow us to identify possible intervention strategies according to the persistence and built environment of the hotspot; for example, through enforcement or long-term environmental modifications. Conclusions: BME methods provide useful information on the time and place of injuries and can inform policy strategies by isolating priority areas for interventions, contributing to intervention evaluation, and helping to generate hypotheses and identify the preventative strategies that may be suitable for those areas (e.g., street-level methods: pedestrian crossings, enforcement interventions; or citywide approaches: limiting vehicle speeds). This specific information is highly relevant for public health interventions because it provides the ability to target precise locations. abstract_id: PUBMED:36031254 Effects of large vehicles on pedestrian and pedalcyclist injury severity. Introduction: Fatal pedestrian and pedalcyclist crashes have been on the rise in the United States since 2009. This rise in fatalities coincides with the rise of large vehicles on American roadways, continuing a trend that began years earlier. Method: Through rare access to both crash and hospital records, this report investigates the relationship between striking vehicle type and medical outcomes of pedestrian and pedalcyclist cases. Results: Results suggest that children are eight times more likely to die when struck by an SUV compared to those struck by a passenger car. Passenger cars were the striking vehicle in most fatal pedestrian and pedalcyclist crashes, though they were underrepresented relative to the proportion of all crashes in which they were involved. Though pickup trucks were the striking vehicle in just 5.6% of pedestrian and pedalcyclist crashes, they were involved in 12.6% of fatalities. SUVs were similarly overrepresented in fatalities relative to the proportion of their involvement in all crashes. SUVs struck 14.7% of the pedestrians and pedalcyclists investigated here, but were involved in 25.4% of the fatalities. Head and thorax injury severities are examined by vehicle type and age. Hospital charges of pedestrian and pedalcycle crash victims are also analyzed by striking vehicle type and victim age.
Practical Applications: Findings suggest larger vehicles are involved in pedestrian and pedalcyclist crashes with more severe injuries that result in higher hospital charges. By race, Blacks are also found to be overrepresented as pedestrian and pedalcyclist crash victims. abstract_id: PUBMED:31462468 Population-based case-control study of the effect of sun glare on pedestrian fatalities in Taiwan. Objectives: Sun glare is a serious driving hazard and increases crash risks. Relatively few studies have examined the effects of sun glare on pedestrian fatalities, given that a crash has occurred. The primary objective of this study was to investigate the effect of sun glare on pedestrian fatalities. Design: A population-based case-control study. Setting: Taiwan. Participants: Using the Taiwan National Traffic Crash Data and sunrise and sunset data from the National Oceanic and Atmospheric Administration for the period 2003 to 2016, 100 411 pedestrians involved in crashes were identified. Of these crashes, 13 355 and 87 056 were glare-related (case) and non-glare-related (control) crashes, respectively. Methods: To account for unobserved heterogeneity, mixed logit models were estimated to identify the determinants of pedestrian fatalities. Main Outcome Measures: Pedestrian fatalities. Results: Pedestrians involved in glare-related crashes were more likely to be fatally injured than those in non-glare-related crashes (β=0.527; t=3.21). Other contributory factors to fatal injuries among pedestrians were older pedestrians (β=0.553; t=2.33), male drivers (β=0.324; t=2.33), older drivers (β=0.218; t=2.14), intoxicated motorists (β=0.606; t=2.85), rural roadways (β=0.985; t=3.92), overtaking manoeuvres (β=0.472; t=3.58), heavy vehicle crash partners (β=0.248; t=2.78) and sunset hours (β=0.274; t=3.08). Walking against traffic appeared beneficial for decreasing injury severity (β=-0.304; t=-2.76). Conclusions: Sun glare is associated with pedestrian fatalities. Older pedestrians, male drivers, older drivers and intoxicated motorists are prevalent determinants of pedestrian fatalities in glare-related crashes. abstract_id: PUBMED:31601217 Walking against or with traffic? Evaluating pedestrian fatalities and head injuries in Taiwan. Background: Allowing contraflow cycling on one-way streets has been reported to reduce crash risks in Belgium and the United Kingdom. Similarly, walking against traffic on roadways without sidewalks substantially improves pedestrian safety. This study examined fatalities and head injuries sustained by pedestrians in against-traffic and with-traffic crashes. Methods: Using police-reported crash data in Taiwan between 2011 and 2016, fatalities and head injuries were compared for pedestrians involved in against-traffic and with-traffic crashes. Results: Of the 14,382 pedestrians involved in crashes, 10,749 and 3633 pedestrians in with-traffic and against-traffic crashes, respectively, were reported. Compared with pedestrians involved in against-traffic crashes, those in with-traffic crashes were more likely to sustain fatalities and head injuries. Results of logistic regression models revealed several influential factors on pedestrian fatalities and head injuries, including elderly pedestrians, male drivers, intoxicated drivers, rural roadways, unlit streets in darkness, limited sight distance, adverse weather conditions, midnight hours, and a heavy vehicle as the crash partner. 
Conclusions: Pedestrians in with-traffic crashes were more likely to sustain fatalities and head injuries compared with those in against-traffic crashes. Furthermore, the negative effect of walking with traffic on injuries was more pronounced in reduced-visibility conditions. abstract_id: PUBMED:30553428 An examination of the increases in pedestrian motor-vehicle crash fatalities during 2009-2016. Introduction: Pedestrian fatalities increased 46% in the United States during 2009-2016. This study identified circumstances under which the largest increases in deaths occurred during this period. Method: Annual counts of U.S. pedestrian fatalities and crash involvements were extracted from the Fatality Analysis Reporting System and General Estimates System. Poisson regression examined whether pedestrian fatalities by various roadway, environmental, personal, and vehicle factors changed significantly during 2009-2016. Linear regression examined changes over the study period in pedestrian deaths per 100 crash involvements and in horsepower per 1000 pounds of weight among passenger vehicles involved in fatal single-vehicle pedestrian crashes. Results: Pedestrian deaths per 100 crash involvements increased 29% from 2010, when they reached their lowest point, to 2015, the most recent year for which crash involvement data were available. The largest increases in pedestrian deaths during 2009-2016 occurred in urban areas (54% increase from 2009 to 2016), on arterials (67% increase), at nonintersections (50% increase), and in dark conditions (56% increase). The rise in the number of SUVs involved in fatal single-vehicle pedestrian crashes (82% increase) was larger than the increases in the number of cars, vans, pickups, or medium/heavy trucks involved in these crashes. The power of passenger vehicles involved in fatal single-vehicle pedestrian crashes increased over the study period, with larger increases in vehicle power among more powerful vehicles. Conclusions: Efforts to turn back the recent increase in pedestrian fatalities should focus on the conditions where the rise has been the greatest. Practical Applications: Transportation agencies can improve urban arterials by investing in proven countermeasures, such as road diets, median crossing islands, pedestrian hybrid beacons, and automated speed enforcement. Better road lighting and vehicle headlights could improve pedestrian visibility at night. abstract_id: PUBMED:18329410 Magnitude and categories of pedestrian fatalities in South Africa. Road traffic injuries, a major global public health burden, are concentrated in low-income and middle-income countries. In contrast to high-income countries, pedestrians make up the largest group of road traffic injuries and fatalities in low- and middle-income countries. This article presents an analysis of pedestrian fatalities (2001-2004) in four South African cities. The article describes the magnitude, demographic, and temporal factors associated with pedestrian fatalities and presents a typological analysis to identify particular groups of at-risk pedestrians. The analysis can serve to inform the development of prevention programmes tailored to the needs of specific at-risk pedestrian groups. Data were obtained from the National Injury Mortality Surveillance System (NIMSS). The results indicated that there was a total of 7433 pedestrian deaths (2001-2004) for the four cities and the majority occurred over weekends. Most (56.7%) were between ages 20 and 44 years.
Overall, there were 3.3 male pedestrian deaths for every female pedestrian death, and over half (58%) of the 4004 cases tested were positive for alcohol. A typological analysis identified three categories of pedestrian fatalities: (1) male pedestrian fatalities that showed high alcohol concentrations, (2) female and elderly pedestrian deaths that occurred between 6 AM and midday, and (3) children, adolescents, and young adult pedestrian fatalities that typically occurred during weekday afternoons and evenings. The findings call for multiple strategies for combating pedestrian fatalities. abstract_id: PUBMED:31271110 Development of pedestrian crash prediction model for a developing country using artificial neural network. Urban intersections in India constitute a significant share of pedestrian fatalities. However, model-based prediction of pedestrian fatalities is still in a nascent stage in India. This study proposes an artificial neural network (ANN) technique to develop a pedestrian fatal crash frequency model at the intersection level. In this study, three activation functions are used along with four different learning algorithms to build different combinations of ANN models. In each of these combinations, the number of neurons in the hidden layer is varied by a trial-and-error method, and the best results are considered. In this way, 12 sets of pedestrian fatal crash predictive models are developed. Out of these, a Bayesian Regularization Neural Network consisting of 13 neurons in the hidden layer with a 'hyperbolic tangent-sigmoid' activation function is found to be the best-fit model. Finally, based on sensitivity analysis, it is found that the 'approaching speed' of the motorized vehicle has the most significant influence on fatal pedestrian crashes. 'Logarithm of average daily traffic' (ADT) volume is found to be the second most sensitive variable. Pedestrian-vehicular interaction concerning 'pedestrian-vehicular volume ratio' and lack of 'accessibility of pedestrian cross-walk' are found to be approximately as sensitive as 'logarithm of ADT'. abstract_id: PUBMED:32288991 Improving pedestrian level low wind velocity environment in high-density cities: A general framework and case study. An acceptable pedestrian level wind environment is essential to maintain an enjoyable outdoor space for city residents. A low wind velocity environment can lead to an uncomfortable outdoor thermal experience in hot and humid summers, and it is unable to flush pollutants out of city canyons. However, the average wind velocity at pedestrian level is significantly lowered by closely spaced tall buildings in modern megacities. To improve the low wind velocity environment at pedestrian level in high-density cities, a general framework and detailed guidelines are needed. This study is the first to develop such a framework and provide detailed guidelines for improving the pedestrian level low wind velocity environment in high-density cities. Additionally, a detailed review and summarisation of evaluation criteria and improvement measures are presented in this paper, providing additional options for urban planners. To investigate the performance of the framework, the Hong Kong Polytechnic University campus was utilised as a case study. Results showed that pedestrian level wind comfort was greatly improved with the developed framework.
The outcomes of this study can assist city planners in improving the low wind velocity environment, and can help policy makers establish sustainable urban planning policies. abstract_id: PUBMED:35877997 Microscale pedestrian environment surrounding pedestrian injury sites in Washington state, 2015-2020. Objective: While microscale pedestrian environment features such as sidewalks and crosswalks can affect pedestrian safety, it is challenging to assess microscale environment-associated risk across locations or at scale. Addressing these challenges requires an efficient auditing protocol that can be used to assess frequencies of microscale environment features. For this reason, we developed an eight-item pedestrian environment virtual audit protocol and conducted a descriptive epidemiologic study of pedestrian injury in Washington State, USA. Methods: We used data from police reports at pedestrian-automotive collision sites where the pedestrian was seriously injured or died. At each collision site, high school students participating in an online summer internship program virtually audited Google Street View imagery to assess the presence of microscale pedestrian environment features such as crosswalks and streetlighting. We assessed inter-rater reliability using Cohen's kappa and explored prevalence of eight microscale environment features in relation to injury severity and municipal boundaries. Results: There were 2248 motor vehicle crashes eliciting police response and resulting in death or serious injury of a pedestrian in Washington State between January 1, 2015 and May 8, 2020. Of the crashes resulting in serious injury or death, 498 (22%) resulted in fatalities and 1840 (82%) occurred within municipal boundaries. Cohen's kappa scores for the eight pedestrian features that were audited ranged from 0.52 to 0.86. Audit results confirmed that features such as sidewalks and crosswalks were more common at collision sites within city limits. Conclusions: High school student volunteers with minimal training can reliably audit microscale pedestrian environments using limited resources. Answer: Yes, there are higher pedestrian fatalities in larger cities. The scaling analysis of 115 to 161 large U.S. cities during the period of 1994 to 2011 found that the scaling relationship displays diseconomies of scale, i.e., sublinear scaling, for pedestrian fatalities, meaning that as cities increase in size, pedestrian fatalities also increase but at a slower rate than population growth. However, the most pronounced sublinear scaling relationships for all three types of fatality counts (pedestrian, nonpedestrian, and total traffic fatalities) were discovered for the subgroup of megacities with a population of more than 1 million. This indicates that megacities with populations over 1 million are significantly different from smaller cities and should be viewed as a separate group when analyzing patterns of traffic fatalities and devising prevention policies (PUBMED:27027479). Additionally, the study examining increases in pedestrian motor-vehicle crash fatalities during 2009-2016 in the United States found that the largest increases in pedestrian deaths occurred in urban areas (54% increase from 2009 to 2016), on arterials (67% increase), at nonintersections (50% increase), and in dark conditions (56% increase) (PUBMED:30553428). This further supports the notion that larger cities, which are typically more urbanized and have more arterial roads, experience higher pedestrian fatalities.
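The scaling analysis cited in the answer above fits a power function Y = Y0 * N^beta to fatality counts Y and city populations N, with beta < 1 read as sublinear scaling (diseconomies of scale) and beta > 1 as superlinear. The sketch below illustrates such a fit via ordinary least squares on the log-log form; it is not the study's code, and the data are synthetic rather than NHTSA counts.

```python
# Sketch of a Bettencourt-style urban-scaling fit: Y = Y0 * N**beta,
# estimated as log Y = log Y0 + beta * log N by ordinary least squares.
# All data below are synthetic, generated with a known sublinear exponent.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic city populations (1e5 to 8e6) and fatality counts produced with
# beta = 0.85 plus lognormal noise, mimicking the pedestrian-fatality finding.
pop = rng.uniform(1e5, 8e6, size=150)
fatalities = 2e-3 * pop**0.85 * rng.lognormal(sigma=0.3, size=150)

# OLS on the log-log form: columns are [intercept, log(population)].
X = np.column_stack([np.ones_like(pop), np.log(pop)])
coef, *_ = np.linalg.lstsq(X, np.log(fatalities), rcond=None)
log_y0, beta = coef

print(f"estimated beta = {beta:.3f}")
# beta < 1 is sublinear (diseconomies of scale): larger cities record more
# fatalities in absolute terms but fewer per capita. beta > 1 would be
# superlinear, as the study reports for nonpedestrian fatalities.
```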
Instruction: A randomized controlled trial: is water better than a standard lubricant in nasendoscopy? Abstracts: abstract_id: PUBMED:16620333 A randomized controlled trial: is water better than a standard lubricant in nasendoscopy? Objectives: Previous studies have shown that nasendoscopy is made easier with the use of lubrication at the expense of the quality of the view obtained, but that lubrication had no effect on the discomfort or pain experienced by the patient. We set out to determine whether the advantages of lubrication with a standard lubricant (KY Jelly) could be achieved by using water, without incurring the same disadvantages. Design: Single-blind randomized controlled trial. Setting: Outpatient Departments at two District General Hospitals. Participants: One hundred and fifty participants selected sequentially from patients requiring a nasendoscopy where the endoscope would be passed far enough to view the larynx. Patients requiring topical anaesthetic were excluded. Methods: Patients were randomly assigned to have either water or standard lubricant applied to the nasendoscope prior to nasendoscopy. Levels of pain experienced by the patient were recorded on a visual analogue scale, as were visual analogue scores from the clinician for the levels of difficulty in passing the endoscope. All participants who entered the study completed it. Endoscopy was performed by all grades of medical staff. Main Outcome Measures: Levels of pain experienced by the participants were assessed by a visual analogue scale. Difficulty of passing the scope was indicated by the endoscopist using a visual analogue scale, as was the quality of image obtained. Results: Endoscopists found the insertion of the endoscope was easier and the image better when water was used rather than KY Jelly. There was no significant difference in reported levels of pain experienced by the patient. Conclusion: Although the use of water as a lubricant during nasendoscopy confers no additional benefit in terms of the levels of pain suffered by patients during the procedure, it makes the insertion of the endoscope easier and provides a better quality image than standard lubrication. abstract_id: PUBMED:36319196 Lubricant for reducing perineal trauma: A systematic review and meta-analysis of randomized controlled trials. Aim: To assess the effect of lubricants on reducing perineal trauma during vaginal delivery. Methods: PubMed, Embase, the Cochrane Library, CINAHL, China National Knowledge Infrastructure, China Biology Medicine disc, WanFang databases, and ClinicalTrials.gov were searched for literature up to 25 June 2021. Randomized controlled trials published in English or Chinese that compared the vaginal application of lubricant with standard care for women were included. Two reviewers independently performed study screening, data extraction, risk of bias assessment, and certainty of evidence assessment. Pooled effect sizes and corresponding 95% confidence intervals (CI) were calculated using meta-analysis. Results: Nineteen trials enrolling 5445 pregnant women were included. Compared with standard care, women using lubricants had a lower incidence of perineal trauma (risk ratio [RR] 0.84, 95% CI 0.76-0.93; low certainty evidence), second-degree perineal laceration (RR 0.72, 95% CI 0.64-0.82; moderate certainty evidence) and episiotomy (RR 0.77, 95% CI 0.62-0.96; very low certainty evidence), and had a shorter duration of the second-stage labor (MD -13.72 min, 95% CI -22.68 to -4.77; very low certainty evidence).
Conclusion: Lubricants might reduce the incidence of perineal trauma, especially second-degree perineal laceration, and shorten the duration of the second-stage labor. More well-designed studies are needed to develop high-quality evidence in this field. abstract_id: PUBMED:16209678 The use of lubrication in flexible fibreoptic nasendoscopy: a randomized controlled trial. Objectives: To determine how lubrication of the endoscope prior to flexible fibreoptic nasendoscopy affects levels of patient discomfort and pain during endoscopy. To assess the extent to which lubrication affects the ease of nasendoscopy and the quality of the image obtained. Design: Single-blind, randomized controlled trial. Setting: Outpatient clinic of the Department of Otolaryngology at a secondary level referral centre. Participants: A total of 150 participants selected sequentially from patients requiring a nasendoscopy where the endoscope would be passed far enough to view the larynx. Patients requiring topical anaesthetic were excluded. All participants who were entered in the study completed the study. Endoscopy was performed by all grades of medical staff. Main Outcome Measures: Levels of discomfort and pain experienced by the participants were assessed by a visual analogue scale (VAS). Difficulty of passing the scope was indicated by the endoscopist using a VAS, as was the quality of image obtained. Results: There was no significant difference between mean scores for pain or discomfort of the lubricated and unlubricated groups (P = 0.46 and P = 0.82 respectively). Scores for difficulty of passing the scope were significantly lower in the lubricant group (P = 0.003) but loss of image was significantly greater (P = 0.008). Conclusions: Lubrication of the endoscope prior to nasendoscopy does not reduce levels of discomfort or pain for the patient, but increases ease of insertion for the operator. However, this may be at the expense of reduced image quality. abstract_id: PUBMED:18492307 Nasal and instrument preparation prior to rigid and flexible nasendoscopy: a systematic review. Background: Examination of the upper aerodigestive tract is an important part of ENT practice. The use of both flexible and rigid nasendoscopes is the most common way of achieving this in the out-patient setting. However, these procedures can cause pain or discomfort for the patient, and topical preparations have been used in an attempt to reduce this. Objectives: The variability in current practice amongst those performing nasendoscopy may suggest an uncertainty as to what constitutes best practice for this procedure. A systematic review of the literature was undertaken in an attempt to clarify this. Methods: A literature search of the Cochrane ENT group trials register, the Cochrane central register of controlled trials (CENTRAL), CINAHL (1982-2007), MEDLINE (1950-2007) and EMBASE (1974-2007) was performed. Reference lists of selected studies were scanned for additional research material. Results: Eighteen studies relevant to this review were identified. The evidence suggests that local anaesthetic is not beneficial when performing flexible nasendoscopy, neither alone nor in combination with a vasoconstrictor. Water is better than lubricant for flexible endoscope passage and gives a superior optical outcome. Further research is required on the use of endosheaths for flexible and rigid nasendoscopy. abstract_id: PUBMED:17040589 Timing of co-phenylcaine administration before rigid nasendoscopy: a randomized, controlled trial.
Rigid nasendoscopy is a commonly used method of examining the nasal cavity and postnasal space. Co-phenylcaine is useful for its vasoconstrictive and anaesthetic properties, but the length of time allowed for it to take effect is variable. We performed a single-blind, randomized, controlled trial to determine whether it was better to allow one or 10 minutes for co-phenylcaine to take effect. Fifty patients were randomized into two groups, 25 in each. Patients in the 10-minute group experienced less discomfort (p=0.02) and less pain (p=0.018) than those in the one-minute group. Ease of examination was also greater in the 10-minute group, as was the quality of the image obtained (p<0.001). abstract_id: PUBMED:16441799 Topical anaesthesia before nasendoscopy: a randomized controlled trial of co-phenylcaine compared with lignocaine. Objective: To evaluate the relative effectiveness of co-phenylcaine (lignocaine 5% with phenylephrine) and lignocaine 5% sprays when administered prior to rigid nasendoscopy. Design: Randomized, double-blind controlled study. Setting: Teaching hospital otolaryngology unit. Participants: Thirty patients requiring routine outpatient rigid nasendoscopy were administered five puffs of either co-phenylcaine or lignocaine 5% spray, which had been randomly assigned to either the first or the second visit. Ten minutes later nasendoscopy was performed. Immediately after nasendoscopy the ease of performance of the procedure and the quality of the view achieved was rated on a visual analogue scale by the endoscopist, and the patients recorded the level of pain experienced on a visual analogue scale. Two weeks later, the patients returned for a repeat nasendoscopy, receiving the alternate spray. Main Outcome Measures: Ease of performance and quality of view achieved by endoscopists and pain experienced by patients, both measured with visual analogue scales. Results: The ease of passage of the endoscope and quality of the view obtained were found to be greater after the administration of co-phenylcaine [visual analogue score 84 (95% CI: 80-89)] than after lignocaine [77 (95% CI: 73-81)] (P < 0.01). The two sprays produced similar levels of topical anaesthesia. Conclusions: Nasendoscopy can be performed with minimal discomfort after the administration of either co-phenylcaine or lignocaine 5% sprays. The vasoconstricting action of co-phenylcaine increases the ease of passage of the endoscope and quality of the view obtained by the endoscopist. abstract_id: PUBMED:25302273 The role of nasendoscopy in the fabrication of a palatopharyngeal obturator - a case report. Defects confined to the hard palate can be managed with relative ease because the hard palate is a static shelf creating oro-nasal separation. Since the soft palate is a dynamic separator between the oral and nasal cavities, defects involving the soft palate require careful consideration. Instrumental visualization methods can aid the fabrication of obturators in patients with such defects and prevent problems of under- or over-obturation. This case report presents the prosthodontic management of a patient with a Veau class II cleft palate using a palatopharyngeal obturator in order to re-establish normal functions of deglutition and speech. Nasendoscopy was used as an adjunctive procedure to assess velopharyngeal function with the prosthesis. abstract_id: PUBMED:25690952 Ultrasmooth submicrometer carbon spheres as lubricant additives for friction and wear reduction.
Ultrasmooth submicrometer carbon spheres are demonstrated as an efficient additive for improving the tribological performance of lubricating oils. Carbon spheres with ultrasmooth surfaces are fabricated by ultrasound-assisted polymerization of resorcinol and formaldehyde followed by controlled heat treatment. The tribological behavior of the new lubricant mixture is investigated in the boundary and mixed lubrication regimes using a pin-on-disk apparatus and cylinder-on-disk tribometer, respectively. The new lubricant composition containing 3 wt% carbon spheres suspended in a reference SAE 5W30 engine oil exhibited a substantial reduction in friction and wear (10-25%) compared to the neat oil, without change in the viscosity. Microscopic and spectroscopic investigation of the carbon spheres after the tribological experiments illustrated their excellent mechanical and chemical stability. The significantly better tribological performance of the hybrid lubricant is attributed to the perfectly spherical shape and ultrasmooth surface of the carbon sphere additive filling the gap between surfaces and acting as a nanoscale ball bearing. abstract_id: PUBMED:33302056 The challenge of lubricant-replenishment on lubricant-impregnated surfaces. Lubricant-impregnated surfaces are two-component surface coatings. One component, a fluid called the lubricant, is stabilized at a surface by the second component, the scaffold. The scaffold can either be a rough solid or a polymeric network. Drops immiscible with the lubricant hardly pin on these surfaces. Lubricant-impregnated surfaces have been proposed as candidates for various applications, such as self-cleaning, anti-fouling, and anti-icing. The proposed applications rely on the presence of enough lubricant within the scaffold. Therefore, the quality and functionality of a surface coating are, to a large degree, given by the extent to which it prevents lubricant-depletion. This review summarizes the current findings on lubricant-depletion, lubricant-replenishment, and the resulting understanding of both processes. A multitude of different mechanisms can cause the depletion of lubricant. Lubricant can be taken along by single drops or be sheared off by liquid flowing across. Nano-interstices and scaffolds showing good chemical compatibility with the lubricant can greatly delay lubricant depletion. Often, depletion of lubricant cannot be avoided under dynamic conditions, which warrants lubricant-replenishment strategies. The strategies to replenish lubricant are presented and range from spraying or stimuli-responsive release to built-in reservoirs. abstract_id: PUBMED:24158493 Intranasal topical local anesthetic and decongestant for flexible nasendoscopy in children: a randomized, double-blind, placebo-controlled trial. Importance: To our knowledge, the present study is the first double-blind, randomized, placebo-controlled trial in children to compare nasal preparation sprays administered before flexible nasendoscopy with placebo. Objective: To compare the degree of pain experienced by children undergoing flexible nasendoscopy after 1 of 3 intranasal sprays: placebo, decongestant with topical local anesthetic (TLA), or decongestant without TLA. Design, Setting, And Participants: A randomized placebo-controlled trial with blinding of participants, caregivers, observers, and otolaryngologists was conducted in a tertiary pediatric otolaryngology ambulatory clinic.
Participants included a consecutive sample of children aged 3 to 12 years requiring flexible nasendoscopy. Exclusion criteria included concomitant respiratory tract infection, known allergy to a trial agent, or previous flexible nasendoscopy. One hundred fifty-one children were assessed for eligibility; 24 eligible children refused participation and 69 were included and block-randomized. All completed the study, and there were no adverse events. Interventions: Nasal spray administration of placebo (normal saline); xylometazoline hydrochloride, 0.05% (decongestant); or lidocaine hydrochloride, 1%, with xylometazoline hydrochloride, 0.05% (TLA with decongestant) was performed 10 minutes before flexible nasendoscopy. Main Outcomes And Measures: The primary outcome measure was the child-reported Wong-Baker Faces Pain (WBFP) scale. Secondary outcomes included the caregiver-proxy WBFP scale; the Face, Legs, Activity, Cry, and Consolability (FLACC) scale; and the physician-reported Difficulty of Procedure Visual Analog Scale (DPVAS). Results: Twenty-three children were recruited in each of the intervention arms. Baseline characteristics were comparable between groups. The mean child-rated WBFP scale scores were 2.4, 1.8, and 2.2 for the placebo, decongestant, and TLA with decongestant groups, respectively (P = .45). Although the finding was statistically nonsignificant, decongestant had the lowest mean caregiver-proxy WBFP scale score, lowest observer-rated FLACC scale score, and highest physician-rated DPVAS score. Subgroup analysis did not demonstrate any correlation between the outcomes and age or sex. Conclusions And Relevance: This study revealed no statistically significant difference in the discomfort experienced by children undergoing flexible nasendoscopy after placebo, decongestant, or TLA with decongestant. Decongestant was associated with the least discomfort (on child, caregiver, and observer-rated pain scale scores) and the lowest rating for difficulty of procedure. With these findings, the study suggests that there is no significant benefit of topical decongestant with or without TLA compared with placebo in reducing pain associated with pediatric flexible nasendoscopy. Trial Registration: clinicaltrials.gov identifier NCT01351298. Answer: According to the randomized controlled trial described in the abstract with PUBMED ID 16620333, water was found to be better than a standard lubricant (KY Jelly) for use in nasendoscopy. The study concluded that while the use of water as a lubricant during nasendoscopy did not confer any additional benefit in terms of reducing the levels of pain suffered by patients during the procedure, it did make the insertion of the endoscope easier for the clinician and provided a better quality image than when standard lubrication was used. The endoscopists reported that the insertion of the endoscope was easier and the image quality was better with water compared to KY Jelly. There was no significant difference in the levels of pain experienced by the patient between the two lubricants.
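The pooled effect sizes in the perineal-trauma meta-analysis above (e.g., RR 0.84, 95% CI 0.76-0.93) rest on risk ratios computed per trial and then combined. The sketch below shows the single-trial computation using the usual log-RR normal approximation; the event counts are invented for illustration. In an actual meta-analysis, the per-trial log-RRs would then be pooled with inverse-variance weights, which is what narrows the pooled confidence interval.

```python
# Sketch of a risk ratio with a 95% CI, the per-trial building block of the
# pooled estimates quoted in the lubricant meta-analysis. Counts are invented.

import math

def risk_ratio(events_tx: int, n_tx: int, events_ctrl: int, n_ctrl: int):
    """Risk ratio and 95% CI via the log-RR normal approximation."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # Standard error of log(RR) for a two-arm trial
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical trial: 120/300 women with perineal trauma using lubricant
# versus 143/300 with standard care.
rr, lo, hi = risk_ratio(120, 300, 143, 300)
print(f"RR {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR 0.84, 95% CI 0.70-1.01
```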
Instruction: Smoking trends in Switzerland, 1992-2007: a time for optimism? Abstracts: abstract_id: PUBMED:20427551 Smoking trends in Switzerland, 1992-2007: a time for optimism? Objective: To assess trends in smoking status according to gender, age and educational level in the adult Swiss population. Methods: Four national health interview surveys conducted between 1992 and 2007 in representative samples of the Swiss population. Results: The prevalence of current smokers increased between 1992 and 1997, decreasing thereafter. In 2007, the prevalence of current smokers (32.0% of men and 23.8% of women) was lower than in 1992 (38.4% and 26.7%, respectively). Whereas the prevalence of current + former smoking decreased from 64.5% in 1992 to 59.3% in 2007 among men, it was similar among women during the same period (44.0% in 1992 and 43.9% in 2007). The prevalence of current + former smokers decreased from 47.2% in 1992 to 46.3% in 2007 in the lower education group (no education + primary), from 54.8% to 52.9% in subjects with secondary level education, and from 55.4% to 48.7% in subjects with university level education. The prevalence of current smokers decreased in all age groups. Finally, the amount of cigarette equivalents smoked per day decreased, but the amount of non-cigarette tobacco (alone or in combination with cigarettes) increased for both sexes. Conclusion: The prevalence of smoking has been decreasing in the Swiss population, for both sexes and for most age groups and educational levels between 1992 and 2007. The health effects of the change in type of tobacco products consumed await further investigation. abstract_id: PUBMED:28194642 Are Optimism and Cynical Hostility Associated with Smoking Cessation in Older Women? Background: Optimism and cynical hostility independently predict morbidity and mortality in Women's Health Initiative (WHI) participants and are associated with current smoking. However, their association with smoking cessation in older women is unknown. Purpose: The purpose of this study is to test whether optimism (positive future expectations) or cynical hostility (mistrust of others) predicts smoking cessation in older women. Methods: Self-reported smoking status was assessed at years 1, 3, and 6 after study entry for WHI baseline smokers who were not missing optimism or cynical hostility scores (n = 10,242). Questionnaires at study entry assessed optimism (Life Orientation Test-Revised) and cynical hostility (Cook-Medley, cynical hostility subscale). Generalized linear mixed models adjusted for sociodemographics, lifestyle factors, and medical and psychosocial characteristics including depressive symptoms. Results: After full covariate adjustment, optimism was not related to smoking cessation. Each 1-point increase in baseline cynical hostility score was associated with 5% lower odds of cessation over 6 years (OR = 0.95, CI = 0.92-0.98, p = 0.0017). Conclusions: In aging postmenopausal women, greater cynical hostility predicts lower smoking cessation over time. Future studies should examine whether individuals with this trait may benefit from more intensive cessation resources or whether attempting to mitigate cynical hostility itself may aid smoking cessation. abstract_id: PUBMED:31777707 Associations between optimism, tobacco smoking and substance abuse among Iranian high school students. Background: Optimism is known to be associated with many health behaviors. 
However, the associations between optimism, tobacco smoking and substance abuse in adolescents are not well documented. This study aimed to address this research gap in a large school-based population. Methods: Participants (N = 1104) were selected based on a multi-stage cluster sampling method. Cigarette and hookah smoking behaviors, illicit drug use, optimism, and relevant covariates were measured using a validated questionnaire. Data were analyzed using ordinal logistic regression. Results: After adjustment, a higher optimism score was a protective factor against being situated in advanced stages of cigarette smoking (odds ratio [OR] = 0.88, 95% CI: 0.84-0.91), hookah smoking (OR = 0.91, 95% CI: 0.88-0.94), and illicit drug usage (OR = 0.90, 95% CI: 0.85-0.95). Moreover, the results showed that negative-stability and negative-globality domains of optimism were significantly higher among advanced-stage smokers and illicit drug users. Conclusion: Optimism was found to be a protective factor against tobacco smoking and substance abuse, whereas pessimism (negative-stability and negative-globality) was found to be a determinant factor. Further research is needed to investigate the effects of optimism on the transition in cigarette and hookah smoking stages. abstract_id: PUBMED:3672086 Trends in and distribution of lung cancer in Switzerland. Recent trends in lung cancer mortality in Switzerland and the distribution of this cancer by districts, linguistic regions, and occupations are described. In 1985, 7.5% of male deaths were due to the disease; female lung cancer is increasing (male-female ratio of deaths in 1985: 5.9; in the age-class 35-44: 1.5). From 1970 to 1980, lung cancer became more frequent in the French-speaking area than in the rest of Switzerland. The geographic distribution shows a correlation with population density (chiefly in the over-65 age group) and there were high rates in agglomerations such as Basle and in some peripheral regions. The impact of smoking and occupational/environmental factors on the distribution of lung cancer by regions and occupations is discussed. abstract_id: PUBMED:8711458 Trends in alcohol consumption in Switzerland from 1975-1992. Based on four representative surveys independently conducted at 6-year intervals, the study presented here delineates the development of alcohol consumption in the general population of Switzerland between 1975 and 1992. A description of changes in the frequency and quantity of alcohol consumption by sex is presented, as well as specific changes by beverage type. Even though Switzerland remains one of the high-consumption countries, a marked decline can be observed. Compared to 1975, when 28% of respondents reported daily alcohol consumption, this figure dropped to 20% in 1992. Although 10% consumed more than 60 grams of pure alcohol in the 1975 sample, this proportion fell to only 3% in 1992. This downward trend, which is also reflected in secondary statistics such as sales and mortality figures, can be mainly attributed to male consumers. In contrast to international trends, which in general demonstrate converging drinking patterns, reduced consumption in Switzerland, a wine-producing country, is mainly due to a decline in beer consumption and not in wine consumption. abstract_id: PUBMED:3244670 Trends in smoking and lung cancer mortality in Switzerland. Patterns of cigarette smoking in Switzerland were analyzed on the basis of sales data (available since 1924) and national health surveys conducted in the last decade.
There was a steady and substantial increase in cigarette sales up to the early 1970s. Thereafter, the curve tended to level off around an average value of 3,000 cigarettes per adult per year. According to the 1981-1983 National Health Survey, 37% of Swiss men were current smokers, 25% were ex-smokers, and 39% were never smokers. Corresponding proportions in women were 22, 11, and 67%. Among men, smoking prevalence was higher in lower social classes, and some moderate decline was apparent from survey data over the period 1975-1981, mostly in later middle age. Trends in lung cancer death certification rates over the period 1950-1984 were analyzed using standard cross-sectional methods and a log-linear Poisson model to isolate the effects of age, birth cohort, and year of death. Mortality from lung cancer increased substantially among Swiss men between the early 1950s and the late 1970s, and levelled off (around a value of 70/100,000 men) thereafter. Among women, there has been a steady upward trend which started in the mid-1960s, and continues to climb steadily, although lung cancer mortality is still considerably lower in absolute terms (around 8/100,000 women) than in several North European countries or in North America. Cohort analyses indicate that the peak rates in men were reached by the generation born around 1910 and mortality stabilized for subsequent generations up to the 1930 birth cohort. Among females, marked increases were observed in each subsequent birth cohort. This pattern of trends is consistent with available information on smoking prevalence in successive generations, showing a peak among men for the 1910 cohort, but steady upward trends among females. Over the period 1980-1984, about 90% of lung cancer deaths among Swiss men and about 40% of those among women could be attributed to smoking (overall proportion, 85%). abstract_id: PUBMED:9702156 Trends in cigarette smoking: the Minnesota Heart Survey, 1980 through 1992. Objectives: The purpose of this study was to describe trends in the prevalence of cigarette smoking between 1980 through 1982 and 1990 through 1992 in Minneapolis and St. Paul, Minn. Methods: Three population-based surveys were conducted among adults 25 to 74 years of age in 1980 through 1982, 1985 through 1987, and 1990 through 1992. Results: Overall age-adjusted prevalences of cigarette smoking declined significantly between 1980-1982 and 1985-1987 and between 1985-1987 and 1990-1992. Serum thiocyanate, a biochemical marker for tobacco use, also declined significantly over the 3 periods. Conclusions: Favorable trends in smoking prevalence and cigarette consumption among smokers were observed, but disturbing trends in some smoking behaviors were also noted. abstract_id: PUBMED:38076700 Smoking trends and health equity in Switzerland between 1992 and 2017: dependence of smoking prevalence on educational level and social determinants. Background: Switzerland ranks among the top three healthcare systems in the world with regard to healthcare access, suggesting a high degree of health equity. However, Switzerland has few preventive strategies against smoking abuse. The aim of this study is to clarify whether educational level and citizenship status have an influence on the prevalence of smoking in Switzerland and whether there is health inequity related to a lack of preventive strategies. Methods: We based our analysis on publicly available health data published in the Swiss government's Swiss health survey (1992-2017).
We compared the prevalence of smoking across the years and correlated these data with levels of educational attainment, citizenship status and age. Results: A continuous significant decline in smokers is observed in the highest education group (TERT). Over time, prevalence was reduced from 29% in 1992 to 23% in 2017 (p < 0.001). The intermediate-level educational group (SEK 2) showed a smaller but still significant decline at the 0.05 significance level over the same period, from 31% to 29% (p = 0.003). The lowest educational group showed a nonsignificant decline from 28% to 27% (p = 0.6). The population holding Swiss citizenship showed a decrease in smoking from 28% to 26% within the time frame (p < 0.001). People without Swiss citizenship had a much higher prevalence of smokers, at 38% in 1992 and declining to 32% in 2017 (p < 0.001). All cohorts from age 15 to age 64 have a far higher prevalence of smokers than cohorts at an older age, with the highest prevalence in the 25-34 age group. Conclusion: In Switzerland, individuals with lower levels of education and non-Swiss populations are more susceptible to the health risks of smoking. This is despite the existence of a high-quality healthcare system that has nevertheless failed to negate health inequities. abstract_id: PUBMED:28634966 Prevalence of tobacco smoking in Switzerland: do reported numbers underestimate reality? Question Under Study: Surveys on tobacco consumption represent an important decision aid for public health policy related to smoking. Although Switzerland's tobacco control policies and regulations are among the weakest, its reported smoking prevalence is among the lowest in Europe. However, there appears to be a discrepancy between reported prevalence and aggregate data on national cigarette consumption. Our purpose was to look closely at this discrepancy and study its significance. Methods: Calculation of national aggregate tobacco consumption by Switzerland's resident population for the years 2012 to 2015 and comparison with reported consumption derived from survey results on smoking prevalence and intensity. Comparison with similar data for France for the year 2014. Evaluation of several hypotheses to explain the discrepancy between survey results and national aggregate consumption data. Results: There was a large discrepancy of about 45% between reported smoking consumption implied by survey results and estimated actual consumption derived from aggregate data on sales. Whereas survey results suggest smoking prevalence in Switzerland to be around 25% and rather stable during the period 2012 to 2015, true prevalence could be greater than 31%. Conclusion: The results of surveys aiming to estimate smoking prevalence and intensity in Switzerland may substantially suffer from underreporting, misreporting and sampling bias. Survey methodology needs to be examined to see whether such reporting discrepancies can be reduced.
Findings based on a nationally representative sample of 16-20 year-olds in Switzerland (N = 4312) show that both average levels of stress and optimism as well as annually varying levels of stress and optimism affect dropout intentions. Additionally, results show that optimism buffers the negative impact of annually varying stress (i.e., years with more stress than usual), but not of stable levels of stress (i.e., stress over four years). The implications of the results are discussed according to a dynamic and preventive approach to school dropout. Answer: The analysis of smoking trends in Switzerland between 1992 and 2007 suggests a time for cautious optimism. The prevalence of current smokers increased between 1992 and 1997 but decreased thereafter. By 2007, the prevalence of current smokers was lower than in 1992, with 32.0% of men and 23.8% of women being current smokers, compared to 38.4% and 26.7%, respectively, in 1992 (PUBMED:20427551). Additionally, the prevalence of current + former smoking decreased among men from 64.5% in 1992 to 59.3% in 2007, and a similar decrease was observed across different educational levels. The prevalence of current smokers also decreased in all age groups, and the amount of cigarette equivalents smoked per day decreased. However, there was an increase in the consumption of non-cigarette tobacco for both sexes (PUBMED:20427551). These findings indicate a positive trend in smoking behavior in the Swiss population, with reductions in smoking prevalence across both sexes, most age groups, and educational levels. The shift in the type of tobacco products consumed and its potential health effects would require further investigation. Despite the positive trends, the need for continued efforts in smoking cessation and prevention is underscored by the fact that certain psychological traits, such as cynical hostility, have been found to predict lower smoking cessation rates in older women (PUBMED:28194642). Moreover, optimism has been identified as a protective factor against tobacco smoking and substance abuse among Iranian high school students, suggesting that psychological factors can play a significant role in smoking behavior (PUBMED:31777707). In conclusion, the decreasing prevalence of smoking in Switzerland from 1992 to 2007 provides a reason for optimism, but it also highlights the importance of addressing psychological factors and the changing landscape of tobacco product consumption in comprehensive tobacco control strategies.
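The trend claims in these abstracts (e.g., male smoking prevalence falling from 38.4% in 1992 to 32.0% in 2007) are comparisons of proportions across survey waves. A minimal sketch of such a comparison as a two-proportion z-test follows; the per-wave sample sizes are assumptions for illustration, since the abstracts do not report them, although the Swiss Health Survey waves each enrolled on the order of 10^4 adults.

```python
# Sketch of a two-proportion z-test for a prevalence change between two
# survey waves: 38.4% male smokers in 1992 versus 32.0% in 2007.
# The sample sizes of 8000 per wave are assumed, not reported values.

from math import sqrt, erf

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, Phi(z) = (1 + erf(z/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(0.384, 8000, 0.320, 8000)
print(f"z = {z:.2f}, p = {p:.2e}")  # a 6.4-point drop at this n is highly significant
```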
Instruction: Mycobacteria and glutaraldehyde: is high-level disinfection of endoscopes possible? Abstracts: abstract_id: PUBMED:8726756 Mycobacteria and glutaraldehyde: is high-level disinfection of endoscopes possible? Background: High-level disinfection of endoscopes has traditionally been undertaken by manual or automatic scope cleaning plus a 10 to 20 minute soak in 2% alkaline glutaraldehyde. Mycobacteria species are less sensitive to glutaraldehyde, and a 45-minute instrument soak has recently been recommended by the manufacturer. Because of concerns over endoscope damage, the need for more endoscopes, and the perception that the current cleaning method is adequate, we prospectively studied mycobacteria-contaminated endoscopes at various stages of the cleaning process. Methods: All work was done under a laminar flow hood in a microbiology laboratory. Five gastrointestinal scopes were contaminated with 10^8 colony forming units per milliliter (CFU/mL) of Mycobacterium chelonei, an atypical mycobacterium similar in chemical resistance to Mycobacterium tuberculosis but with less infectious potential. Cultures of the sheath, biopsy channel, and elevator channel were taken at baseline, after manual cleaning, and after 10, 20, and 45 minutes of glutaraldehyde soak, both before and after alcohol rinse. Results: Manual cleaning resulted in a mean 4.7 log10 reduction in viable mycobacterial colonies. Qualitative studies of the external endoscope surface as well as the air-water valve showed no detectable organisms after a 10-minute exposure to alkaline glutaraldehyde. Conventional quantitative culture techniques of the channels demonstrated one endoscope out of five with consistent growth after a 10-minute exposure to glutaraldehyde. Following alcohol treatment, there was no significant colony growth. In contrast, a quantitative membrane filter system showed the presence of at least one mycobacterial colony in four out of five scopes after a 45-minute glutaraldehyde exposure. Conclusions: Additional studies utilizing a standardized mycobacterial species, inoculum size, and suspension characteristics are recommended to delineate the adequate duration of disinfectant exposure. abstract_id: PUBMED:19033060 Sterilization and disinfection of endoscopes in urology Sterilization and disinfection of endoscopes must take account of the risk of transmitted and nosocomial infections. These risks are governed by legal texts. Urology is a high-risk speciality. The material used must be single-use or at least sterilisable (18 min at 134 degrees C). Flexible endoscopes are sensitive to high temperatures and require disinfection followed by immediate use. These steps are subject to quality-control rules and marking. abstract_id: PUBMED:15229429 Bacteriologic testing of endoscopes after high-level disinfection. Background: There are no definitive data available concerning the microbiologic safety of prolonged endoscope storage after reprocessing and disinfection. This study evaluated the durability of high-level disinfection of endoscopes stored in a dust-proof cabinet for 5 days. Methods: Three different types of endoscopes (upper endoscopes, duodenoscopes, colonoscopes) were tested. After completion of the endoscopic procedure, endoscopes were subjected to an initial decontamination, followed by manual cleaning with the endoscope immersed in detergent. The endoscopes then were placed in an automatic reprocessor that provides high-level disinfection. They then were stored by hanging in a dust-proof cabinet.
Bacteriologic samples were obtained from the surface of the endoscopes, the openings for the piston valves, and the accessory channel daily for 5 days, and by flush-through (combined with brushing) from the accessory channels after 5 days of storage. Samples were cultured for all types of aerobic and anaerobic bacteria, including bacterial spores, and for Candida species. Results: For all assays, all endoscopes were bacteria-free immediately after high-level disinfection. Only 4 assays (of 135) were positive during the subsequent 5-day assessment (skin bacteria cultured from endoscope surfaces). All flush-through samples were sterile. Conclusions: When endoscope reprocessing guidelines are strictly observed and endoscopes are stored in appropriate cabinets for up to 5 days, reprocessing before use may not be necessary. abstract_id: PUBMED:8708362 High-level disinfection or "sterilization" of endoscopes? Controversy has surrounded the use of liquid chemical germicides to reprocess medical instruments that are damaged by heat sterilization. A review of the literature was performed to assess and compare the efficacy of disinfection and sterilization processes. The results of this review demonstrate that high-level disinfection of thoroughly cleaned endoscopes is not associated with a higher infection rate than is "sterilization". While there may be a theoretical distinction between the highest level of disinfection and sterilization, thorough cleaning eliminates clinical differences between the two. High-level disinfection is quick, effective, inexpensive, and recommended whenever heat sterilization is not feasible. A low-temperature sterilization process should be considered only if it is comparable in cost to disinfection or if it offers demonstrated advantages without damaging the instrument. Neither disinfection nor sterilization is likely to be effective if the instrument is not cleaned thoroughly after use. abstract_id: PUBMED:20033708 Use of flexible endoscopes for NOTES: sterilization or high-level disinfection? Background: Natural orifice translumenal endoscopic surgery (NOTES) involves the use of flexible endoscopes to perform intra-abdominal or intra-thoracic surgeries. Surgery in the operating room usually involves sterile instrumentation, whereas in the endoscopy suite high-level disinfection seems to be sufficient. Our objective was to assess the necessity for endoscope sterilization for clinical NOTES and to develop an endoscope processing protocol based on a score for the available processing options. Methods: Score and processing protocol development for clinical NOTES endoscopes was based on a comprehensive review of the available relevant literature. Options for sterilization for flexible endoscopes in the Good Samaritan Hospital, Legacy Health in Portland, Oregon, were analyzed for patient safety, potential for recontamination, cost, and validation. Results: A literature survey indicated that there is controversy surrounding the necessity for sterilization of surgical endoscopes. However, the standard of practice seems to call for sterile instrumentation for surgery, and it is possible to terminally sterilize flexible endoscopes. Within our institution, a score was created to rank the available sterilization options. We successfully introduced a protocol for sterilization of endoscopes for use in clinical NOTES procedures.
The protocol involved mechanical cleaning and high-level disinfection per Multi-Society Guidelines, with subsequent terminal sterilization using a validated peracetic acid protocol. Conclusions: It remains controversial whether sterile instrumentation is truly needed for surgery. It is difficult but possible to terminally sterilize flexible endoscopes. We recommend sterile instrumentation for clinical NOTES until well-designed, randomized, clinical trials are available and guidelines are published. abstract_id: PUBMED:1624823 Contamination of endoscopes and endoscope washers by atypical mycobacteria Contamination of endoscopes and endoscope washers by atypical mycobacteria was studied. Large amounts of atypical mycobacteria were detected with high frequency inside endoscopes and endoscope washers. The species of atypical mycobacteria was Mycobacterium chelonae subsp. abscessus. The antibacterial effects of glutaraldehyde against the isolated atypical mycobacteria were tested. A 2% glutaraldehyde solution did not provide a sufficient antibacterial effect for endoscope sterilization. However, after frequent manual washing and brushing of the endoscopes with 3% glutaraldehyde solution and 70% alcohol, all endoscopic instruments were completely decontaminated. Endoscopes and endoscope washers should be checked for contamination at least once a month. abstract_id: PUBMED:10364023 High-level disinfection of gastrointestinal endoscopes: are current guidelines adequate? Objective: For a germicide to obtain a high-level disinfection (HLD) claim, the FDA requires demonstration of a 6-log reduction of mycobacterial inoculum under worst-case conditions. The purpose of this study was to assess the adequacy of current guidelines for high-level disinfection of GI endoscopes using alkaline glutaraldehyde in simulated-use testing. Methods: Various gastrointestinal endoscopes were contaminated with Mycobacterium chelonae in 46 experiments. Quantitative cultures were obtained from each endoscope channel separately after each step: inoculation, standardized manual cleaning, immersion in 2% glutaraldehyde (Cidex) for 10, 20, or 45 min at room temperature, 70% isopropanol rinse, and drying. Results: Manual cleaning alone achieved a 4-log reduction. After 10 min of glutaraldehyde exposure, but before alcohol rinse, two of 10 experiments failed to achieve a 6-log reduction. However, after alcohol rinse, all 10 experiments achieved HLD. After 20 min of glutaraldehyde exposure, but before alcohol rinse, one of 18 experiments failed to achieve a 6-log reduction. After alcohol rinse, all 18 experiments achieved HLD. After 45 min of glutaraldehyde exposure, but before alcohol rinse, one of 18 experiments failed to achieve a 6-log reduction. After alcohol rinse, all 18 experiments achieved HLD. Thus, if the entire reprocessing protocol including manual cleaning, glutaraldehyde exposure, alcohol rinse, and drying was taken into account, the required 6-log reduction of mycobacteria was achieved with a minimum of 10 min of glutaraldehyde exposure at room temperature. Conclusions: Current guidelines for high-level disinfection using glutaraldehyde are appropriate. Alcohol rinse is a valuable adjunctive step for drying and for its bactericidal effects.
abstract_id: PUBMED:10670140 Regulatory aspects of disinfection of endoscopes The circular on sterilization and the law of July 1, 1998 state clearly that medical devices that tolerate steam sterilization must be sterilized within an organization of sterilization that ensures quality. Endoscopes that enter normally sterile tissues should be subjected to a sterilization procedure before each use; if this is not feasible, they should receive high-level disinfection to destroy bacterial spores. The endoscope must be immersed for at least one hour in an aqueous solution of 2% glutaraldehyde. This lengthened duration of processing must be integrated into the organization of the medical department. Endoscopes that do not penetrate a sterile cavity are disinfected by manual processing according to the protocol of the circular of April 1996, or with an automated endoscope reprocessing machine. The different types of automated machines used to wash and disinfect endoscopes must now meet the design criteria described in the circular of July 15, 1998. Significant work remains to be done in hospitals to bring automated machines and procedures into conformity and to control the risks associated with their use. abstract_id: PUBMED:31241788 Guidelines for standardizing cleansing and disinfection of gastrointestinal endoscopes. As part of the activities toward standardizing endoscopy procedures, the Japan Gastroenterological Endoscopy Society has prepared guidelines for cleansing and disinfection of gastrointestinal endoscopes. The environment of gastrointestinal endoscopy differs between Japan and advanced Western countries. In advanced Western countries, gastrointestinal endoscopy is performed almost exclusively at specialized facilities, where strict provisions are observed for cleansing and disinfecting endoscopes. In Japan, however, gastrointestinal endoscopy is performed even in small clinics, and the annual number of gastrointestinal endoscopy cases is enormous. In addition, the method for cleansing and disinfecting endoscopes differs among hospitals. Although there is a distinct lack of evidence for how gastrointestinal endoscopes are cleaned and disinfected, it is necessary to standardize the method for doing so to advance the field of endoscopic medicine. abstract_id: PUBMED:33098959 'Secondary biofilms' could cause failure of peracetic acid high-level disinfection of endoscopes. Introduction: The reduced susceptibility of biofilms to disinfectants presents a challenge to the successful reprocessing of medical equipment. This study examined the effect of residual biomass remaining after previous disinfection with peracetic acid (PAA) on the tolerance of subsequent mature Pseudomonas aeruginosa biofilms to PAA. The effect of enzymatic degradation of specific components of the extracellular polymeric substance (EPS) of P. aeruginosa biofilm on the effectiveness of PAA disinfection was also evaluated. Methods: The susceptibility of biofilm grown on the biomass of PAA-killed biofilm to PAA was compared with the PAA susceptibility of biofilm grown in wells of a 24-well plate by evaluating their viability using the plate count assay. The effect of PAA on biofilm biomass was measured using crystal violet quantification of total biofilm biomass, while its effect on the polysaccharide and protein components of biofilm EPS was quantified using the phenol-sulphuric acid assay or Bradford assay, respectively.
A confocal microscope was used to visualize the distribution of living and dead cells in biofilms grown on residual biofilm biomass. Findings: The presence of residual biomass from previously disinfected biofilms significantly enhanced the tolerance of subsequent biofilms. A 96-h-old 'secondary biofilm' formed on disinfected biomass survived PAA concentrations of 4000 ppm, which exceeds the concentrations used in practice for high-level disinfection. Conclusion: These observations indicate that, under certain circumstances, recolonization of residual EPS can cause failure of disinfection of medical equipment such as endoscopes, and emphasize the importance of cleaning endoscopes prior to disinfection. Answer: High-level disinfection (HLD) of endoscopes with glutaraldehyde is a topic of concern due to the resistance of mycobacteria to this disinfectant. Studies have shown that mycobacteria species, such as Mycobacterium chelonei, are less sensitive to glutaraldehyde, and a 45-minute instrument soak has been recommended by the manufacturer to ensure adequate disinfection (PUBMED:8726756). However, this extended exposure time raises concerns about potential endoscope damage, the need for more endoscopes, and the perception that current cleaning methods are sufficient. Research indicates that manual cleaning can result in a significant reduction of viable mycobacterial colonies, but mycobacteria may still be present after a 10-minute exposure to alkaline glutaraldehyde. A subsequent alcohol treatment can eliminate significant colony growth, suggesting that alcohol rinse is a valuable adjunctive step in the disinfection process (PUBMED:8726756; PUBMED:10364023). Additionally, a quantitative membrane filter system showed the presence of at least one mycobacterial colony in four out of five scopes after a 45-minute glutaraldehyde exposure, highlighting the need for further studies to determine the adequate duration of disinfectant exposure time (PUBMED:8726756). The literature review suggests that high-level disinfection of thoroughly cleaned endoscopes is not associated with a higher infection rate than sterilization, and that thorough cleaning eliminates clinical differences between the two processes (PUBMED:8708362). However, contamination of endoscopes and endoscope washers by atypical mycobacteria, such as Mycobacterium chelonae subsp. abscessus, has been detected, and sufficient antibacterial effect was not obtained by 2% glutaraldehyde solution for endoscope sterilization. Frequent manual washing and brushing, using 3% glutaraldehyde solution and 70% alcohol, were necessary for complete decontamination (PUBMED:1624823). In conclusion, while high-level disinfection of endoscopes with glutaraldehyde is possible, it requires adherence to strict guidelines, including extended exposure times, thorough manual cleaning, and the use of alcohol rinses to ensure the elimination of mycobacteria. Additional research is recommended to establish standardized protocols for disinfectant exposure times to ensure the safety and efficacy of the disinfection process (PUBMED:8726756).
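The log-reduction figures that recur in these abstracts are simple base-10 ratios of colony counts before and after treatment, which is worth spelling out because even a "failed" 6-log reduction can leave very few survivors. A minimal sketch in Python, using the 10^8 CFU/mL inoculum from PUBMED:8726756; the post-cleaning count is back-calculated from the reported 4.7-log reduction rather than taken from raw data:

import math

n_before = 1e8                             # CFU/mL inoculum (PUBMED:8726756)
log_reduction = 4.7                        # reported effect of manual cleaning alone
n_after = n_before / 10 ** log_reduction   # ~2000 CFU/mL would remain

assert math.isclose(math.log10(n_before / n_after), log_reduction)
print(f"{n_after:.0f} CFU/mL remain after a {log_reduction}-log reduction")

# The FDA high-level disinfection claim requires a 6-log reduction
# (PUBMED:10364023): starting from 10^8 CFU/mL, at most 10^2 CFU/mL
# may survive the full reprocessing protocol.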
Instruction: Orthotopic liver transplantation with vena cava preservation in cirrhotic patients: is systematic temporary portacaval anastomosis a justified procedure? Abstracts: abstract_id: PUBMED:15964366 Orthotopic liver transplantation with vena cava preservation in cirrhotic patients: is systematic temporary portacaval anastomosis a justified procedure? Introduction: We evaluated the peri- and postoperative effects of the lack of a temporary portocaval anastomosis (TPCA) during orthotopic liver transplantation (OLT) in 84 patients with cirrhosis. Patients And Methods: From December 1996 to December 2002, 84 of 156 liver transplant recipients (54%; 60 men and 24 women), with a mean age of 52 +/- 9 years and portal hypertension, were included, and their peri- and postoperative data were analyzed. Results: The median fall in mean arterial pressure upon vascular clamping and unclamping was 20 mm Hg (range 15 to 75), while the median duration of portal vein clamping was 77 minutes. The median amount of blood autotransfusion was 1100 mL (range 0 to 5400). The median numbers of red blood cell and fresh-frozen plasma units transfused were 5 and 6.5, respectively. The median intraoperative urinary output was 72 mL/h (range 11 to 221). Three patients (3.5%) presented with a perioperative complication, but no perioperative death was observed. Six patients experienced an early postoperative complication (<10 days): five hemodynamic complications and one transient renal failure, which did not require hemodialysis. One patient (1%) died at 12 hours after OLT from acute pulmonary edema. Conclusion: This study shows that systematic TPCA during OLT with preservation of the native retrohepatic vena cava in cirrhotic patients does not appear to be justified. Moreover, peri- and postoperative hemodynamic parameters as well as blood component requirements were comparable to those of the literature reporting OLT with straightforward TPCA. abstract_id: PUBMED:10507483 Specific vascular complications of orthotopic liver transplantation with preservation of the retrohepatic vena cava: review of 1361 cases. Objective: The objective of this study was to describe the complications specifically related to orthotopic liver transplantation (OLT) with preservation of the inferior vena cava and to their therapeutic management. This preservation technique has considerably influenced the surgical phases of liver transplantation, increasing hepatectomy time and modifying the number of vascular anastomoses. Methods: Our retrospective multicentric study, based on data from 1361 adult patients who had undergone orthotopic liver transplantation with preservation of the inferior vena cava in France between 1991 and 1997, analyzed the concomitant surgical complications. Type of cavo-caval anastomosis performed (piggyback, end-to-side, or side-to-side), use of a temporary portacaval anastomosis, technique-related complications, and mortality were investigated. Results: Cavo-caval anastomosis was side-to-side in 50.6% of cases (n=689), piggyback in 42.7% (n=582), and end-to-side in 6.6% (n=90). In total, 882 temporary portacaval anastomoses were carried out. Fifty-five patients presented with one or more complications related to the preservation of the inferior vena cava technique; i.e., overall morbidity was 4.1% (55/1361). Overall mortality was 0.7% (10/1361). Mortality rate for patients who presented with a surgical complication was 18%.
A total of 64 complications were recorded: 57 (89%) occurred in the perioperative or immediate postoperative period and 7 (11%) occurred later in the postoperative course. Conclusions: These retrospective, descriptive results show significant advantages in favor of side-to-side anastomosis in terms of vascular complications. Certain factors should be evaluated specifically at pretransplant assessment to prevent certain serious complications; principally, these are anatomic factors of the recipient (inferior vena cava included in segment I, anatomic abnormalities of the inferior vena cava) and graft size. Depending on these factors, surgeons must be able to adapt the procedure, either before or during orthotopic liver transplantation, preferring the standard technique. abstract_id: PUBMED:7733605 Liver transplantation with preservation of the inferior vena cava and "piggyback" reimplantation of the liver A modified technique of orthotopic hepatic transplantation with inferior vena caval preservation and piggy-back procedure is described. This procedure, used in 14 patients, avoided the need for temporary vena cava clamping in nine cases and reduced the need for venous bypass during orthotopic liver transplantation. The indications, results and advantages of this procedure are described. abstract_id: PUBMED:1467143 Liver transplantation without clamping the vena cava inferior An original technique of orthotopic liver transplantation with preservation of the recipient's entire inferior vena cava and side-to-side caval anastomosis is described. The procedure was used in 21 consecutive patients. It made it possible to avoid vena cava occlusion and the need for venous bypass. No adverse effects on caval flow were observed during the anhepatic phase. This technique avoids retrocaval dissection and requires only one caval anastomosis, reducing the duration of the anhepatic phase. abstract_id: PUBMED:7940712 Orthotopic liver transplantation with preservation of the caval and portal flows. Technique and results in 62 cases. Sixty-two OLTs in 61 patients were performed using a technical modification reported recently, including total hepatectomy with preservation of the inferior vena cava, partial clamping of the native vena cava, and side-to-side cavocaval anastomosis. We further modified the technique by adding the early construction of a temporary end-to-side portacaval shunt, and, more recently, by using an end-to-side caval reconstruction. With this technique, the caval and portal flows were maintained throughout the procedure. Hemodynamic parameters were analyzed prospectively during the operative period and remained stable at all stages of the procedure. Venous bypass was avoided in all cases without the need for increased fluid infusion. Operative time and transfusion requirements were 6.8 +/- 1.6 hr and 9.8 +/- 4.3 U of packed RBC, respectively. There were no specific complications or deaths due to the technique used and hospital mortality was 10% (6/61). The technique used in this study is a safe adjunct to the technical armamentarium of clinical liver transplantation. Its main advantage seems to be hemodynamic stability throughout the procedure, obviating the need for venous bypass or fluid overload. abstract_id: PUBMED:7733604 Liver transplantation with preservation of portacaval flow: comparison with the conventional technique A modified technique of liver transplantation was used in 46 adults.
This technique consisted of: a) construction of a temporary end-to-side porto-caval shunt, b) total hepatectomy with preservation of the inferior vena cava, c) closure of the graft inferior vena cava at both ends, d) partial clamping of the recipient's inferior vena cava followed by side-to-side cavocaval anastomosis, e) removal of the temporary porto-caval shunt and completion of the operation according to the conventional technique. The results obtained with this technique were compared to those of 40 transplantations performed with the conventional technique. The use of veno-venous extracorporeal circulation could be avoided in every case with the modified technique, but was necessary in 70% of cases with the conventional technique. A significant reduction of the operating time (7.2 vs 9.3 hours) and transfusional requirements (11.4 vs 20.3 units of packed cells) was observed. The principal characteristic of this technique was the haemodynamic stability of the patients throughout the operation. These results suggest that this technique is a reliable and effective alternative to the use of veno-venous extracorporeal circulation during liver transplantation in adults. abstract_id: PUBMED:7983819 7 cases of liver transplantation with preservation of the vena cava inferior Orthotopic liver transplantation (OLT) with preservation of the inferior vena cava (IVC) was performed in 7 patients during a period of 7 weeks. Over 5 years, 180 OLTs were performed in this center, the last 93 of them (51.6%) with IVC preservation. This method had important advantages over the standard operation: blood loss and operating time were reduced, and the IVC was not clamped. This operation can be used in the majority of cases. abstract_id: PUBMED:11379339 Liver transplantation with vena cava in situ and selective use of temporary portacaval shunt or portal clamping. Background/aims: Recipient hepatectomy with the vena cava in situ in liver transplantation has overcome the need for veno-venous bypass thanks to a temporary portacaval shunt or portal clamping. Methodology: 150 orthotopic liver transplants in 137 patients were performed and the vena cava in situ technique was used in 142 (venous bypass in 7, temporary portacaval shunt in 49, portal clamping in 87). The suprahepatic caval anastomosis was performed with the Belghiti technique in 97 cases and the piggyback technique in 45. Results: There were no differences in operative and warm ischemia times nor in blood requirements, while a greater stability of body temperature was documented in the vena cava in situ group: in the latter, the temporary portacaval shunt preserved temperature better than portal clamping (P < 0.01). In the anhepatic phase, mean arterial pressure decreased in the veno-venous bypass group and increased in the vena cava in situ group (P < 0.01). Venous return and cardiac performance (anhepatic phase) were better preserved in the vena cava in situ group (P < 0.0001). Conclusions: A temporary portacaval shunt or portal clamping and the piggyback or Belghiti techniques allow better hemodynamic stability throughout the procedure, obviating the need for veno-venous bypass or fluid overload, if selectively used. abstract_id: PUBMED:7840394 Temporary portocaval anastomosis with preservation of caval flow during orthotopic liver transplantation. Background: A novel technique of orthotopic liver transplantation was developed whereby both vena caval and portal venous blood flows are preserved during the entire procedure.
Patients And Methods: This method of liver transplantation was successfully performed in 51 consecutive patients and included a temporary portocaval shunt, a total hepatic resection with vena caval preservation, and an end-to-side cavocaval anastomosis. Results: Preservation of the vena cava and portocaval anastomosis were feasible in 51 patients (100%), including 34 patients with cirrhosis and 2 patients undergoing elective retransplantation. Both caval and portal flows were preserved during the entire procedure in 48 patients (94%). No venovenous bypass was required. Four deaths (7.8%) occurred postoperatively, all of them unrelated to the surgical procedure. Conclusion: We believe adoption of this orthotopic liver transplantation technique will obviate the need for venovenous bypass. abstract_id: PUBMED:17692641 Modified liver hanging maneuver during orthotopic liver transplantation with inferior vena cava preservation: results after 120 consecutive applications. The outflow venovenous anastomosis represents a crucial aspect during orthotopic liver transplantation (OLT) with inferior vena cava (IVC) preservation. The modified Belghiti liver hanging maneuver applied to the last phase of hepatectomy, lifting the liver, provides better exposure of the suprahepatic region and allows easier orthogonal clamping of the three suprahepatic veins with a minimal portion of IVC occlusion. The outflow anastomosis constructed with a common cloaca of the three native suprahepatic veins is associated with a lower incidence of graft-related venous outflow complications. The procedure, planned in 120 consecutive OLTs, was achieved in 118 (99%). The outflow anastomosis was constructed on the common cloaca of the three hepatic veins in 111/120 cases (92.5%). No major complications were observed (bleeding during tunnel creation, graft outflow dysfunction, etc.) except in one patient with acute Budd-Chiari, who successfully underwent retransplantation. Answer: The question of whether systematic temporary portacaval anastomosis (TPCA) is justified during orthotopic liver transplantation (OLT) with vena cava preservation in cirrhotic patients has been explored in several studies. One study found that systematic TPCA during OLT with preservation of the native retrohepatic vena cava in cirrhotic patients does not appear to be justified. The peri- and postoperative hemodynamic parameters, as well as blood component requirements, were comparable to those reported in the literature for OLT with straightforward TPCA (PUBMED:15964366). Another study highlighted that specific vascular complications related to OLT with preservation of the inferior vena cava were relatively low, with an overall morbidity of 4.1% and mortality of 0.7%. The study suggested that side-to-side anastomosis had significant advantages in terms of vascular complications and that certain anatomic factors should be evaluated pretransplant to prevent serious complications (PUBMED:10507483). Additional studies have described various techniques of OLT with vena cava preservation that avoid the need for temporary vena cava clamping or venous bypass, suggesting that these methods can provide hemodynamic stability throughout the procedure (PUBMED:7733605, PUBMED:1467143, PUBMED:7940712, PUBMED:7733604, PUBMED:7983819).
One study reported that the use of a temporary portacaval shunt or portal clamping, along with piggyback or Belghiti techniques, allowed for better hemodynamic stability throughout the procedure, eliminating the need for veno-venous bypass or fluid overload if selectively used (PUBMED:11379339). Another study described a technique that preserved both vena caval and portal venous blood flows during the entire procedure, which was feasible in all patients, including those with cirrhosis and those undergoing elective retransplantation. This technique did not require venovenous bypass and had a low postoperative mortality rate unrelated to the surgical procedure (PUBMED:7840394). Lastly, a modified liver hanging maneuver during OLT with IVC preservation was reported to provide better exposure of the suprahepatic region and easier construction of the outflow anastomosis, with a low incidence of graft-related venous outflow complications (PUBMED:17692641).
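The headline rates in the multicentre series (PUBMED:10507483) are worth decomposing, because the 18% figure is a case-fatality rate conditional on having a complication, not a population rate. A minimal Python sketch using only the counts stated in that abstract:

patients = 1361       # adults transplanted with IVC preservation
complicated = 55      # patients with technique-related complications
deaths = 10           # technique-related deaths

morbidity = complicated / patients     # 0.040 -> reported as 4.1%
mortality = deaths / patients          # 0.007 -> reported as 0.7%
case_fatality = deaths / complicated   # 0.182 -> the "18%" mortality among
                                       # patients with a surgical complication
print(f"{morbidity:.1%}, {mortality:.1%}, {case_fatality:.0%}")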
Instruction: Does the use of radiofrequency ultrasonic dissector in tonsillectomy have a beneficial effect over the use of laser? Abstracts: abstract_id: PUBMED:19082232 Does the use of radiofrequency ultrasonic dissector in tonsillectomy have a beneficial effect over the use of laser? Objective: To compare the intra- and postoperative morbidities of the radiofrequency ultrasonic dissector (US) with those of the laser during tonsillectomy in the same patients. Methods: A randomized comparative study was implemented in which all patients who underwent tonsillectomy at Al-Hada Armed Forces Hospital, Taif, and Aouhod Hospital, Al-Madinah, Kingdom of Saudi Arabia during the period January 2000 to December 2005 were recruited. Radiofrequency US was used on one side, and laser was used on the other side of the same patient. Intra- and postoperative blood loss, duration of the operation, and postoperative pain score were recorded. Results: Five hundred and twenty-eight patients were recruited for the study. The mean operation duration was significantly shorter in the US group as compared to the laser group (p<0.01). Intraoperative blood loss was significantly less in the US group (p<0.001). Postoperative pain score was significantly lower among US subjects (p<0.001). No significant difference in postoperative hemorrhage was detected between the compared groups. Conclusion: The use of US in tonsillectomy showed a beneficial effect on intraoperative blood loss, duration of the operation, and postoperative pain compared with the use of laser. abstract_id: PUBMED:6542962 Tonsillectomy with a laser dissector. Preliminary report A so-called laser dissector with a flexible beam guide system for tonsillectomy is described. With this instrument the tissue is irradiated for a very short time, resulting in coagulation of the peritonsillar tissue with closure of the small vessels even before mechanical dissection is performed. A preliminary report of 10 patients is given. The main advantage is that the procedure could be performed with little or no bleeding compared with conventional tonsillectomy, without using vasoconstrictors. abstract_id: PUBMED:3662798 Techniques of liver resection. Comparison of suction knife, ultrasonic dissector, and contact neodymium-YAG laser. Liver resections were performed in 18 pigs with an inexpensive disposable plastic suction knife, an ultrasonic dissector, or a contact neodymium (Nd)-YAG laser. Technical aspects and intraoperative and postoperative data were compared. Intraoperative blood loss was less with the suction knife (112 +/- 28 mL) than with the ultrasonic dissector (149 +/- 45 mL) or Nd-YAG laser (174 +/- 25 mL). Operating time was similar in all groups. The number of ligatures used in the Nd-YAG laser group (12 +/- 1) was significantly less than in the ultrasonic dissector (27 +/- 2) or suction knife (32 +/- 2) groups. In the ultrasonic dissector group, there was an increase in postoperative white blood cell count and liver enzyme levels compared with the other two groups. Light microscopy revealed dilated bile ducts in the ultrasonic dissector resection group, which may reflect biliary stasis. There were no significant differences in mortality among the three experimental groups. Results indicated that the ultrasonic dissector and the contact laser method were not substantially better than an inexpensive, easily modified plastic suction catheter in performing a major nonanatomic liver resection in piglets.
abstract_id: PUBMED:11770142 Ultrasonic tonsillectomy. High-frequency ultrasound is an effective mechanism for cutting and coagulating tissue. It is widely used in laparoscopic surgery and may have advantages in tonsillectomy. Twenty-five consecutive cases of tonsillectomy were performed using an ultrasonic scalpel; these were compared with a similar previous consecutive group performed by conventional cold dissection and monopolar electrocautery. There was no undue primary bleeding in either group and no immediate or late postoperative bleeding. Since adenoidectomy was often performed and the total blood loss in each group was so small, no statistical difference could be determined between the groups. No complications were noted in either group. The ultrasonic scalpel has the potential to produce a bloodless tonsillectomy and less collateral tissue damage and is easy to use. There may be less postoperative pain, although this could not be shown statistically in this retrospective study. Ultrasonic tonsillectomy appears to be a safe, and possibly a superior, method of performing this common operation. abstract_id: PUBMED:12737346 Application of lingual tonsillectomy to sleep apnea syndrome involving lingual tonsils. In sleep apnea syndrome, surgical treatment is applied in obstructive-type cases and some mixed-type cases. If the obstructive part is in the root of the tongue, forward transfer of the tongue, lingual tonsillectomy and laser midline glossectomy are applied. In this study, we demonstrate the surgical technique of lingual tonsillectomy using an ultrasonic coagulating dissector (SonoSurg) with a blade tip shape developed in our department. We conclude that lingual tonsillectomy using SonoSurg, which we have frequently used, should be the first choice of treatment for snoring and sleep apnea caused by hypertrophy of the lingual tonsils from the viewpoints of effectiveness, prevention of hemorrhage, safety and handling. abstract_id: PUBMED:1417485 The ultrasonic dissector facilitates laparoscopic cholecystectomy. The ultrasonic dissector disrupts tissues in proportion to their fluid content by ultrasonically induced cavitational forces. Since sturdy tissues are spared, the instrument tends to follow tissue planes and to dissect fat and other soft tissues selectively. We performed a prospective, randomized, controlled trial in 73 patients comparing the safety and efficacy of a prototype ultrasonic dissector with that of electrosurgery and laser during laparoscopic cholecystectomy. Randomization was as follows: ultrasonic dissector, 37 patients; electrosurgery, 21 patients; and laser, 15 patients. The results were not different with respect to patient characteristics, amount of blood loss, technical difficulties, length of hospital stay, or return to work. Subjectively, the ultrasonic dissector was thought to be of special value in isolating the hilar structures, particularly when they were edematous or embedded in fat. The ultrasonic dissector disintegrated the fat, which was rapidly cleared up the suction channel, allowing the cystic duct and artery to be bared with less risk of injury. We concluded that the ultrasonic dissector has unique attributes that contribute to the ease and safety of laparoscopic cholecystectomy. abstract_id: PUBMED:10149911 A new instrument for use with Nd:YAG in tonsillectomy. In tonsillectomy, unexpected bleeding can be one of the most troublesome complications to manage.
To resolve some inherent problems, we have developed a new elevator with a contact Nd:YAG laser probe for blunt dissection around the tonsillar capsule. Of 63 cases of laser tonsillectomy performed over 5 years, 51 cases were performed using a conventional contact Nd:YAG laser probe and a chisel-type dissector. Tonsillectomy in the remaining 12 cases was undertaken with the newly developed elevator with a contact Nd:YAG laser probe. We compared clinical outcomes between the laser and conventional procedures. Additionally, the advantages of laser tonsillectomy are discussed. abstract_id: PUBMED:22992271 Six years of evidence-based adult dissection tonsillectomy with ultrasonic scalpel, bipolar electrocautery, bipolar radiofrequency or 'cold steel' dissection. Objective: To conduct an adequately powered, prospective, randomised, controlled trial comparing adult dissection tonsillectomy using either ultrasonic scalpel, bipolar electrocautery, bipolar radiofrequency or 'cold steel' dissection. Methods: Three hundred patients were randomised into four tonsillectomy technique groups. The operative time, intra-operative bleeding, post-operative pain, tonsillar fossa healing, return to full diet, return to work and post-operative complications were recorded. Results: The bipolar radiofrequency group had a shorter mean operative time. The mean intra-operative blood loss during bipolar radiofrequency tonsillectomy was significantly less compared with cold dissection and ultrasonic scalpel tonsillectomy. Pain scores were significantly higher after bipolar electrocautery tonsillectomy. Patients undergoing bipolar electrocautery tonsillectomy required significantly more days to return to full diet and work. The bipolar electrocautery group showed significantly reduced tonsillar fossa healing during the first and second post-operative weeks. Conclusion: In this adult series, bipolar radiofrequency tonsillectomy was superior to ultrasonic, bipolar electrocautery and cold dissection tonsillectomies. This method combines the advantages of 'hot' and 'cold' tonsillectomy. abstract_id: PUBMED:17521473 Pilot comparison between potassium titanyl phosphate laser and bipolar radiofrequency in paediatric tonsillectomy. Objectives: To compare the advantages and disadvantages of potassium titanyl phosphate laser with those of bipolar radiofrequency techniques, in paediatric tonsillectomy. Study Design: Prospective, randomised, clinical study. Patients And Methods: From July 2004 to April 2006, 80 patients aged between 10 and 15 years, with tonsillectomy planned for chronic tonsillitis, were included in the study. Children were prospectively randomised into two equal groups: potassium titanyl phosphate laser tonsillectomy and bipolar radiofrequency tonsillectomy. Operative time and intra-operative blood loss were recorded. Patients were scheduled for follow up during the first, second and fourth post-operative weeks. They were asked to record their pain and discomfort on a standardised visual analogue scale, from zero (no pain) to 10 (severe pain). Post-operative complications were also recorded and managed. Results: The potassium titanyl phosphate laser group showed a slightly longer operative time (mean 12 minutes) than the bipolar radiofrequency group (mean 10 minutes). Intra-operative blood loss was significantly less in the potassium titanyl phosphate laser group (mean 21 cm3) than in the bipolar radiofrequency group (mean 30 cm3).
In the first week, post-operative pain scores were lower in the potassium titanyl phosphate laser group than in the bipolar radiofrequency group (means 7.5 and 8.5, respectively). However, in the second week, pain scores increased more in the potassium titanyl phosphate laser group than in the bipolar radiofrequency group (means 8.5 and 6, respectively). In the fourth week, both groups showed equal and nearly normal pain scores. No case of reactionary post-tonsillectomy haemorrhage was recorded in either group. Only one case of secondary post-tonsillectomy haemorrhage was recorded, in the potassium titanyl phosphate laser group (2.5 per cent), managed conservatively. Conclusion: Both the potassium titanyl phosphate and the bipolar radiofrequency techniques were safe and easy to use for tonsillectomy, with reduced operative time, blood loss and complication rates and better post-operative general patient condition. Potassium titanyl phosphate laser resulted in reduced operative bleeding and immediate post-operative pain, compared with the bipolar radiofrequency technique. However, potassium titanyl phosphate laser required slightly more operative time and caused more late post-operative pain than the bipolar radiofrequency technique. The low rate of recorded complications showed that both techniques cause little damage to the tonsillar bed during dissection, thus minimising complications. abstract_id: PUBMED:36742542 Review of Radiofrequency Ablation in Tonsillectomy. The most common disease involving the tonsils among children and adults is chronic tonsillitis. There are different surgical techniques described in the literature for the treatment of diseases involving the tonsils. Of these, the most routinely performed and the traditional method is conventional dissection tonsillectomy. The main disadvantages of this method are bleeding and postoperative pain. Radiofrequency ablation has been found to overcome these limitations owing to its mechanism of action. Both monopolar and bipolar radiofrequency ablation can be used in surgical management. The effectiveness of monopolar radiofrequency has been proven in different fields of medicine. Hence, new studies can be directed at comparing monopolar radiofrequency ablation with other techniques of tonsillectomy. Answer: Yes, the use of a radiofrequency ultrasonic dissector in tonsillectomy has been shown to have a beneficial effect over the use of laser. A study comparing intra- and post-operative morbidities between radiofrequency ultrasonic dissector (US) and laser during tonsillectomy found that the mean operation duration was significantly shorter in the US group compared to the laser group. Additionally, intraoperative blood loss was significantly less in the US group, and post-operative pain score was significantly lower among US subjects. There was no significant difference in postoperative hemorrhage between the two groups, indicating that the US is at least as safe as the laser in this regard (PUBMED:19082232). While laser tonsillectomy has been reported to result in less bleeding compared to traditional methods without using vasoconstrictors (PUBMED:6542962), the comparison with the radiofrequency ultrasonic dissector suggests that the latter may offer superior outcomes in terms of operation time, blood loss, and postoperative pain.
Other studies have also highlighted the potential benefits of ultrasonic tonsillectomy, such as producing a bloodless surgical field and less collateral tissue damage, which may lead to less postoperative pain, although this was not statistically shown in a retrospective study (PUBMED:11770142). Additionally, the use of an ultrasonic coagulating dissector has been recommended as the first choice of treatment for snoring and sleep apnea caused by hypertrophy of the lingual tonsils due to its effectiveness, prevention of hemorrhage, safety, and ease of handling (PUBMED:12737346). In summary, the evidence suggests that the radiofrequency ultrasonic dissector in tonsillectomy has beneficial effects over the laser, particularly in reducing operation time, intraoperative blood loss, and postoperative pain.
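The strength of the PUBMED:19082232 comparison comes from its within-patient design (US on one side, laser on the other), so each patient acts as their own control and the analysis operates on side-to-side differences. A minimal Python sketch of such a paired analysis; the pain scores below are invented purely for illustration:

import statistics

us_side = [4, 5, 3, 6, 4]        # hypothetical pain scores, US side
laser_side = [6, 7, 5, 7, 6]     # hypothetical pain scores, laser side

# Working on per-patient differences cancels between-patient variation
# (pain tolerance, age, surgeon), which is why paired designs are powerful.
diffs = [l - u for u, l in zip(us_side, laser_side)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
t_stat = mean_diff / (sd_diff / len(diffs) ** 0.5)   # paired t statistic
print(f"mean side-to-side difference {mean_diff:.1f}, t = {t_stat:.2f}")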
Instruction: On-pump beating heart coronary revascularization: Is it valid for emergency revascularization? Abstracts: abstract_id: PUBMED:26503012 How revascularization on the beating heart with cardiopulmonary bypass compares to off-pump? A meta-analysis of observational studies. Off-pump coronary artery bypass surgery has been a controversial area of debate and the outcome profile of the technique has been thoroughly investigated. Scepticism regarding the reported outcomes and the conduct of the randomized trials comparing this technique with conventional on-pump coronary artery bypass surgery has been widely voiced, and the technique of off-pump surgery remains an infrequently adopted approach to myocardial revascularization worldwide. Criticisms of the technique are related to lower rates of complete revascularization and its unknown long-term consequences, the significant detrimental effects on mortality and major adverse events when emergency conversion is required, and the significant lack of long-term survival and morbidity data. The hybrid technique of myocardial revascularization on the beating heart with the use of cardiopulmonary bypass may theoretically provide the beneficial effects of off-pump surgery in terms of myocardial protection and organ protection, while providing the safety and stability of on-pump surgery to allow complete revascularization. Large randomized comparisons to support evidence-based choices are currently lacking. In this article, we have meta-analysed the outcomes of on-pump beating heart surgery in comparison with off-pump surgery, focusing on major adverse cardiovascular and cerebrovascular events (MACCE) including mortality, stroke and myocardial infarction, and on the degree of revascularization and number of bypass grafts performed. It was demonstrated that the beating heart on-pump technique allows a significantly higher number of bypass grafts to be performed, resulting in a significantly higher degree of revascularization. We have also demonstrated a slightly higher rate of 30-day mortality and MACCE with the technique, although not at a statistically significant level. These results should be considered alongside the population risk profile, where a significantly higher risk cohort had undergone the beating heart on-pump technique. Long-term survival and morbidity figures are required to assess the impact of these findings in the coronary surgery patient population. abstract_id: PUBMED:33061051 Intermittent on-pump beating-heart coronary artery bypass grafting-a safer option. Purpose: On-pump beating-heart coronary artery bypass grafting represents a merger of standard on- and off-pump techniques and is thought to benefit patients by coupling the absence of cardioplegic arrest (seen in conventional coronary surgery) with the absence of the hemodynamic instability during manipulation seen in off-pump surgery. However, the clinical benefits are still under discussion. We improvised on the standard on-pump beating-heart surgeries by introducing the use of "intermittent" bypass as and when required. Methods: This study involved 108 patients. "Intermittent" on-pump beating-heart coronary artery bypass grafting was done using a suction stabilizer, with aortic and venous cannulae placed electively in all patients (group 1), who were supported by the pump intermittently (n = 54). Retrospective data of patients who underwent off-pump surgery electively by the same surgeon (group 2, n = 54) were collected.
Results: There was a significant advantage in the number of grafts performed for the lateral surface (circumflex branches) using the new technique compared to the conventional technique (68 vs 22). Similarly, a significant advantage was also noted in terms of the total number of grafts, along with shorter operating times. There were no mortalities in the new group compared to the off-pump group, and blood loss was also lower. Conclusions: "Intermittent" on-pump coronary revascularization is a technically reliable method of coronary revascularization taking advantage of the off-pump and conventional on-pump techniques while largely eliminating the disadvantages of both. It has shown its superiority in safety, number of grafts, blood loss, operating time and perioperative course. abstract_id: PUBMED:26336019 On-pump beating heart coronary revascularization: Is it valid for emergency revascularization? Background: On-pump beating heart coronary artery bypass grafting (CABG) may be considered an alternative to conventional on-pump surgery in patients presenting with acute coronary syndrome requiring emergency revascularization. This study reports our clinical experience and early outcomes with on-pump beating heart coronary surgery in patients with acute coronary syndrome. Design And Settings: A retrospective study conducted from August 2009 to October 2015, in a regional training and research hospital in Turkey. Methods: A total of 1432 patients underwent isolated CABG at our institution. A total of 316 of these patients underwent the on-pump beating heart procedure without cardioplegic arrest by the same surgeon. Results: The time interval from the onset of acute myocardial infarction to CABG was 10 (2.2) hours. The mean number of grafts was 3.0 (0.6). Hospital mortality was 2.9% (9 patients). Twelve patients had low cardiac output syndrome after surgery. Eight of them had renal dysfunction but none of them needed hemodialysis. The mean intensive care unit stay was 3 (2) days and the mean hospital length of stay was 7 (4) days. Conclusion: We think that the on-pump beating heart revascularization technique can be a good choice for emergency CABG of high-risk patients with multivessel coronary artery disease. abstract_id: PUBMED:25583646 Off-pump or on-pump beating heart: which technique offers better outcomes following coronary revascularization? A best evidence topic was written according to a structured protocol. The question addressed was whether on-pump beating heart coronary artery bypass (BH-ONCAB) surgery has a different outcome profile in comparison to off-pump coronary artery bypass (OPCAB). A total of 205 papers were found by systematic search, of which 7 provided the largest and most recent outcome analysis comparing BH-ONCAB with OPCAB, and represented the best evidence to answer the clinical question. The authors, date, journal, study type, population, main outcome measures and results were tabulated. Reported outcome measures included mortality, stroke, myocardial infarction, renal failure, myocardial damage, change in ejection fraction, number of bypass grafts and completeness of revascularization. With the exception of one study that favoured the off-pump technique, our review did not demonstrate a statistically significant difference in terms of mortality between the groups. We did not identify a statistically significant difference in any reported morbidity outcomes.
However, there was a trend towards better outcomes for the on-pump beating heart technique, despite a higher risk profile in terms of age, ejection fraction and burden of coronary disease in this group. Consistent statistically significant differences between the groups were the mean number of grafts performed and the completeness of revascularization, both of which were higher with the on-pump beating heart technique. Limitations to the current evidence include the finding that most of the current data arise from specialist off-pump surgeons or centres that would usually only carry out BH-ONCAB in the higher risk patients where the added safety of cardiopulmonary bypass is desired. abstract_id: PUBMED:34251708 Emergency on-pump beating heart reconstructive coronary artery bypass grafting for entrapped percutaneous coronary intervention equipment. Background: Equipment entrapment during percutaneous coronary intervention (PCI) may lead to acute ischemia necessitating emergency surgery. Case Presentation: This is the first case report where emergency surgery had to be performed on the beating heart, for removal of retained PCI equipment, due to an incidental finding of a severely atheromatous aorta precluding cross-clamping. Ultrasound-guided aortic cannulation and an off-pump strategy made the complex reconstruction of the left anterior descending artery possible. Conclusions: PCI equipment entrapment and subsequent myocardial ischemia, with or without hemodynamic compromise, necessitates emergency surgery and should involve an early discussion with a cardiothoracic team. Each case poses different challenges and requires surgical planning to devise an individualized management strategy. An intraoperative finding of an atheromatous aorta may be managed with pump-assisted beating heart surgery and a clampless technique to achieve satisfactory results. abstract_id: PUBMED:38178340 On Pump Beating Heart Coronary Artery Surgery in Patients Requiring Urgent Revascularization. Background: When acute coronary syndrome patients necessitate immediate revascularization, on-pump beating-heart coronary artery bypass grafting may be regarded as a viable substitute for conventional on-pump surgery. Our clinical experience and initial outcomes of on-pump beating-heart coronary surgery in patients with acute coronary syndrome are detailed in this article. Method: This was a retrospective analysis spanning March 2011 to August 2023. The sample comprised 2816 patients who had undergone coronary artery surgery. During this period, the same surgical team performed coronary artery bypass surgery on 411 of these patients, who underwent beating heart surgery while on cardiopulmonary bypass support and without cardioplegic arrest; this was done under emergency conditions. Result: 9.3 ± 2.2 hours elapsed between the onset of acute myocardial infarction and the commencement of coronary artery bypass grafting (CABG). A mean of 4.0 grafts (2.2 ± 1.1) was applied. In-hospital mortality occurred in sixteen patients. Following surgery, twenty-six patients developed low cardiac output syndrome. Eight patients had renal dysfunction, but none needed hemodialysis. The mean duration of stay in intensive care was 3.2 (2.2 ± 1.1) days, while the mean length of hospitalization was 9.2 (4.3 ± 2.4) days.
Conclusion: We believe that for high-risk patients with multivessel coronary artery disease who require an emergency coronary artery bypass graft, on-pump beating heart revascularization could be a viable option. abstract_id: PUBMED:28231841 Early clinical outcomes of on-pump beating-heart versus off-pump technique for surgical revascularization in patients with severe left ventricular dysfunction: the experience of a single center. Background: Limited experiences of applying an on-pump beating-heart technique for surgical revascularization in patients with severe left ventricular dysfunction have been reported. Which strategy, either off-pump coronary artery bypass grafting (CABG) or on-pump beating-heart CABG surgery, is best for surgical revascularization in patients with severe left ventricular dysfunction is still controversial. This single-center study aimed to evaluate the impacts of an on-pump beating-heart versus an off-pump technique for surgical revascularization on the early clinical outcomes in patients with a left ventricular ejection fraction (LVEF) of 35% or less to explore which technique would be more suitable for surgical revascularization in patients with severe left ventricular dysfunction. Methods: A total of 216 consecutive patients with an echocardiographic estimated LVEF of 35% or less who underwent non-emergency, primary, isolated CABG from January 2010 to December 2014 were included in this study and were divided into either an ONBEAT group (patients who received on-pump beating-heart CABG surgery, n = 88) or an OFF group (patients who received off-pump CABG surgery, n = 128). The early clinical outcomes were investigated and compared. Results: Patients in the ONBEAT group compared to the OFF group had a significantly higher early postoperative LVEF (35.6 ± 2.9 vs. 34.8 ± 3.3%, p = 0.034) but shared a similar baseline LVEF (31.0 ± 2.8 vs. 31.0 ± 2.9%, p = 0.930). Patients in the ONBEAT group compared to the OFF group received a greater number of grafts and an increased amount of drainage during the first 24 h (3.7 ± 0.8 vs. 2.8 ± 0.6, p < 0.001; 715 ± 187 ml vs. 520 ± 148 ml, p < 0.001, respectively), without evidence of worse in-hospital mortality or major postoperative morbidity. Additionally, logistic regression analysis showed that surgical technique (on-pump beating-heart CABG vs. off-pump CABG) had no independent influence on in-hospital mortality or major postoperative morbidity in patients with preoperative LVEF of 35% or less. Conclusions: The on-pump beating-heart technique may be an acceptable alternative to the off-pump technique for surgical revascularization in patients with an estimated LVEF of 35% or less. abstract_id: PUBMED:37162439 On-pump beating-heart technique for managing a ruptured left coronary artery aneurysm. We present the case of an 86-year-old woman with a ruptured left coronary artery aneurysm associated with a coronary-pulmonary artery fistula. She was transferred to our hospital with complaints of chest discomfort. Echocardiography revealed cardiac tamponade, and subsequently, a bloody pericardial effusion was drained. Coronary angiography and computed tomography revealed a ruptured left coronary artery aneurysm originating from a branch of the proximal left anterior descending artery and a coronary-pulmonary artery fistula. In the emergency operation, under cardiopulmonary bypass, the inlet and outlet vessels were ligated, and the aneurysm was closed without arresting the heart.
By utilizing cardiopulmonary bypass, the pulmonary trunk was easily decompressed, and dissection around the aneurysm and identification of the inlet and outlet vessels were facilitated without cardiac arrest. Thus, an on-pump beating-heart technique should be considered in such cases. abstract_id: PUBMED:34422347 Beating-heart on-pump coronary artery bypass grafting vs. off-pump coronary artery bypass grafting: a systematic review and meta-analysis. Background: Beating-heart on-pump coronary artery bypass grafting (CABG), otherwise known as BH-ONCAB, can reduce myocardial injury by preserving native coronary blood flow while maintaining hemodynamic stability through the effective support of cardiopulmonary bypass (CPB). This study aimed to identify whether BH-ONCAB confers a survival, mortality, or morbidity benefit over off-pump CABG (OPCAB). Methods: A systematic literature review identified 18 studies incorporating 5,615 patients (1,548 BH-ONCAB and 4,067 OPCAB cases) who satisfied the inclusion criteria. Outcome measures were meta-analyzed using random-effects modeling. Between-study heterogeneity was investigated through quality assessment and risk of bias analysis. Results: The results demonstrated comparable early mortality and long-term survival between BH-ONCAB and OPCAB coronary revascularization, with no statistically significant differences. The incidences of stroke, renal failure, blood loss, and arrhythmias were significantly higher in patients who underwent BH-ONCAB than in patients who underwent OPCAB. However, BH-ONCAB conferred lower rates of incomplete revascularization and greater numbers of distal anastomoses. Conclusions: BH-ONCAB is a safe and comparable alternative to OPCAB in terms of early mortality and late survival. BH-ONCAB may confer particular advantages in preventing incomplete revascularization and allowing more distal anastomoses compared to OPCAB. However, BH-ONCAB was associated with more postoperative complications due to the use of CPB. Future work should focus on larger matched studies and multicenter randomized controlled trials to optimize our surgical revascularization strategies. abstract_id: PUBMED:34776662 On-pump beating heart versus off-pump myocardial revascularization-a propensity-matched comparison. Objective: On-pump beating heart (OP-BH) coronary artery bypass grafting (CABG) is often undertaken as a middle-ground alternative between off-pump coronary artery bypass (OPCAB) and conventional on-pump CABG, especially in India. However, outcome data following OP-BH surgery are sparse. The aim of this study was to compare the outcomes of OP-BH CABG with OPCAB. Methods: From our institutional database, all patients undergoing OP-BH CABG (n = 531) were identified. A propensity-matched cohort undergoing OPCAB (n = 531) was identified from the database. A nearest neighbor matching technique was used, and the groups were matched for variables including age, gender, body mass index, EuroSCORE, history of recent myocardial infarction or unstable angina, hypertension, peripheral vascular disease, chronic obstructive airway disease, diabetes, pre-op renal impairment, pre-op neurological events, and left ventricular function. Results: The propensity-matched groups were well matched in terms of baseline characteristics. The mean EuroSCORE was 3.17 and 3.20 in the OP-BH and the OPCAB groups, respectively.
The unadjusted 30-day mortality in the propensity-matched OPCAB group was 2.07% (11/531), while mortality in the on-pump beating heart group was significantly higher at 6.9% (37/531). Multivariate analysis showed that OP-BH CABG was an independent risk factor for 30-day mortality as well as for major adverse post-operative outcomes, including renal, neurological, and respiratory outcomes and post-operative atrial fibrillation. Conclusions: OP-BH CABG is associated with worse clinical outcomes compared with OPCAB. Answer: On-pump beating heart coronary revascularization is considered a valid option for emergency revascularization, particularly in high-risk patients with multivessel coronary artery disease requiring immediate intervention. The technique combines the benefits of both on-pump and off-pump surgeries, aiming to reduce myocardial injury by preserving native coronary blood flow while maintaining hemodynamic stability with the support of cardiopulmonary bypass (CPB) (PUBMED:26336019; PUBMED:38178340). Clinical experiences and early outcomes reported in the literature suggest that on-pump beating heart coronary surgery can be a good choice for emergency coronary artery bypass grafting (CABG) in patients presenting with acute coronary syndrome. Studies have shown that this technique can be performed with a reasonable number of grafts and without the need for cardioplegic arrest, resulting in acceptable hospital mortality rates and lengths of stay in intensive care and hospital (PUBMED:26336019; PUBMED:38178340). Moreover, in cases of emergency surgery required for complications such as equipment entrapment during percutaneous coronary intervention (PCI) or ruptured coronary artery aneurysms, the on-pump beating heart technique has been successfully utilized, demonstrating its versatility and effectiveness in managing complex and urgent cardiac conditions (PUBMED:34251708; PUBMED:37162439). While the on-pump beating heart approach is a viable option for emergency revascularization, it is important to note that it may be associated with higher rates of postoperative complications compared to off-pump CABG due to the use of CPB. However, it may offer advantages in preventing incomplete revascularization and allowing more distal anastomoses (PUBMED:34422347; PUBMED:34776662). In summary, on-pump beating heart coronary revascularization is a valid and potentially beneficial technique for emergency revascularization, especially in high-risk patients or complex emergency situations. However, careful consideration of the risks and benefits is necessary, and further research may be needed to optimize surgical revascularization strategies.
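To make the matched design in PUBMED:34776662 concrete, the sketch below shows 1:1 nearest-neighbour matching on an estimated propensity score. Everything here (covariates, data, matching with replacement) is an invented illustration of the general technique, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical covariates (e.g. age, BMI, EuroSCORE), one row per patient.
X = rng.normal(size=(1062, 3))
treated = rng.integers(0, 2, size=1062).astype(bool)  # True = OP-BH, False = OPCAB

# 1) Propensity score: estimated probability of receiving OP-BH given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) For each OP-BH patient, take the OPCAB patient with the closest score.
controls = np.flatnonzero(~treated)
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = controls[idx.ravel()]  # matching with replacement, for brevity

print("treated:", int(treated.sum()), "matched controls:", len(matched_controls))
```

After matching, balance is normally verified by comparing covariate distributions between the matched arms, which is what the abstract means when it says the groups "were well matched in terms of baseline characteristics."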
Instruction: Retinal vascular calibre is altered in patients with rheumatoid arthritis: a biomarker of disease activity and cardiovascular risk? Abstracts: abstract_id: PUBMED:21172929 Retinal vascular calibre is altered in patients with rheumatoid arthritis: a biomarker of disease activity and cardiovascular risk? Objectives: Alterations in retinal vascular calibre, particularly wider venular calibre, have been independently associated with elevated markers of inflammation and cardiovascular risk in the general population. We hypothesized that retinal vascular calibre would be altered in patients with RA, who are known to have both elevated cardiovascular risk and chronic, systemic inflammation. Methods: Retinal vascular calibre was measured from digital retinal photographs using computerized methods in 51 RA patients and 51 age- and gender-matched controls. Retinal vascular calibre was compared between RA and control patients with adjustment for relevant variables including cardiovascular risk factors and companion vessel calibre. The relationship between retinal venular calibre and inflammation was assessed by comparing controls and RA patients with high and lower disease activity. Results: Retinal venular calibre [mean (s.d.)] was significantly wider in RA patients than in controls [235.9 (24.6) vs. 211.6 (21.0) µm, P < 0.001]. After adjustment for all relevant variables, mean venular calibre remained 20.3 µm (95% CI 10.4, 30.3) wider in RA patients compared with controls. Retinal venular calibre [mean (s.d.)] also increased with increasing levels of systemic inflammation: 211.6 (21.0) µm in controls, 232.3 (22.4) µm in RA patients with moderate or lower disease activity and 255.5 (28.3) µm in RA patients with high disease activity (P for trend < 0.0001). Conclusions: This study demonstrates that RA patients have dilated retinal venular calibre, reflecting systemic inflammation and possibly increased cardiovascular risk. Longitudinal studies correlating retinal vascular calibre with subsequent cardiovascular events will clarify the clinical utility of this test in patients with RA. abstract_id: PUBMED:26338663 Suppression of inflammatory disease activity in rheumatoid arthritis is associated with improvements in retinal microvascular health. Objective: To investigate the effect of suppressing inflammation on retinal microvascular health in patients with RA. Methods: Two groups of patients with RA were recruited and studied concurrently. Group A included patients with moderate to high disease activity [28-joint DAS with CRP (DAS28-CRP) >3.2] requiring treatment escalation, while group B had stable low disease activity (DAS28-CRP ≤3.2) not requiring treatment escalation. Retinal photography was performed at baseline and weeks 6 and 24 in group A and at baseline and week 12 in group B. Results: Group A included 26 patients with a mean age of 50.7 years (s.d. 3.5) and a mean disease duration of 7.1 years (s.d. 8.0). Disease activity significantly improved during follow-up and was accompanied by a significant reduction in retinal venular calibre at week 6 [mean difference (MD) -7.9 μm (95% CI -13.3, -2.5)] and at week 24 [MD -6.8 μm (95% CI -12.2, -1.4)]. No significant change in retinal arteriolar calibre was identified at week 6 [MD -0.6 μm (95% CI -4.5, 3.28)] or week 24 [MD 0.7 μm (95% CI -3.1, 4.5)]. Group B included 27 patients with a mean age of 54.6 years (s.d. 1.8) and a mean disease duration of 14.5 years (s.d. 10.9).
Disease activity and therapy remained unchanged during follow-up and no significant changes in retinal venular [MD 1.81 μm (95% CI -2.32, 5.95)] or arteriolar [MD 0.54 μm (95% CI -2.77, 3.86)] calibre were observed. Conclusion: We demonstrated that suppression of inflammation in RA is associated with a reduction of retinal venular calibre, suggesting that therapies targeting inflammation could improve vascular health in RA. abstract_id: PUBMED:23654356 Retinal venular calibre is increased in patients with autoimmune rheumatic disease: a case-control study. Aim: To examine retinal vessel calibre in autoimmune rheumatic disease. Materials And Methods: Patients with autoimmune rheumatic disease were recruited from a Rheumatology clinic. Retinal vessel calibre was measured from fundus photographs, and summarised as the central retinal artery and vein equivalents (CRAE and CRVE) using a semi-automated computer-assisted method. Results: The 124 patients studied had rheumatoid arthritis (n = 76, 61%), systemic lupus erythematosus (n = 17, 14%), psoriatic arthritis (n = 11, 9%) or another rheumatological disease (n = 20, 16%). Retinal venular calibre was increased in patients with autoimmune rheumatic disease (+11.6 µm, 95% confidence interval [CI] 3.8 - 19.3, p = 0.01) compared with other hospital patients, after adjusting for baseline differences. This increase was also present in the subgroup with rheumatoid arthritis (p = 0.01). Patients with a rheumatic disease and elevated C-reactive protein (CRP) levels had wider retinal venules than those with a lower CRP (mean CRVE: 247.8 ± 28.0 versus 216.6 ± 25.3, p < 0.01), and wider than those of other hospital patients with increased CRP (mean CRVE: 247.8 ± 28.0 versus 216.4 ± 25.9, p < 0.01). Conclusions: Individuals with an autoimmune rheumatic disease have wider retinal venules than other hospital patients. This increase in calibre may be due to the underlying inflammatory activity. abstract_id: PUBMED:28926162 Retinal vessel morphology in rheumatoid arthritis: Association with systemic inflammation, subclinical atherosclerosis, and cardiovascular risk. Objective: Quantification of retinal vessel morphology has emerged as a marker of cardiovascular health. We examined retinal microvascular diameters in RA, particularly in regard to systemic inflammation, subclinical atherosclerosis, and cardiovascular risk. Methods: Retinal images from RA patients and controls were processed using computerized software to obtain CRAE, CRVE, and AVR. Subclinical atherosclerosis was assessed with cIMT, and the 10-year risk of general cardiovascular disease was calculated. Results: Both CRAE (78.8 ± 8.9 vs 90.2 ± 9.9 μm, P < .001) and AVR (0.69 ± 0.09 vs 0.81 ± 0.09, P < .001) were decreased in RA patients (n = 87) compared to controls (n = 46), whereas CRVE did not differ. Among RA patients, CRAE and AVR were inversely associated with both cIMT and CRP, whereas CRVE positively correlated with CRP (P < .05 for all). CRAE additionally correlated with cardiovascular risk score (r = -.396, P = .001). In the multivariate analysis, cardiovascular risk was associated with CRAE and age with CRVE, while CRP independently predicted AVR. Conclusions: Our study shows altered retinal microvascular morphology in RA patients. Inflammation appears to be the biological link for the observed association between retinal microvascular abnormalities and subclinical atherosclerosis. Retinal arteriolar narrowing might play its own role in cardiovascular risk prediction in RA.
abstract_id: PUBMED:15288364 Rheumatoid arthritis increases the risk of coronary heart disease via vascular endothelial injuries. Patients with rheumatoid arthritis (RA) have an increased prevalence of coronary heart disease and a high cardiovascular mortality rate. The causes of increased coronary heart disease in RA patients are poorly understood. Conventional cardiovascular risk factors, such as inactivity, overweight, or dyslipidemia, may play a role, but they do not seem to be wholly responsible for the increased cardiovascular risk. RA is associated with a high incidence of inflammation and vascular endothelial injuries. Endothelial dysfunction is one of the key steps in the pathogenesis of atherosclerosis in non-RA patients. Therefore, we hypothesized that inflammation-induced vascular endothelial injuries may be responsible for the increased risk of coronary heart disease and high rates of cardiovascular mortality in patients with RA. abstract_id: PUBMED:17278937 Cardiovascular risk factors, fitness and physical activity in rheumatic diseases. Purpose Of Review: There is increased recognition of an excess risk of cardiovascular disease in patients with rheumatic disorders. Physical inactivity is a frequent complication of arthritis, and also common in the general population. In this review, we highlight recent findings on risk factors for cardiovascular disease in patients with rheumatic diseases, and explore the role of physical activity for the prevention of cardiovascular disease. Recent Findings: Inflammatory mechanisms are clearly involved in cardiovascular disease in patients with systemic lupus erythematosus and rheumatoid arthritis. In rheumatoid arthritis, disability is also a major predictor of cardiovascular disease. A sedentary lifestyle increases the risk of cardiovascular disease in the general population, and high physical activity prevents cardiovascular disease mortality and morbidity. Successful treatment of rheumatic disease with control of inflammation and improved functional capacity may also reduce the risk of cardiovascular disease. Summary: As part of the effort to prevent vascular comorbidity, regular exercise should be encouraged in patients with rheumatic diseases, and structured interventions to reduce adverse lifestyle factors should be scientifically evaluated. abstract_id: PUBMED:19524171 Vascular disease in rheumatoid arthritis: from subclinical lesions to cardiovascular risk. Rheumatoid arthritis (RA) is one of the most prevalent and complex inflammatory diseases, affecting primarily the joints but also associated with several extra-articular features. The vascular disease in RA encompasses a large spectrum of lesions, from rheumatoid vasculitis to atherosclerotic lesions. In recent years the importance of vascular disease related to atherosclerosis, in terms of cardiovascular morbidity and global mortality, has become evident in RA. The inflammatory hypothesis of atherosclerosis in RA implies that mediators originating from the inflamed synovial tissue or from the liver may have systemic vascular consequences, leading to endothelial dysfunction and structural abnormalities of the vessels. Hence, the global management of patients with RA must include the management of cardiovascular risk in parallel with the management of joint disease.
abstract_id: PUBMED:30761837 Abnormal retinal microvasculature found in active rheumatoid arthritis: a different perspective of microvascular health Background/aim: We aimed to assess the association between retinal vascular caliber (RVC) scores and disease activity in rheumatoid arthritis (RA) patients. Materials And Methods: Forty-seven RA patients, 32 systemic lupus erythematosus (SLE) patients, and 45 healthy people were enrolled. RA and SLE patients were subdivided into groups according to C-reactive protein (CRP) levels. RA patients were also grouped according to the Disease Activity Score-28 (DAS-28). Fundus photography was performed for all patients. RVC was summarized as the central retinal artery and vein equivalents (CRAE and CRVE). Results: Mean CRVE for RA patients was 213.3 ± 17.8 μm compared with 209.2 ± 14.1 μm for SLE and 217.5 ± 26.2 μm for the control group (P = 0.17). RVC scores did not differ between the CRP-high and CRP-low groups. As RA disease activity increased, the widening of CRVE became more prominent and statistically significant. When the DAS-28 > 5.1 (CRVE, 220.4 (211.8–246.5) μm) group and the DAS-28 ≤ 3.2 (CRVE, 214.4 (172.4–242.3) μm) group were compared, statistical significance was more pronounced (P = 0.03) than when comparing the DAS-28 > 3.2 and DAS-28 ≤ 3.2 groups (P = 0.05). Conclusions: CRVE, which reflects systemic inflammation and possibly increased cardiovascular risk, was significantly increased in active RA patients. The association between retinal venular widening and disease activity, regardless of CRP, may be a sign that RA-related inflammation can have systemic vascular effects even with normal levels of CRP. abstract_id: PUBMED:31802210 The effects of exercise on cardiovascular disease risk factors and cardiovascular physiology in rheumatoid arthritis. Cardiovascular disease (CVD) morbidity and mortality are highly prevalent in patients with rheumatoid arthritis (RA), with debilitating effects for the individual as well as significant healthcare impact. Current evidence demonstrates that engaging in aerobic and resistance exercise (i.e. structured physical activity) can significantly improve patient-reported and clinical index-assessed outcomes in RA. In addition, engagement in exercise programmes reduces, in a dose-dependent manner, the risk of developing CVD and improves CVD symptoms and outcomes. The present narrative review uses evidence from systematic reviews and meta-analyses as well as controlled trials to synthesize the current state of the art on the potential effects of aerobic and resistance exercise on CVD risk factors as well as on cardiac and vascular function and structure in people with RA. Where there is a lack of evidence in RA to explain potential mechanisms, relevant studies from the general population are also discussed and linked to RA. abstract_id: PUBMED:25152090 The relationship between cardiovascular disease risk prediction scores and vascular function and morphology in rheumatoid arthritis. Objectives: Patients with rheumatoid arthritis (RA) are at an increased risk for cardiovascular disease (CVD) resulting from impairments in vascular function and morphology. CVD risk prediction scores can identify patients at high risk of CVD, but little is known about whether they relate to assessments of vascular function and morphology, which provide an early indication of subclinical atherosclerosis.
The objective of the present study was to examine the relationship of several CVD risk prediction scores with assessments of vascular function and morphology in patients with RA. Methods: The Framingham risk score, the Systematic Coronary Risk Evaluation for total cholesterol and for the ratio of total cholesterol to high-density lipoprotein, the Reynolds Risk Score, and QRISK2 were calculated in 201 RA patients (155 females, median (25th to 75th percentile) age: 61 (53-67)) who were examined at baseline (2006). The European League Against Rheumatism (EULAR) multiplication factor was also applied to the algorithms. At a 6-year follow-up (2012) visit, the patients underwent assessments of microvascular and macrovascular endothelium-dependent and endothelium-independent function, along with assessment of carotid atherosclerosis. Results: All five CVD risk prediction scores measured at baseline were significantly correlated with vascular function and morphology at follow-up. Application of the EULAR multiplication factor did not change any of the associations. Conclusions: Five commonly used CVD risk prediction scores are associated with assessments of vascular function and morphology over a 6-year follow-up period, suggesting that these CVD risk prediction scores may also reflect subclinical atherosclerotic changes. Answer: The retinal vascular calibre is indeed altered in patients with rheumatoid arthritis (RA), and this alteration may serve as a biomarker of disease activity and cardiovascular risk. Studies have shown that patients with RA have significantly wider retinal venular calibre compared to controls, which reflects systemic inflammation and possibly increased cardiovascular risk (PUBMED:21172929). This widening of the retinal venular calibre is associated with higher levels of disease activity in RA patients (PUBMED:21172929; PUBMED:30761837). Furthermore, suppression of inflammation in RA is associated with a reduction in retinal venular calibre, suggesting that therapies targeting inflammation could improve vascular health in RA (PUBMED:26338663). This is supported by the finding that individuals with autoimmune rheumatic diseases, including RA, have wider retinal venules than other hospital patients, and this increase in calibre may be due to underlying inflammatory activity (PUBMED:23654356). Retinal vessel morphology has also been linked to systemic inflammation, subclinical atherosclerosis, and cardiovascular risk in RA patients. Inflammation appears to be the biological link for the observed association between retinal microvascular abnormalities and subclinical atherosclerosis (PUBMED:28926162). Additionally, RA increases the risk of coronary heart disease via vascular endothelial injuries, and inflammation-induced vascular endothelial injuries may be responsible for the increased risk of coronary heart disease and high rates of cardiovascular mortality in patients with RA (PUBMED:15288364). Moreover, cardiovascular risk factors, fitness, and physical activity play a role in the prevention of cardiovascular disease in patients with rheumatic diseases. Regular exercise should be encouraged in patients with rheumatic diseases as part of the effort to prevent vascular comorbidity (PUBMED:17278937). Exercise has been shown to improve cardiovascular disease risk factors and cardiovascular physiology in RA, with aerobic and resistance exercise having significant benefits (PUBMED:31802210).
In summary, retinal vascular calibre alterations in RA patients are indicative of systemic inflammation and may serve as a biomarker for both disease activity and cardiovascular risk. These findings underscore the importance of monitoring and managing inflammation and cardiovascular health in RA patients.
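Several of the abstracts above summarise individual vessel widths as the central retinal artery and vein equivalents (CRAE and CRVE) using semi-automated software. None of the abstracts states which summary formula that software applied, so the sketch below should be read as an illustration of the widely used revised "big six" procedure of Knudtson and colleagues, with invented vessel widths and with the standard branching coefficients (0.88 for arterioles, 0.95 for venules) taken as assumptions:

```python
def summarize_calibre(widths_um, k):
    """Iteratively pair the largest remaining width with the smallest:
    combined = k * sqrt(w_max**2 + w_min**2); with an odd count, the median
    width carries over to the next round. Returns the single summary value."""
    w = sorted(widths_um, reverse=True)
    while len(w) > 1:
        nxt = [k * (w[i] ** 2 + w[-(i + 1)] ** 2) ** 0.5 for i in range(len(w) // 2)]
        if len(w) % 2:  # odd number of widths: middle one carries over
            nxt.append(w[len(w) // 2])
        w = sorted(nxt, reverse=True)
    return w[0]

# Invented widths (micrometres) for the six largest arterioles and venules.
arterioles = [110, 105, 98, 95, 90, 86]
venules = [160, 152, 148, 140, 133, 128]
crae = summarize_calibre(arterioles, k=0.88)  # assumed arteriolar coefficient
crve = summarize_calibre(venules, k=0.95)     # assumed venular coefficient
print(f"CRAE = {crae:.1f} um, CRVE = {crve:.1f} um, AVR = {crae / crve:.2f}")
```

The arteriole-to-venule ratio (AVR) reported in PUBMED:28926162 is simply CRAE divided by CRVE, which is why arteriolar narrowing and venular widening both push AVR down.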
Instruction: Does size matter? Abstracts: abstract_id: PUBMED:25916437 Size distribution characteristics of particulate matter in the top areas of coke oven Objective: To systematically evaluate the environmental exposure information of coke oven workers, we investigated the concentration and size distribution characteristics of the particulate matter (PM) in the top working area of the coke oven. Methods: An aerodynamic particle sizer spectrometer was employed to collect the concentration and size distribution information of PM at a top working area. The PM was divided into PM ≤ 1.0 µm, 1.0 µm < PM ≤ 2.5 µm, 2.5 µm < PM ≤ 5.0 µm, 5.0 µm < PM ≤ 10.0 µm and PM > 10.0 µm based on aerodynamic diameter. The number concentration, surface area concentration, and mass concentration were analyzed between the different groups, and correlation analyses were conducted on these parameters among groups. Results: We found that the number and surface area concentrations of the top-area particulate were negatively correlated with particle size, but the mass concentration curve was bimodal, with peaks at PM = 1.0 µm and PM = 5.0 µm. The average number concentration of total particulate matter in the top working area was 661.27 number/cm³, the surface area concentration was 523.92 µm²/cm³, and the mass concentration was 0.12 mg/m³. Most of the particulate matter was no larger than 1 µm (PM1.0), and its number concentration and surface area concentration accounted for 96.85% and 67.01% of the total particles, respectively. In the correlation analysis, different particle sizes correlated differently with the total particulate matter, and the characteristic parameters of PM2.5 could not fully reflect the total information of the particles. Conclusion: The main particulate matter pollutant in the top working area of the coke oven is PM1.0, and together with PM5.0 it accounts for a large proportion of the mass concentration of PM. This suggests that PM1.0 and PM5.0 should be considered for occupational health surveillance of particulate matter in the top area of the coke oven. abstract_id: PUBMED:29287244 Investigating the differential contributions of sex and brain size to gray matter asymmetry. Scientific reports of sex differences in brain asymmetry - the difference between the two hemispheres - are rather inconsistent. Some studies report no sex differences whatsoever, others reveal striking sex effects, with large discrepancies across studies in the magnitude, direction, and location of the observed effects. One reason for the lack of consistency in findings may be the confounding effects of brain size, as male brains are usually larger than female brains. Thus, the goal of this study was to investigate the differential contributions of sex and brain size to asymmetry with a particular focus on gray matter. For this purpose, we applied a well-validated workflow for voxel-wise gray matter asymmetry analyses in a sample of 96 participants (48 males/48 females), in which a subsample of brains (24 males/24 females) were matched for size. By comparing outcomes based on three different contrasts - all males versus all females; all large brains versus all small brains; matched males versus matched females - we were able to disentangle the contributing effects of sex and brain size, to reveal true (size-independent) sex differences in gray matter asymmetry: Males show a significantly stronger rightward asymmetry than females within the cerebellum, specifically in lobules VII, VIII, and IX.
This finding agrees closely with prior research suggesting sex differences in sensorimotor, cognitive and emotional function, which are all moderated by the respective cerebellar sections. No other significant sex effects on gray matter were detected across the remainder of the brain. abstract_id: PUBMED:30195677 Size distribution and source of heavy metals in particulate matter in the lead and zinc smelting affected area. In order to understand the size distribution and the main kinds of heavy metals in particulate matter in the lead and zinc smelting affected area, particulate matter (PM) and source samples were collected in Zhuzhou, Hunan Province from December 2011 to January 2012, and the results were discussed and interpreted. Atmospheric particles of different sizes were collected with a cascade impactor. The concentrations of heavy metals in atmospheric particles of different sizes, collected from the air and from factories, were measured using inductively coupled plasma mass spectrometry (ICP-MS). The results indicated that the average concentrations of PM, chromium (Cr), arsenic (As), cadmium (Cd) and lead (Pb) in PM were 177.3 ± 33.2 μg/m³, 37.3 ± 8.8 ng/m³, 17.3 ± 8.1 ng/m³, 4.8 ± 3.1 ng/m³ and 141.6 ± 49.1 ng/m³, respectively. The size distribution of PM was bimodal; the maximum of the PM size distribution was at 1.1-2.1 μm, followed by 9-10 μm. The size distribution of As, Cd and Pb in PM was similar to the distribution of the PM mass, with peaks observed in the 1.1-2.1 μm and 9-10 μm ranges, while for Cr, only a single mode at 4.7-5.8 μm was observed. PM (64.7%), As (72.5%), Cd (72.2%) and Pb (75.8%) were associated with the fine mode below 2.1 μm, while Cr (46.6%) was associated with the coarse mode. The size distribution characteristics, enrichment factors, correlation coefficient values, source information and the analysis of source samples showed that As, Cd and Pb were the typical heavy metals in PM in lead and zinc smelting affected areas, originating mainly from lead and zinc smelting sources. abstract_id: PUBMED:30384061 Effect of particle size on adsorption of norfloxacin and tetracycline onto suspended particulate matter in lake. Aquatic systems are important sinks of antibiotics; however, their final destination has not been completely elucidated. Therefore, we investigated the adsorption behaviors of suspended particulate matter (SPM) in lakes to support the analysis of the migration and transformation of antibiotics in lacustrine environments. SPM was collected from Meiliang Bay (ML) and Gonghu Bay (GH) in Lake Taihu, China, and was sieved into four particle sizes of >300, 150-300, 63-150, and <63 μm for subsequent antibiotic adsorption experiments. All particles exhibited rapid and substantial adsorption of tetracycline and norfloxacin. Most size fractions fit a Langmuir model, indicative of monomolecular adsorption, except the <63-μm fraction, which fit a Freundlich model. Particle size had a substantial influence on antibiotic adsorption; the 63-150-μm fraction had the greatest adsorption capacity, while the >300-μm fraction had the lowest capacity. The influence of particle size on adsorption was mainly related to SPM physicochemical properties, such as cation exchange capacity, surface area, and organic matter content, rather than types of functional groups. Considering the mass ratios, the <63-μm fraction had the greatest contribution to adsorption.
Antibiotics adsorbed onto the SPM from ML and GH exhibited different behaviors. The ML SPM settled more readily into sediment, and larger, denser particles were more resistant to resuspension. Conversely, the GH SPM was more likely to be found in the water column, and larger, less-dense particles remained in the water column. These results help improve our understanding of the interactions between SPM and antibiotics in aquatic systems. abstract_id: PUBMED:18318338 Synopsis of the temporal variation of particulate matter composition and size. A synopsis of the detailed temporal variation of the size and number distribution of particulate matter (PM) and its chemical composition is presented, on the basis of measurements performed by several regional research consortia funded by the U.S. Environmental Protection Agency (EPA) PM Supersite Program. This program deployed and evaluated a variety of research and emerging commercial measurement technologies to investigate the physical and chemical properties of atmospheric aerosols at a level of detail never before achieved. Most notably, these studies demonstrated that systematic size-segregated measurements of mass, number, and associated chemical composition of the fine (PM2.5) and ultrafine (PM0.1) fractions of ambient aerosol, with a time resolution down to minutes and less, are achievable. A wealth of new information on the temporal variation of aerosol has been added to the existing knowledge pool that can be mined to resolve outstanding research and policy-related questions. This paper explores the nature of temporal variations (on time scales from several minutes to hours) in the chemical and physical properties of PM and its implications for the identification of PM formation processes, source attribution (primary versus secondary), the contribution of local versus transported PM, and the development of effective PM control strategies. The PM Supersite results summarized indicate that location, time of day, and season significantly influence not only the mass and chemical composition but also the size-resolved chemical/elemental composition of PM. Ambient measurements also show that ultrafine particles have different compositions and make up only a small portion of the PM mass concentration compared with inhalable coarse and fine particles, but their number concentration is significantly larger than that of their coarse or fine counterparts. PM size classes show differences in the relative amounts of nitrates, sulfates, crustal materials, and most especially carbon, as well as variations in seasonal and diurnal patterns. abstract_id: PUBMED:34989887 Effects of aerosol particle size on the measurement of airborne PM2.5 with a low-cost particulate matter sensor (LCPMS) in a laboratory chamber. Previous validation studies found a good linear correlation between low-cost particulate matter sensors (LCPMS) and other research-grade particulate matter (PM) monitors. This study aimed to determine if different particle size bins of PM would affect the linear relationship and agreement between the Dylos DC1700 (LCPMS) particle count measurements (converted to PM2.5 mass concentrations) and the Grimm 11R (research-grade instrument) mass concentration measurements. Three size groups of PM2.5 (mass median aerodynamic diameters (MMAD): < 1 µm, 1-2 µm, and > 2 µm) were generated inside a laboratory chamber, controlled for temperature and relative humidity, by dispersing sodium chloride crystals through a nebulizer.
A linear regression comparing 1-min average PM2.5 particle counts from the Dylos DC1700 (Dylos) to the Grimm 11R (Grimm) mass concentrations was estimated for each particle size group. The slope of the linear regression was found to increase as MMAD increased (< 1 µm, 0.75 (R² = 0.95); 1-2 µm, 0.90 (R² = 0.93); and > 2 µm, 1.03 (R² = 0.94)). The linear slopes were used to convert Dylos counts to mass concentration, and the agreement between converted Dylos mass and Grimm mass was estimated. The absolute relative error between converted Dylos mass and Grimm mass was smaller in the < 1 µm group (16%) and the 1-2 µm group (16%) compared to the > 2 µm group (32%). Therefore, the bias between converted Dylos mass and Grimm mass varied by size group. Future studies examining particle size bins over a wider range of coarse particles (> 2.5 µm) would provide useful information for accurately converting LCPMS counts to mass concentration. abstract_id: PUBMED:35669811 The effect of size distribution of ambient air particulate matter on oxidative potential by the acellular dithiothreitol method; a systematic review. Today, air pollution caused by particulate matter (PM) is a global issue, especially in densely populated and high-traffic cities. The formation of reactive oxygen species (ROS) is considered in various toxicological studies to be one of the important effects of airborne particles that can lead to adverse effects on human health. In this study, to answer the question of whether particle size affects oxidative potential (OP), we searched the main databases, including PubMed, Scopus, Embase, and Web of Science, and defined a search strategy based on MeSH terms for the above-mentioned search engines. All articles published until 2021 were searched. An ANOVA was run using R software to show the correlation between the size distributions of particulate matter and oxidative potential (based on mass and volumetric units) in ambient air. As expected, the regression results showed that the relationship between particle size and OP values for the studies based on the mass logarithm differed significantly across the size distribution categories, which was related to the difference between the <2.5 and <1 categories. However, the ANOVA did not show a significant difference in the volumetric OP logarithm across the size distribution categories. In this study, it was found that sizes higher than 2.5 μm did not have much effect on human health, and it is recommended that future research focus on PM2.5. abstract_id: PUBMED:28510903 Characterization of soil organic matter in perhumid natural cypress forest: comparison of humification in different particle-size fractions. Background: The Chamaecyparis forest is a valuable natural resource in eastern Asia. The characteristics of soil humic substances and the influence of environmental factors in natural Chamaecyparis forests in subtropical mountain regions are poorly understood. The study site, a perhumid Chamaecyparis forest, is in the Yuanyang Lake Preserved Area in northcentral Taiwan. We collected samples from organic horizons (Oi, Oe and Oa) and from the surface horizon (O/A horizon) at the summit, footslope and lakeshore to characterize the composition of the soil organic matter.
Samples of organic horizons were dried and ground, and those of the O/A horizon were passed through wet sieving into different particle-size fractions before analysis. The C chemical structure in the samples was determined with CP/MAS 13C NMR spectra. Results: The ratios of alkyl-C/O-alkyl-C and aromaticity increased with decomposition of litter from the Oi, Oe, to Oa horizon. The ratio of alkyl-C/O-alkyl-C also increased from coarse (> 250 μm) to very fine (< 2 μm) particle fractions, which indicates increased humification of soil organic matter (SOM) in the fine-sized fractions. However, aromaticity tended to decrease with decreasing particle size, so it may not be useful in evaluating SOM humification of different particle-size fractions. Conclusions: The humification degree of the samples from O horizons and different particle-size fractions of the O/A horizon showed no gradient change with change in topography. This prevalent slow decomposition of organic matter in these perhumid climate conditions may narrow the difference in humification from the summit to the lakeshore. abstract_id: PUBMED:31854643 Characteristics of Component Particle Size Distributions of Particulate Matter Emitted from a Waste Incineration Plant There are few analyses of the components of particulate matter emitted from waste incineration plants, and in past studies, analyses of the particle size distribution characteristics of the components were mainly targeted at particles with larger sizes. An electrical low pressure impactor (ELPI) was used in this study to collect the particulate matter emitted from a waste incineration plant, and the elements and carbonaceous components of these samples were analyzed. The particle size characteristics of organic carbon (OC), elemental carbon (EC), and heavy metal elements in 14 particle size segments were analyzed, and composition profiles of elements and carbonaceous components of PM1, PM2.5, and PM10 from the waste incineration plant were established to provide a reference for refined source apportionment research. The results showed that the main components of the particulate matter from the waste incineration plant included Al, Si, S, Ca, Cr, Fe, OC, and EC, among others. OC and Ca were the dominant components, and the mass fractions of these components in the PM2.5 profile were 10.15% and 12.37%, respectively. The contents of heavy metals were ranked as Cr > Pb > Zn > Mn > Cu > Cd > Ni, and the mass fractions of Cr and Pb in PM2.5 amounted to 1.83% and 0.74%, respectively. OC in the ranges of 2.39-3.99 and 6.68-9.91 μm accounted for 15.02% and 20.45% of the total OC content, respectively, and the content of OC in fine particles was higher than that in coarse particles. The content of EC in fine particles was much higher than that in coarse particles, and it accounted for 14.8% in the 0.382-0.613 μm particle size. Heavy metal elements such as Cr, Mn, Ni, Cu, Zn, Cd, and Pb were mainly concentrated in the fine particles. abstract_id: PUBMED:28391118 Molecular size-dependent abundance and composition of dissolved organic matter in river, lake and sea waters. Dissolved organic matter (DOM) is ubiquitous in natural waters. The ecological role and environmental fate of DOM are highly related to its chemical composition and size distribution.
To evaluate size-dependent DOM quantity and quality, water samples were collected from river, lake, and coastal marine environments and size-fractionated through a series of micro- and ultra-filtrations with membranes of different pore sizes/cutoffs, including 0.7, 0.4, and 0.2 μm and 100, 10, 3, and 1 kDa. The abundance of dissolved organic carbon, total carbohydrates, and chromophoric and fluorescent components in the filtrates decreased consistently with decreasing filter/membrane cutoffs, but with a rapid decline when the filter cutoff reached 3 kDa, showing an evident size dependence of DOM abundance and composition. About 70% of carbohydrates and 90% of humic- and protein-like components were measured in the <3 kDa fraction in freshwater samples, but these percentages were higher in the seawater sample. Spectroscopic properties of DOM, such as specific ultraviolet absorbance, spectral slope, and biological and humification indices, also varied significantly with membrane cutoffs. In addition, different ultrafiltration membranes with the same manufacturer-rated cutoff also gave rise to different DOM retention efficiencies and thus different colloidal abundances and size spectra. Thus, the size-dependent DOM properties were related to both the sample types and the membranes used. Our results here provide not only baseline data for filter pore-size selection when exploring DOM ecological and environmental roles, but also new insights into better understanding the physical definition of DOM and its size continuum in quantity and quality in aquatic environments. Answer: Yes, size does matter when it comes to particulate matter (PM) and its environmental and health impacts. The size distribution characteristics of PM are crucial in determining its concentration, behavior, and potential effects on human health and the environment. For instance, in the top working area of a coke oven, the main particulate matter pollutants were found to be PM1.0, which, along with PM5.0, accounted for a large proportion of the mass concentration of PM. This suggests that PM1.0 and PM5.0 should be considered for occupational health surveillance (PUBMED:25916437). In the context of brain research, sex differences in gray matter asymmetry were found to be influenced by brain size, with males showing a significantly stronger rightward asymmetry than females within the cerebellum when brain size was accounted for (PUBMED:29287244). The size distribution of heavy metals in particulate matter was also found to be significant, with certain metals like arsenic, cadmium, and lead showing a bimodal distribution with peaks at 1.1-2.1 µm and 9-10 µm, indicating their association with fine and coarse modes, respectively (PUBMED:30195677). In aquatic systems, the particle size of suspended particulate matter (SPM) significantly influenced the adsorption capacity for antibiotics, with different size fractions showing varying adsorption behaviors (PUBMED:30384061). Temporal variations in the size and number distribution of PM and its chemical composition were found to be significant, with location, time of day, and season influencing not only the mass and chemical composition but also the size-resolved chemical/elemental composition of PM (PUBMED:18318338). The performance of low-cost particulate matter sensors (LCPMS) was found to be affected by the size of PM2.5 particles, with different size groups affecting the linear relationship and agreement between LCPMS and research-grade instruments (PUBMED:34989887).
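To make the preceding point concrete, here is a minimal sketch of the size-dependent count-to-mass calibration described for the Dylos/Grimm comparison (PUBMED:34989887). Only the three reported slopes (0.75, 0.90, 1.03) come from the abstract; the fitting helper, the data, and all names are invented for illustration:

```python
import numpy as np

def fit_slope(counts, reference_mass):
    """Least-squares slope through the origin: mass ~ slope * counts."""
    counts = np.asarray(counts, dtype=float)
    mass = np.asarray(reference_mass, dtype=float)
    return float(counts @ mass / (counts @ counts))

# Slopes per MMAD group as reported in PUBMED:34989887; in practice each would
# be obtained with fit_slope() on paired 1-min sensor/reference readings.
slope_by_group = {"<1um": 0.75, "1-2um": 0.90, ">2um": 1.03}

def counts_to_mass(counts, group):
    """Convert low-cost sensor counts to an estimated mass concentration."""
    return slope_by_group[group] * counts

print(counts_to_mass(40.0, "1-2um"))  # 36.0, in whatever units the fit used
```

The size dependence of the slope is exactly why a single calibration constant misestimates mass when the ambient size distribution differs from the calibration aerosol.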
The oxidative potential of ambient air particulate matter was shown to be affected by particle size, with sizes higher than 2.5 µm not having much effect on human health, suggesting a focus on PM2.5 for future research (PUBMED:35669811). In soil organic matter studies, the humification degree was found to increase from coarse to very fine particle fractions, indicating increased humification of soil organic matter in the fine-sized fractions. However, aromaticity, which is another indicator of humification, tended to decrease with decreasing particle size, suggesting that it may not be a reliable indicator of soil organic matter humification across different particle-size fractions (PUBMED:28510903). In a study on particulate matter emitted from a waste incineration plant, it was found that organic carbon (OC) and elemental carbon (EC), as well as heavy metal elements, were mainly concentrated in fine particles. This indicates that the particle size distribution of these components is an important factor in understanding their environmental impact and for conducting refined source apportionment research (PUBMED:31854643). Finally, the molecular size of dissolved organic matter (DOM) in river, lake, and sea waters was found to be a determinant of its abundance and composition. The study showed that the abundance of various DOM components, such as carbohydrates and humic substances, decreased with decreasing filter/membrane cutoffs, with a significant decline observed at the 3 kDa cutoff. This size-dependent property of DOM is important for understanding its ecological and environmental roles (PUBMED:28391118). In summary, the size of particulate matter and other environmental components like DOM is a critical factor that influences their behavior, distribution, and potential effects on health and the environment.
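The oxidative-potential review above (PUBMED:35669811) ran its size-category comparison as an ANOVA in R. A minimal Python analogue with scipy is shown below; the group values are invented, and only the grouping into <1, <2.5, and >2.5 µm categories mirrors the review:

```python
from scipy.stats import f_oneway

# Hypothetical log-transformed, mass-normalised OP values per size category.
op_lt1 = [0.42, 0.55, 0.38, 0.61]    # PM < 1 um
op_lt2_5 = [0.20, 0.31, 0.18, 0.27]  # PM < 2.5 um
op_gt2_5 = [0.05, 0.12, 0.09, 0.11]  # PM > 2.5 um

stat, p = f_oneway(op_lt1, op_lt2_5, op_gt2_5)
print(f"F = {stat:.2f}, p = {p:.4f}")  # small p: mean OP differs across sizes
```

A significant F-test, as in the mass-based analysis of the review, says only that at least one category differs; locating the difference (there, between the <2.5 and <1 categories) requires a post-hoc pairwise comparison.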
Instruction: Do oral probiotics affect growth and neurodevelopmental outcomes in very low-birth-weight preterm infants? Abstracts: abstract_id: PUBMED:27045204 Impact of oral probiotics on neurodevelopmental outcomes in preterm infants. Objective: The aim of the study was to evaluate the neurodevelopment outcomes of very low birth weight (VLBW) preterm infants supplemented with oral probiotics for the prevention of necrotizing enterocolitis (NEC). Methods: A prospective follow-up study was performed in a cohort of VLBW preterm infants enrolled in a single center randomized controlled clinical trial to evaluate the efficacy of oral probiotics for the prevention of NEC. Cognitive and neuromotor developments were assessed by using the Bayley scales of infant development II. Sensory and neurological performance was evaluated by standard techniques. The primary outcome was neurodevelopmental impairment at 18-24 months' corrected age. Results: A total of 400 infants completed the trial protocol. Of the 370 infants eligible for follow-up, 249 infants (124 in the probiotics group and 125 in the control group) were evaluated. There was no significant difference in any of the neurodevelopmental and sensory outcomes between the two groups. Conclusion: Oral probiotic given to VLBW infants to reduce the incidense and severity of NEC started with the first feed did not affect neuromotor, neurosensory and cognitive outcomes at 18-24 months' corrected age. abstract_id: PUBMED:22566113 Do oral probiotics affect growth and neurodevelopmental outcomes in very low-birth-weight preterm infants? Objective: To evaluate the growth and neurodevelopment outcomes of very low-birth-weight (VLBW) preterm infants supplemented with oral probiotics for the prevention of necrotizing enterocolitis (NEC). Study Design: This prospective follow-up study was conducted in a cohort of VLBW preterm infants enrolled in a randomized controlled clinical trial to evaluate the efficacy of oral probiotics for the prevention of NEC. Growth outcomes included weight, length, and head circumference. Cognitive and neuromotor development were assessed by using the Bayley Scales of Infant Development II. Sensory and neurological performance was evaluated by standard techniques. The primary outcome was neurodevelopmental impairment at 18 to 22 months' corrected age. Results: A total of 221 infants completed the trial protocol. Of the 208 infants eligible for follow-up, 174 infants (86 in the probiotics group and 88 in the control group) were evaluated. There was no significant difference in growth and neurodevelopmental outcomes between the two groups. Conclusion: Oral probiotic administered to VLBW infants to reduce the incidence and severity of NEC started with the first feed did not affect growth, neuromotor, neurosensory, and cognitive outcomes at 18 to 22 months' corrected age. abstract_id: PUBMED:19914635 Lack of effects of oral probiotics on growth and neurodevelopmental outcomes in preterm very low birth weight infants. Objective: To evaluate growth and neurodevelopmental outcomes in preterm very low birth weight (PVLBW) infants treated with oral probiotics for the prevention of necrotizing enterocolitis (NEC). Study Design: A prospective follow-up study was performed in a cohort of PVLBW infants enrolled in a single center with a masked randomized control trial to evaluate the efficacy of oral probiotics in preventing NEC. Growth measures included weight, length, and head circumference. 
Neurologic and sensory performance was evaluated with standard techniques. Psychometric parameters were measured using the Bayley Scales of Infant Development II (BSID-II). The studies were performed at 3 years corrected age. The primary outcome was death or neurodevelopmental impairment. Results: Of the 367 subjects enrolled in the trial, 301 (89.9%) were evaluated (153 in the probiotics group and 148 in the control group). There were no significant differences in growth or in any of the neurodevelopmental and sensory outcomes between the 2 groups. Conclusions: Oral probiotics given to PVLBW infants at 1 week after birth to reduce the incidence of NEC did not affect growth and neurodevelopmental and sensory outcomes at 3 years corrected age. abstract_id: PUBMED:24938866 Early optimal nutrition improves neurodevelopmental outcomes for very preterm infants. Recent advances in perinatal and neonatal intensive care have resulted in significant improvements in the survival of preterm extremely low-birthweight (PELBW) infants; however, extrauterine growth restriction (EUGR) and undernutrition occur frequently during hospitalization and are associated with adverse outcomes, including bronchopulmonary dysplasia, sepsis, and neurodevelopmental impairment. Early optimal parenteral nutrition with adequate amino acids and lipids, especially long-chain polyunsaturated fatty acids, has been shown to decrease the incidence of EUGR, bronchopulmonary dysplasia, necrotizing enterocolitis, sepsis, and retinopathy of prematurity in animal models and clinical trials. In PELBW infants, breast milk and probiotics have been shown to reduce the incidence of necrotizing enterocolitis, and lactoferrin has been demonstrated to prevent late-onset sepsis. Thus, early administration of optimal postnatal parenteral and enteral nutrients can help prevent neurodevelopmental impairment caused by EUGR, necrotizing enterocolitis, sepsis, bronchopulmonary dysplasia, and retinopathy of prematurity, and recent evidence indicates such treatment is feasible. abstract_id: PUBMED:34243041 Extra-uterine growth restriction in preterm infants: Neurodevelopmental outcomes according to different definitions. Aim: Extra-uterine Growth Restriction (EUGR) is common among preterm infants. Two types of EUGR definitions are currently available: cross-sectional definitions and longitudinal ones. In a cohort of very preterm infants, we aimed to evaluate which definition could better predict neurodevelopmental outcomes at 2 years of corrected age. We used Italian Neonatal Study Charts (INeS) growth charts and INTERGROWTH-21st (IG-21) standard charts and compared the results. Method: We retrospectively collected data from the clinical charts of 324 preterm newborns with a gestational age ≤30 weeks born from 2012 to 2017. We then compared forty-eight definitions (24 cross-sectional and 24 longitudinal) of EUGR in terms of neurodevelopmental outcomes at 2 years of corrected age. Results: We included in the study 254 preterm infants whose clinical information met the enrolment criteria. Nineteen out of 48 definitions of EUGR were significantly predictive both for Griffith's Development Quotient (GDQ) and Neurodevelopmental Impairment (NDI). Among these, longitudinal definitions appeared to have a higher negative predictive value for NDI than cross-sectional ones. Furthermore, infants with EUGR appeared to have a lower cognitive score than their peers without EUGR.
Interpretation: A loss of Zs > 1 SDS in weight and head circumference, calculated from when physiological weight loss is over and identified as soon as possible rather than at discharge, better predicts neurodevelopmental outcomes of preterm infants. abstract_id: PUBMED:36194256 Neurodevelopmental outcomes of very preterm infants who received cord milking at birth: a randomized controlled trial. Umbilical cord milking improves postnatal adaptation and short-term outcomes of very preterm infants compared to early cord clamping. Little is known about the impact of umbilical cord milking on long-term neurodevelopmental outcomes. The objective of this study was to compare the effects of intact umbilical cord milking (UCM) vs. early cord clamping (ECC) at birth on neurodevelopmental outcomes at 36 months' corrected age. Preterm infants < 31 weeks' gestation who were randomized at birth to receive three-time milking of their attached cord or ECC (< 10 s) were evaluated at 36 months' corrected age. Neurodevelopmental outcomes were assessed by blinded examiners using the Bayley Scales of Infant and Toddler Development (version III). Analysis was by intention to treat. Out of the 73 infants included in the original trial, 2 died and 65 (92%) infants were evaluated at 36 months' corrected age. Patient characteristics and short-term outcomes were similar in both study groups. There were no significant differences in the median cognitive, motor or language scores or in the rates of cerebral palsy, developmental impairment, deafness, or blindness between study groups. Conclusion: Neurodevelopmental outcomes at 36 months' corrected age of very preterm infants who received UCM were not shown to be significantly different from those who received ECC at birth. Trial Registration: ClinicalTrials.gov: NCT01487187 What is Known: • Compared to early cord clamping, umbilical cord milking improves postnatal adaptation and short-term outcomes of very preterm infants. • Little is known about the impact of umbilical cord milking on neurodevelopmental outcomes. What Is New: • Neurodevelopmental outcomes at 3 years of age were not significantly different in very preterm infants who received cord milking vs. those who received early cord clamping at birth. abstract_id: PUBMED:34979801 Growth patterns of preterm infants in Korea. With advancements in neonatal care and nutrition, the postnatal growth of preterm infants has improved; however, it remains an issue. Accurate assessments of growth using a standardized reference are needed to interpret the intrauterine and postnatal growth patterns of preterm infants. Growth in the earlier periods of life can contribute to later outcomes, and the refinement of definitions of postnatal growth failure is needed to optimize outcomes. Catch-up growth occurs mainly before discharge and until 24 months of age, and very low birth weight infants in Korea achieve catch-up from retarded growth later in life. Knowing an infant's perinatal history, reducing morbidity rates during admission, and performing regular monitoring after discharge are required. Preterm infants with a lower birth weight or who were small for gestational age are at increased risk of poor neurodevelopmental outcomes. Furthermore, poor postnatal growth is predictive of adverse neurodevelopmental outcomes. Careful monitoring and early intervention will contribute to better developmental outcomes and national public health improvements.
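The longitudinal EUGR definition favoured in PUBMED:34243041 above is, computationally, just a change in growth-chart z-score. The toy sketch below makes that explicit; the z-scores would come from a reference chart (INeS or IG-21), and the helper names and the cross-sectional cutoff of -2 SDS are illustrative assumptions, not the study's exact parameters:

```python
def longitudinal_eugr(z_baseline, z_followup, drop=1.0):
    """Flag EUGR when the z-score has fallen by more than `drop` SDS since a
    baseline taken after physiological weight loss has ended."""
    return (z_baseline - z_followup) > drop

def cross_sectional_eugr(z_single, cutoff=-2.0):
    """Classic single-time-point rule: below the cutoff, e.g. at discharge."""
    return z_single < cutoff

# An infant starting at -0.4 SDS and falling to -1.6 SDS is flagged by the
# longitudinal rule (lost 1.2 SDS) but missed by the cross-sectional one.
print(longitudinal_eugr(-0.4, -1.6))   # True
print(cross_sectional_eugr(-1.6))      # False
```

This contrast is exactly why the two definition families can classify the same infant differently, and why the abstract reports different predictive value for neurodevelopmental impairment.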
abstract_id: PUBMED:27865437 Early neurodevelopmental outcomes of extremely preterm infants. Infants born at extreme preterm gestation are at risk for both death and disability. Although rates of survival have improved for this population, and some evidence suggests a trend toward decreased neuromotor impairment over the past decades, a significant improvement in overall early neurodevelopmental outcome has not yet been realized. This review will examine the rates and types of neurodevelopmental impairment seen after extremely preterm birth, including neurosensory, motor, cognitive, and behavioral outcomes. We focus on early outcomes in the first 18-36 months of life, as the majority of large neonatal studies examining neurodevelopmental outcomes stop at this age. However, this early age is clearly just a first glimpse into lifetime outcomes; the neurodevelopmental effects of extreme prematurity may last through school age, adolescence, and beyond. Importantly, prematurity appears to be an independent risk factor for adverse development, but this population demonstrates considerable variability in the types and severity of impairments. Understanding both the nature and prevalence of neurodevelopmental impairment among extremely preterm infants is important because it can lead to targeted interventions that in turn may lead to improved outcomes. abstract_id: PUBMED:26608051 Does High Protein Intake During First Week of Life Improve Growth and Neurodevelopmental Outcome at 18 months Corrected Age in Extremely Preterm Infants? Objective: To examine whether high protein intake during the first week of life alters the growth and neurodevelopmental outcomes at 18 mo corrected age (CA) in preterm infants born < 29 wk. Methods: This was a retrospective cohort study of preterm infants (< 29 wk) before and after the introduction of a nutritional policy targeting higher protein intake during the first week of life. The authors compared the growth and neurodevelopmental outcomes at 18 mo CA between infants born before (epoch 1) and after (epoch 2) the introduction of the nutrition policy. Results: Of 171 eligible infants who completed follow-up at 18 mo CA, 87 (51%) were in the post-intervention group (epoch 2). The mean (± SD) gestational age (26.3 ± 1.49 wk vs. 26.2 ± 1.48 wk) and birth weight (947 ± 220 g vs. 924 ± 225 g) were similar between the two groups. At 18 mo CA, there were no significant differences in the growth and neurodevelopmental impairment rates between the two groups. Logistic regression analysis revealed that high protein intake (>3.5 g/kg/d) was not associated with improved neurodevelopmental outcome (OR 1.49, 95% CI 0.52-4.26). Conclusions: High protein intake during the first week of life was not associated with better growth or neurodevelopmental outcome at 18 mo CA in preterm infants. abstract_id: PUBMED:38393924 Oral feeding trajectories and neurodevelopmental outcomes at 12- and 24-month follow-up for preterm infants. Background: Few studies characterize feeding performance in the NICU when predicting neurodevelopmental outcomes. Our objective was to investigate the relationship between time to full oral feeds (FULL-PO) and neurodevelopmental and feeding outcomes in the first 2 years in preterm infants admitted to the NICU. Methods: This retrospective study included infants born between 01/01/2014 and 07/31/2017, with gestational age < 32 weeks and/or birth weight < 1500 g.
We examined feeding difficulties, cerebral palsy, and Bayley scores for those reaching FULL-PO at a postmenstrual age (PMA) ≤ 38.0 weeks (EARLY) vs. > 38.0 weeks (LATE). Additionally, the oral feeding achieved at various timepoints between 36 and 42 weeks' postmenstrual age (PMA) was measured to construct a timeline of oral feeding acquisition. Results: Of 192 infants, 147 (77%) achieved FULL-PO EARLY and 45 (23%) LATE. Comorbidities and length of stay were higher and unadjusted Bayley scores were lower at 12 months corrected age (CA) and 24 months chronological age (CH) in the LATE group. Feeding difficulties were higher in the LATE group at 24 months CH. Infants born < 27-28 weeks GA were more likely to achieve oral feeding at a later PMA. Infants with bronchopulmonary dysplasia (BPD) had significant feeding and developmental delays. Conclusions: Establishing full oral feeds by 38.0 weeks PMA may be used as a predictor for feeding difficulties at 24 months CH. Infants born < 27-28 weeks GA and those with BPD are more likely to take extended amounts of time to achieve full oral feeding and need additional feeding support. Infants with BPD are at high risk for neurodevelopmental delays. Answer: The studies provided indicate that oral probiotics do not significantly affect growth and neurodevelopmental outcomes in very low birth weight (VLBW) preterm infants. Specifically, three studies (PUBMED:27045204, PUBMED:22566113, PUBMED:19914635) conducted prospective follow-up studies in cohorts of VLBW preterm infants who were enrolled in randomized controlled clinical trials to evaluate the efficacy of oral probiotics for the prevention of necrotizing enterocolitis (NEC). These studies assessed various outcomes, including cognitive and neuromotor development using the Bayley Scales of Infant Development II, as well as sensory and neurological performance. The primary outcomes were neurodevelopmental impairment at 18-24 months' corrected age (PUBMED:27045204, PUBMED:22566113) and at 3 years corrected age (PUBMED:19914635). Across these studies, no significant differences were found in growth or neurodevelopmental and sensory outcomes between the probiotics group and the control group. In contrast, other studies have shown that early optimal nutrition, including the use of breast milk and probiotics, can help prevent neurodevelopmental impairment caused by conditions such as extrauterine growth restriction (EUGR), NEC, sepsis, bronchopulmonary dysplasia, and retinopathy of prematurity (PUBMED:24938866). However, these benefits seem to be more related to the prevention of these conditions rather than a direct effect of probiotics on neurodevelopmental outcomes. Furthermore, a study on EUGR in preterm infants found that certain definitions of EUGR, particularly longitudinal ones, were predictive of neurodevelopmental outcomes at 2 years of corrected age (PUBMED:34243041). This suggests that growth patterns and nutritional status are important for neurodevelopmental outcomes, but the specific role of probiotics in this context is not clear from the provided abstracts. In summary, based on the abstracts provided, oral probiotics administered to VLBW infants do not appear to have a significant impact on growth and neurodevelopmental outcomes, although optimal nutrition and the prevention of NEC and other conditions are important for the neurodevelopment of preterm infants.
Instruction: Colorectal cancer with non-resectable synchronous metastases: should the primary tumor be resected? Abstracts: abstract_id: PUBMED:34012646 Efficacy of modified bevacizumab-XELOX therapy in Japanese patients with stage IV recurrent or non-resectable colorectal cancer. Background: Neoadjuvant chemotherapy (NAC) has been conducted for patients with non-resectable colorectal cancer; however, few reports of a systematic approach to NAC exist. At our hospital, bevacizumab with capecitabine and oxaliplatin (B-mab XELOX) has been used as chemotherapy for Stage IV colorectal cancer since 2014. We aimed to evaluate the efficacy and safety of NAC with a molecular-targeting agent for Stage IV colorectal cancer. Methods: A retrospective, single-institute analysis was performed including 27 patients with advanced recurrent cancer following primary tumor resection and 43 patients with non-resectable tumors and remote metastasis. At the time of resection, 17 were receiving chemotherapy. All 70 patients received at least 3 cycles of B-mab XELOX (total: 920 cycles). We determined the 1-year progression-free survival (1Y-PFS), 1-year overall survival (1Y-OS), 3Y-PFS, 3Y-OS, and number of treatment cycles. The objective response rate, clinical benefit rate, and adverse events were assessed. The number of chemotherapy cycles, survival time, and R0 surgery rate were determined for patients who underwent R0 conversion surgery. Results: The 1Y-PFS was 28.5% [median survival time (MST): 7.4 months], 1Y-OS was 76.6% (MST not reached), 3Y-PFS was 5.5% (MST: 7.4 months), and 3Y-OS was 26.4% (MST: 25.2 months). The mean and median number of cycles of B-mab XELOX was 13.1 and 10.5, respectively. The objective response rate was 28.6%, and the clinical benefit rate was 58.6%. Grade 1 or Grade 2 adverse events occurred in 60 patients (85.7%); however, they all resolved without intervention. A single Grade 4 event (perforation of the primary tumor) occurred in 1 patient (1.4%). R0 conversion surgery was performed in 7 patients (10.0%; primary + liver in 2 patients, primary + lung in 1 patient, liver in 3 patients, and primary in 1 patient). These patients received 3 to 10 cycles preoperatively (mean: 7.3; median: 6.5). R0 surgery was achieved in 5 of the 7 patients (71.4%). Postoperative survival ranged from 1 to 26 months (MST: 8 months). Conclusions: This modified regimen was safe and effective in Japanese patients, and a high quality of life/quality-adjusted life-year was achieved. To further evaluate PFS and OS, more patients are being investigated. abstract_id: PUBMED:34645173 Advances in research on neoadjuvant therapy for resectable colorectal liver metastasis. Surgery is recognized as the core treatment for colorectal liver metastasis (CRLM), but its recurrence rate remains relatively high, even for resectable CRLM. This hints that the efficacy of treatment involves not only the technical factors of surgery but also the biological behavior of the tumor. For resectable CRLM, neoadjuvant therapy is beneficial for eliminating micro-metastases, reducing the postoperative recurrence rate, screening tumor biological behavior, and improving prognosis. However, questions about which CRLM patients are suited to neoadjuvant therapy and which regimen should be used remain debatable. This paper reviews the stratified management of resectable CRLM and the choice of neoadjuvant regimen, especially the application value of targeted therapy, based on the latest guidelines and studies.
abstract_id: PUBMED:28090195 Complex Surgical Strategies to Improve Resectability in Borderline-Resectable Disease. Colorectal cancer is the third most common malignancy in the USA and continues to pose a significant epidemiologic problem, despite major advances in the treatment of patients with advanced disease. Up to 50% of patients will develop metastatic disease at some point during the course of their disease, with the liver being the most common site of metastatic disease. In this review, we address the relatively poorly defined entity of borderline-resectable colorectal liver metastases. The workup and staging of borderline-resectable disease are discussed. We then discuss management strategies, including surgical techniques and medical therapies, which are currently utilized in order to improve resectability. abstract_id: PUBMED:27465325 Radiofrequency ablation vs. hepatic resection for resectable colorectal liver metastases. The treatments of resectable colorectal liver metastases (CRLM) are controversial. This study aimed to evaluate the relative efficacy and safety of hepatic resection (HR) and radiofrequency ablation (RFA) for treating resectable CRLM. Between January 2004 and May 2010, the enrolled patients were given hepatic resection (HR group; n=32) or percutaneous RFA (RFA group; n=21) as a first-line treatment for CRLM. All the tumors had a maximum diameter of 3.5 cm and all patients had five or fewer tumors. The patient background, tumor characteristics, cumulative survival rate and recurrence-free survival rate were assessed in both groups. There were significantly more patients with comorbidities in the RFA group than in the HR group (17 in the RFA group vs. 10 in the HR group; P<0.001). The mean maximum tumor diameter in the HR group and RFA group was 2.25±0.68 and 1.89±0.62 cm (P=0.054), and the mean number of tumors was 2.28±1.05 and 2.38±1.12 (P=0.744), respectively. The 1-, 3- and 5-year cumulative survival rates in the HR group were 87.5%, 53.1% and 31.3%, respectively, and those in the RFA group were 85.7%, 38.1% and 14.2%, respectively, with the differences not being significant between the two groups (P=0.062). The 1-, 3- and 5-year recurrence-free survival rates in the HR group were 90.6%, 56.3% and 28.1%, respectively, and those in the RFA group were 76.1%, 23.8% and 4.8%, respectively, with the differences being significant between the two groups (P=0.036). In conclusion, as HR has greater efficacy than RFA in the treatment of resectable CRLM, we recommend it as the first option for this malignancy. abstract_id: PUBMED:24802519 The impact of primary tumor resection on overall survival in patients with colorectal carcinoma and unresectable distant metastases: a prospective cohort study. Background: To compare patients with primary colorectal carcinoma (CRC) and non-resectable distant metastases with or without primary colorectal tumor resection as a primary treatment in terms of postoperative mortality and overall survival (OS). Patients And Methods: The clinicopathological data of 188 CRC patients with non-resectable distant metastases were analyzed. All patient data were collected prospectively. The colorectal tumor was resected in 121 patients (64.3%). The Kaplan-Meier method was used for calculation and plotting of the OS curves of the patient groups, and the log-rank test was used for the comparison of the survival curves. The relative importance of the prognostic features was investigated using the Cox proportional hazards model.
Results: In the whole series and in the patient group undergoing emergency surgical intervention, the mortality rate was lower in patients having colorectal tumor resection compared with non-resected patients, with differences approaching the significance level (p = 0.072 and p = 0.076, respectively). Median OS time was significantly longer in the resection group (11.0 months) compared with the non-resection group (5.5 months) (p < 0.001); in the multivariate Cox analysis, colorectal tumor resection had independent prognostic significance (p < 0.001). Conclusion: Resection of the colorectal tumor in primary CRC patients with non-resectable distant metastasis gives a significant survival advantage without increasing postoperative mortality compared with non-resection. abstract_id: PUBMED:38182479 Liver transplantation as an alternative for the treatment of non-resectable liver colorectal cancer: Advancing the therapeutic algorithm. Colorectal cancer is a leading cause of cancer-related mortality, with nearly half of the affected patients developing liver metastases. For three decades, liver resection (LR) has been the primary curative strategy, yet its applicability is limited to about 20% of cases. Liver transplantation (LT) for unresectable metastases was attempted unsuccessfully in the 1990s, with high rates of perioperative death and recurrence. There is now more interest in this strategy due to improvements in systemic therapies and surgical techniques. A significant study conducted by the Oslo group showed that patients receiving liver transplants had a 60% chance of survival after five years. Significantly better results have been achieved by using advanced imaging for risk stratification and further refining selection criteria, especially in the Norwegian SECA trials. This review carefully charts the development and history of LT as a treatment option for colorectal cancer liver metastases. The revolutionary path from the early days of exploratory surgery to the current situation of cautious optimism is traced, highlighting the critical clinical developments and improved patient selection standards that have made LT a potentially curative treatment for such challenging, very well-selected cases. abstract_id: PUBMED:38335813 Preoperative chemotherapy in upfront resectable colorectal liver metastases: New elements for an old dilemma? The use of preoperative or "neoadjuvant" chemotherapy (NAC) has long been controversial for resectable colorectal liver metastases (CRLM). The European Society for Medical Oncology (ESMO) 2023 guidelines on metastatic colorectal cancer (CRC) indicate a combination of surgical/technical and oncologic/prognostic criteria as the two determinants for allocating patients to NAC or upfront hepatectomy. However, surgical and technical criteria have evolved, and oncologic prognostic criteria date from the pre-modern chemotherapy era and lack prospective validation. The traditional literature is interpreted as not supporting the use of NAC because several studies fail to demonstrate a benefit in overall survival (OS) compared to upfront surgery; however, OS may not be the most appropriate endpoint to consider. Moreover, the commonly quoted studies against NAC contain many limitations that may explain why NAC failed to demonstrate its value.
A query of the recent literature focused primarily on aspects other than OS (surgical technique, the impact of chemotherapy side effects, the histological growth pattern of metastases, or the detection of circulating tumor DNA) shows data that support a more widespread use of NAC. These findings should prompt a critical reappraisal of the use of NAC, leading to a more precise selection of patients who could benefit from it. abstract_id: PUBMED:34506756 Liver transplantation for non-resectable colorectal liver metastases: the International Hepato-Pancreato-Biliary Association consensus guidelines. Colorectal cancer is a prevalent disease worldwide, with more than 50% of patients developing metastases to the liver. Despite advances in improving resectability, most patients present with non-resectable colorectal liver metastases requiring palliative systemic therapy and locoregional disease control strategies. There is a growing interest in the use of liver transplantation to treat non-resectable colorectal liver metastases in well selected patients, leading to a surge in the number of studies and prospective trials worldwide, thereby fuelling the emerging field of transplant oncology. The interdisciplinary nature of this field requires domain-specific evidence and expertise to be drawn from multiple clinical specialities and the basic sciences. Importantly, the wider societal implication of liver transplantation for non-resectable colorectal liver metastases, such as the effect on the allocation of resources and national transplant waitlists, should be considered. To address the urgent need for a consensus approach, the International Hepato-Pancreato-Biliary Association commissioned the Liver Transplantation for Colorectal liver Metastases 2021 working group, consisting of international leaders in the areas of hepatobiliary surgery, colorectal oncology, liver transplantation, hepatology, and bioethics. The aim of this study was to standardise nomenclature and define management principles in five key domains: patient selection, evaluation of biological behaviour, graft selection, recipient considerations, and outcomes. An extensive literature review was done within the five domains identified. Between November, 2020, and January, 2021, a three-step modified Delphi consensus process was undertaken by the workgroup, who were further subgrouped into the Scientific Committee, Expert Panel, and Transplant Centre Representatives. A final consensus of 44 statements, standardised nomenclature, and a practical management algorithm is presented. Specific criteria for clinico-patho-radiological assessments with molecular profiling are crucial in this setting. After this, the careful evaluation of biological behaviour with bridging therapy to transplantation with an appropriate assessment of the response is required. The sequencing of treatment in synchronous metastatic disease requires special consideration and is highlighted here. Some ethical dilemmas within organ allocation for malignant indications are discussed and the role for extended criteria grafts, living donor transplantation, and machine perfusion technologies for non-resectable colorectal liver metastases are reviewed. Appropriate immunosuppressive regimens and strategies for the follow-up and treatment of recurrent disease are proposed.
This consensus guideline provides a framework by which liver transplantation for non-resectable colorectal liver metastases might be safely instituted and is a meaningful step towards future evidence-based practice for better patient selection and organ allocation to improve the survival of patients with this disease. abstract_id: PUBMED:26683263 Evaluation of resectability after neoadjuvant chemotherapy for primary non-resectable colorectal liver metastases: A multicenter study. Background/aim: The Kyushu Study Group of Clinical Cancer (KSCC) previously reported the safety and efficacy of neoadjuvant chemotherapy with mFOLFOX6 + bevacizumab for H2/H3 liver metastases of colorectal cancer. The aim of the current study was to evaluate the resectability of these metastases before and after chemotherapy as determined by independent liver surgeons. Methods: Between May 2008 and April 2010, 40 patients were registered in a multicenter phase 2 trial of neoadjuvant chemotherapy (KSCC 0802). In Study 1, 5 independent liver surgeons from five different KSCC centers evaluated the resectability of liver metastases of colorectal cancer based on imaging studies performed before and after chemotherapy. Each surgeon was blinded to the other surgeons' evaluations. In addition, no information about the patients' characteristics was provided. In Study 2, 3 surgeons evaluated the resectability of these lesions based on imaging studies with discussion among each other, with the surgeons being provided with information on the patients' characteristics. Results: In Study 1, 13 patients (36.1%) were evaluated to be resectable at baseline, whereas 17 patients (47.2%) were evaluated to be resectable after chemotherapy. In Study 2, 4 patients (11.1%) were evaluated to be resectable at baseline, compared to 23 patients (63.9%) after chemotherapy. Conclusion: Neoadjuvant chemotherapy with mFOLFOX6 + bevacizumab was confirmed to increase the resectability of non-resectable liver metastases of colorectal cancer according to the independent assessments of surgeons. abstract_id: PUBMED:15243315 Colorectal cancer with non-resectable synchronous metastases: should the primary tumor be resected? Objectives: In asymptomatic patients presenting with non-resectable synchronous metastatic disease from colorectal adenocarcinoma, the beneficial effect of resecting the primary tumor remains to be documented. The aim of this study was to compare survival of patients with metastatic colorectal cancer who underwent elective resection of the primary tumor to those who did not. Methods: A retrospective analysis of patients with metastatic colorectal cancer treated between June 1996 and December 1999 was performed. Overall survival was compared between patients who underwent first-line resection of the primary colorectal tumor (group 1) and those who did not undergo elective resection of the primary (group 2). The probability of surgical resection of the primary tumor for gastrointestinal complications in group 2 was evaluated. Results: Thirty-one and 23 patients were included in groups 1 and 2, respectively. Five patients (21.7%, 95% confidence interval 4.9-38.5%) in group 2 required surgical treatment for intestinal obstruction due to the primary tumor. Two clinical characteristics were significantly different between groups 1 and 2: rectal localization (9.7% versus 34.7%; P=0.03) and presence of fewer than three metastases (29.0% versus 4.3%; P=0.03). Survival curves were not significantly different (log-rank test).
Median duration of survival was 21 and 14 months, respectively (P=0.718). Conclusion: In patients with non-resectable synchronous metastatic disease, non-surgical management of the primary tumor is a rational alternative if the patient is asymptomatic. A prospective randomized trial integrating the quality-of-life factor should be organized. Answer: The question of whether to resect the primary tumor in patients with colorectal cancer and non-resectable synchronous metastases is debated. Some studies suggest that resecting the primary tumor can provide a survival advantage without increasing postoperative mortality compared to non-resection. For instance, a prospective cohort study found that median overall survival (OS) time was significantly longer in the resection group (11.0 months) compared with the non-resection group (5.5 months), and colorectal tumor resection had independent prognostic significance (PUBMED:24802519). However, another study indicated that in patients with non-resectable synchronous metastatic disease, non-surgical management of the primary tumor is a rational alternative if the patient is asymptomatic, as survival curves were not significantly different between those who underwent first-line resection of the primary colorectal tumor and those who did not (PUBMED:15243315). The decision to resect the primary tumor should be individualized, taking into account the patient's clinical situation, the potential risks and benefits of surgery, and the presence of symptoms. In some cases, resection may be considered to prevent future complications such as obstruction or bleeding, even if it does not necessarily improve survival outcomes. It is important to note that advances in neoadjuvant therapies and liver transplantation are also influencing the management of colorectal liver metastases. Neoadjuvant chemotherapy can increase the resectability of non-resectable liver metastases (PUBMED:26683263), and liver transplantation has emerged as a potential curative treatment for very well selected cases of non-resectable colorectal liver metastases (PUBMED:38182479; PUBMED:34506756). In conclusion, the decision to resect the primary tumor in the context of non-resectable synchronous metastases should be made on a case-by-case basis, considering the latest evidence, the patient's overall health, the expected quality of life, and the availability of other therapeutic options.
Instruction: Expansion of tandem repeats in sea anemone Nematostella vectensis proteome: A source for gene novelty? Abstracts: abstract_id: PUBMED:20003297 Expansion of tandem repeats in sea anemone Nematostella vectensis proteome: A source for gene novelty? Background: The complete proteome of the starlet sea anemone, Nematostella vectensis, provides insights into gene invention dating back to the Cnidarian-Bilaterian ancestor. With the addition of the complete proteomes of Hydra magnipapillata and Monosiga brevicollis, the investigation of proteins having unique features in early metazoan life has become practical. We focused on the properties and the evolutionary trends of tandem repeat (TR) sequences in Cnidaria proteomes. Results: We found that 11-16% of N. vectensis proteins contain tandem repeats. Most TRs cover 150 amino acid segments that are composed of basic units of 5-20 amino acids. In total, the N. vectensis proteome has about 3300 unique TR-units, but only a small fraction of them are shared with H. magnipapillata, M. brevicollis, or mammalian proteomes. The overall abundance of these TRs stands out relative to that of 14 proteomes representing the diversity among eukaryotes and within the metazoan world. TR-units are characterized by a unique composition of amino acids, with cysteine and histidine being over-represented. Structurally, most TR-segments are associated with coiled and disordered regions. Interestingly, 80% of the TR-segments can be read in more than one open reading frame. For over 100 of them, translation of the alternative frames would result in long proteins. Most domain families that are characterized as repeats in eukaryotes are found in the TR-proteomes from Nematostella and Hydra. Conclusions: While most TR-proteins have originated from prediction tools and are still awaiting experimental validation, supportive evidence exists for hundreds of TR-units in Nematostella. The existence of TR-proteins in early metazoan life may have served as a robust mode for novel genes with previously overlooked structural and functional characteristics. abstract_id: PUBMED:27216622 Do novel genes drive morphological novelty? An investigation of the nematosomes in the sea anemone Nematostella vectensis. Background: The evolution of novel genes is thought to be a critical component of morphological innovation but few studies have explicitly examined the contribution of novel genes to the evolution of novel tissues. Nematosomes, the free-floating cellular masses that circulate through the body cavity of the sea anemone Nematostella vectensis, are the defining apomorphy of the genus Nematostella and are a useful model for understanding the evolution of novel tissues. Although many hypotheses have been proposed, the function of nematosomes is unknown. To gain insight into their putative function and to test hypotheses about the role of lineage-specific genes in the evolution of novel structures, we have re-examined the cellular and molecular biology of nematosomes. Results: Using behavioral assays, we demonstrate that nematosomes are capable of immobilizing live brine shrimp (Artemia salina) by discharging their abundant cnidocytes. Additionally, the ability of nematosomes to engulf fluorescently labeled bacteria (E. coli) reveals the presence of phagocytes in this tissue.
Using RNA-Seq, we show that the gene expression profile of nematosomes is distinct from that of the tentacles and the mesenteries (their tissue of origin) and, further, that nematosomes (a Nematostella-specific tissue) are enriched in Nematostella-specific genes. Conclusions: Despite the small number of cell types they contain, nematosomes are distinct among tissues, both functionally and molecularly. We provide the first evidence that nematosomes comprise part of the innate immune system in N. vectensis, and suggest that this tissue is potentially an important place to look for genes associated with pathogen stress. Finally, we demonstrate that Nematostella-specific genes comprise a significant proportion of the differentially expressed genes in all three of the tissues we examined and may play an important role in novel cell functions. abstract_id: PUBMED:35359322 Manipulation of Gene Activity in the Regenerative Model Sea Anemone, Nematostella vectensis. With a surprisingly complex genome and an ever-expanding genetic toolkit, the sea anemone Nematostella vectensis has become a powerful model system for the study of both development and whole-body regeneration. Here we provide the most current protocols for short-hairpin RNA (shRNA)-mediated gene knockdown and CRISPR/Cas9-targeted mutagenesis in this system. We further show that a simple Klenow reaction followed by in vitro transcription allows for the production of gene-specific shRNAs and single guide RNAs (sgRNAs) in a fast, affordable, and readily scalable manner. Together, shRNA knockdown and CRISPR/Cas9-targeted mutagenesis allow for rapid screens of gene function as well as the production of stable mutant lines that enable functional genetic analysis throughout the Nematostella life cycle. abstract_id: PUBMED:33074544 Transcriptomic Analysis in the Sea Anemone Nematostella vectensis. The sea anemone Nematostella vectensis is an emerging research model to study embryonic development and regeneration at the molecular and global transcriptomic level. Transcriptomics analysis is now routinely used to detect differential expression at the genome level. Here we present the latest procedures for isolating high-quality RNA required for next generation sequencing, as well as methods and resources for quantifying transcriptomic data. abstract_id: PUBMED:26633371 Characterization of Morphological and Cellular Events Underlying Oral Regeneration in the Sea Anemone, Nematostella vectensis. Cnidarians, the extant sister group to Bilateria, are well known for their impressive regenerative capacity. The sea anemone Nematostella vectensis is a well-established system for the study of development and evolution that is receiving increased attention for its regenerative capacity. Nematostella is able to regrow missing body parts within five to six days after its bisection, yet studies describing the morphological, cellular, and molecular events underlying this process are sparse and very heterogeneous in their experimental approaches. In this study, we lay down the basic framework to study oral regeneration in Nematostella vectensis. Using various imaging and staining techniques, we characterize in detail the morphological, cellular, and global molecular events that define specific landmarks of this process. Furthermore, we describe in vivo assays to evaluate wound healing success and the initiation of pharynx reformation.
Using our described landmarks for regeneration and in vivo assays, we analyze the effects of perturbing either transcription or cellular proliferation on the regenerative process. Interestingly, neither of these experimental perturbations has major effects on wound closure, although they slightly delay or partially block it. We further show that while the inhibition of transcription blocks regeneration at a very early step, inhibiting cellular proliferation only affects later events such as pharynx reformation and tentacle elongation. abstract_id: PUBMED:33417693 The starlet sea anemone, Nematostella vectensis, possesses body region-specific bacterial associations with spirochetes dominating the capitulum. Sampling of different body regions can reveal highly specialized bacterial associations within the holobiont and facilitate identification of core microbial symbionts that would otherwise be overlooked by bulk sampling methods. Here, we characterized compartment-specific associations present within the model cnidarian Nematostella vectensis by dividing its morphology into three distinct microhabitats. This sampling design allowed us to uncover a capitulum-specific dominance of spirochetes within N. vectensis. Bacteria from the family Spirochaetaceae made up 66% of the community in the capitulum, while only representing 1.2% and 0.1% of the communities in the mesenteries and physa, respectively. A phylogenetic analysis of the predominant spirochete sequence recovered from N. vectensis showed a close relation to spirochetes previously recovered from wild N. vectensis. These sequences clustered closer to the recently described genus Oceanispirochaeta, rather than Spirochaeta perfilievii, supporting them as members of this clade. This suggests a prevalent and yet uncharacterized association between N. vectensis and spirochetes from the order Spirochaetales. abstract_id: PUBMED:25984291 Sex-specific and developmental expression of Dmrt genes in the starlet sea anemone, Nematostella vectensis. Background: The molecular mechanisms underlying sex determination and differentiation in animals are incredibly diverse. The Dmrt (doublesex and mab-3 related transcription factor) gene family is an evolutionarily ancient group of transcription factors dating to the ancestor of metazoans that are, in part, involved in sex determination and differentiation in numerous bilaterian animals and thus represents a potentially conserved mechanism for differentiating males and females dating to the protostome-deuterostome ancestor. Recently, the diversity of this gene family throughout animals has been described, but the expression and potential function of Dmrt genes are not well understood outside the bilaterians. Results: Here, we report sex- and developmental-specific expression of all 11 Dmrts in the starlet sea anemone Nematostella vectensis. Nine out of the eleven Dmrts showed significant differences in developmental expression, with the highest expression typically in the adult stage and, in some cases, with little or no expression measured during embryogenesis. When expression was compared in females and males, seven of the eleven Dmrt genes had significant differences in expression, with higher expression in males than in females for six of the genes. Lastly, the expression of two Dmrt genes that are differentially expressed between the sexes is localized to the mesenteries and the pharynx in polyps. Conclusions: Our results show that the phylogenetic diversity of Dmrt genes in N.
vectensis is matched by an equally diverse pattern of expression during development and in each sex. This dynamic expression suggests multiple functions for Dmrt genes likely present in early diverging metazoans. Detailed functional analyses of individual genes will inform hypotheses regarding the antiquity of function for these transcription factors. abstract_id: PUBMED:37986927 LRRK2 kinase activity is necessary for development and regeneration in Nematostella vectensis. Background: The starlet sea anemone, Nematostella vectensis, is an emerging model organism with a high regenerative capacity, which was recently found to possess an orthologue to the human LRRK2 gene (nvLRRK2). The leucine-rich repeat kinase 2 (LRRK2) gene, when mutated, is the most common cause of inherited Parkinson's Disease (PD). Its protein product (LRRK2) has implications in a variety of cellular processes; however, the full function of LRRK2 is not well established. Current research is focusing on understanding the function of LRRK2, including both its physiological role and its pathobiological underpinnings. Methods: We used bioinformatics to determine the cross-species conservation of LRRK2, then applied drugs targeting the kinase activity of LRRK2 to examine its function in development, homeostasis and regeneration in Nematostella vectensis. Results: An in silico characterization and phylogenetic analysis of nvLRRK2 comparing it to human LRRK2 highlighted key conserved motifs and residues. In vivo analyses inhibiting the kinase function of this enzyme demonstrated a role of nvLRRK2 in development and regeneration of N. vectensis. These findings implicate a developmental role of LRRK2 in Nematostella, adding to the expanding knowledge of its physiological function. Conclusions: Our work introduces a new model organism with which to study LRRK biology. We show a necessity for LRRK2 in development and regeneration. Given the short generation time, genetic tractability and in vivo imaging capabilities, this work introduces Nematostella vectensis as a new model in which to study genes linked to neurodegenerative diseases such as Parkinson's. abstract_id: PUBMED:35359334 Creating a User-Friendly and Open-Access Gene Expression Database for Comparing Embryonic Development and Regeneration in Nematostella vectensis. The sea anemone Nematostella vectensis has emerged as a powerful research model to understand, at the gene regulatory network level, to what extent regeneration recapitulates embryonic development. Such comparison involves massive transcriptomic analysis, a routine approach for identifying differential gene expression. Here we present a workflow to build a user-friendly, mineable, and open-access database providing the scientific community with access to various RNA-seq datasets. abstract_id: PUBMED:26365810 The making of an embryo in a basal metazoan: Proteomic analysis in the sea anemone Nematostella vectensis. Cnidarians are widely distributed basal metazoans that play an important role in the marine ecosystem. Their genetic diversity and dispersal depend on successful oogenesis, fertilization and embryogenesis. To understand the processes that lead to successful embryogenesis in these basal organisms, we conducted comparative proteomics on the model sea anemone Nematostella vectensis. We examined four developmental stages from oocyte maturation through early embryogenesis, as well as the oocyte jelly sac in which fertilization and embryogenesis take place.
Our analysis revealed 37 stage-specifically expressed proteins, including cell-cycle, cellular-energy-related and DNA-replication proteins, as well as transcription regulators. Using in situ hybridization, we show that within the mesenteria, two cell types support successful oocyte development and embryogenesis. Large somatic supporting cells synthesize vitellogenin, the most abundant egg yolk protein within the oocyte, whereas mesenteria gland cells synthesize mucin 5B, which was found to be the main component of the jelly sac. These findings shed light on the sexual reproduction program in cnidarians and suggest high conservation with the proteins governing oogenesis in Bilateria. Answer: The expansion of tandem repeats (TRs) in the proteome of the sea anemone Nematostella vectensis has been identified as a potential source for gene novelty. The investigation into the N. vectensis proteome revealed that 11-16% of its proteins contain TRs, with most TRs covering 150 amino acid segments composed of basic units of 5-20 amino acids. The N. vectensis proteome contains approximately 3300 unique TR-units, but only a small fraction of these are shared with the proteomes of Hydra magnipapillata, Monosiga brevicollis, or mammals. These TRs are characterized by a unique composition of amino acids, particularly with an over-representation of cysteine and histidine, and are often associated with coiled and disordered regions. Additionally, 80% of the TR-segments can be read in more than one open reading frame, suggesting the potential for alternative protein products. The presence of TR-proteins in early metazoan life, such as Nematostella, may have served as a robust mode for the emergence of novel genes with structural and functional characteristics that were previously overlooked (PUBMED:20003297).
Instruction: Can measures of limb loading and dynamic stability during the squat maneuver provide an index of early functional recovery after unilateral total hip arthroplasty? Abstracts: abstract_id: PUBMED:24953251 Can measures of limb loading and dynamic stability during the squat maneuver provide an index of early functional recovery after unilateral total hip arthroplasty? Objective: To investigate limb loading and dynamic stability during squatting in the early functional recovery of patients who had undergone total hip arthroplasty (THA). Design: Cohort study. Setting: Inpatient rehabilitation clinic. Participants: Of the total participants (N=99), a random sample of patients who had undergone THA (n=61; 34 men and 27 women; mean age, 62±9y; weight, 77±14kg; height, 174±9 cm) was assessed twice, 13.2±3.8 days (t1) and 26.6±3.3 days postsurgery (t2), and compared with a healthy reference group (n=38; 22 men and 16 women; mean age, 47±12y; weight, 78±20kg; height, 175±10cm). Interventions: Patients who had undergone THA received 2 weeks of standard inpatient rehabilitation. Main Outcome Measures: Interlimb vertical force distribution and dynamic stability during the squat maneuver, as defined by the root mean square of the center of pressure in anteroposterior and mediolateral directions, of operated and nonoperated limbs. Self-reported function was assessed via the Function Assessment Questionnaire Hannover for Osteoarthritis. Results: At t1, unloading of the operated limb was 15.8% greater (P<.001; d=1.070) and anteroposterior and mediolateral center of pressure root mean square values were 30% to 34% higher in patients who had undergone THA than in the healthy reference group (P<.05). Unloading was reduced by 12.8% toward a more equal distribution from t1 to t2 (P<.001; d=.874). Although mediolateral stability improved between t1 and t2 (operated limb: 14.8%; P=.024; d=.397; nonoperated limb: 13.1%; P=.015; d=.321), anteroposterior stability was not significantly different. Self-reported physical function improved by 15.8% (P<.001; d=.965). Conclusions: Patients who had undergone THA unload the operated limb and are dynamically more unstable during squatting in the early rehabilitation phase after THA than are healthy adults. Although loading symmetry and mediolateral stability improved to the level of healthy adults with rehabilitation, anteroposterior stability remained impaired. Measures of dynamic stability and load symmetry during squatting provide quantitative information that can be used to clinically monitor early functional recovery from THA. abstract_id: PUBMED:29536559 Abnormal loading of the hip and knee joints in unilateral hip osteoarthritis persists two years after total hip replacement. A total hip replacement (THR) is a common and routine procedure to reduce pain and restore normal activity. Gait analysis can provide insights into functional characteristics and the dynamic joint loading situation not identifiable by clinical examination or static radiographic measures. The present prospective longitudinal study tested whether 2 years after surgery a THR would restore dynamic loading of the knee and hip joints in the frontal plane to normal. Instrumented gait analysis was performed shortly before surgery and approximately 2 years after THR on 15 unilateral hip osteoarthritis (OA) patients. Fifteen asymptomatic matched individuals were recruited as healthy controls. Results showed that abnormal joint loading persisted 2 years after THR.
The 2nd external knee adduction moment in terminal stance was lower in the affected (-34%, p = 0.002, d = 1.22) and non-affected limb (-25%, p = 0.035, d = 0.81) compared to controls and thus indicated a shift in the knee joint load distribution from medial to lateral. A correlation analysis revealed that a smaller hip range of motion explained 46% of 2nd knee adduction moment alterations. In contrast, the 2nd external hip adduction moment in terminal stance was postoperatively higher in the affected (+22%, p = 0.007, d = 1.04) and non-affected limb (+22%, p = 0.005, d = 1.05). Here, 51% of 2nd hip adduction moment alterations can be explained by a greater hip adduction angle. Patients with a THR may therefore be at higher risk for abnormal joint loading and thus for the development of OA in other joints of the lower extremities. abstract_id: PUBMED:31442926 Functional and postural recovery after bilateral or unilateral total hip arthroplasty. One-stage bilateral total hip arthroplasty (THA) implies a similar complication rate and hospitalization time to unilateral THA, but no studies have evaluated the functional and postural recovery in these patients. The aim of this study was to assess short-term functional and postural recovery in patients after one-stage bilateral or unilateral THA. Forty patients undergoing bilateral (n = 20) or unilateral (n = 20) THA were assessed by Timed Up and Go (TUG), Numeric Rating Scale (NRS), Tampa Scale of Kinesiophobia (TSK) and Body Weight Distribution Symmetry Index (BWDSI) during stand-to-sit (STS). Centre of Pressure (CoP) parameters and BWDSI during standing with eyes open (EO) and closed (EC) were also assessed. Data were collected one day before surgery and at three and seven days after surgery. No between-group differences were found for TUG, NRS and TSK at any time-point, showing similar mobility, pain and fear of movement in both groups. BWDSI during STS (P = 0.001) and standing (EO P = 0.007; EC P = 0.012) revealed differences over time in favor of patients with bilateral THA, who showed better symmetry in weight distribution. Shorter CoP path length was observed during standing in patients with unilateral THA (EO P = 0.023; EC P = 0.018), who mainly used their non-affected limb to maintain balance. abstract_id: PUBMED:33562021 Excellent Functional Outcome and Quality of Life after Primary Cementless Total Hip Arthroplasty (THA) Using an Enhanced Recovery Setup. Background: Total hip arthroplasty combined with the concept of enhanced recovery is of continued worldwide interest, as it is reported to improve early functional outcome and treatment quality without increasing complications. The aim of the study was to investigate functional outcome and quality of life 4 weeks and 12 months after cementless total hip arthroplasty in combination with an enhanced recovery concept. Methods: A total of 109 patients underwent primary cementless Total Hip Arthroplasty (THA) in an enhanced recovery concept and were retrospectively analyzed. After 4 weeks and 12 months, clinical examination was analyzed regarding function, pain and satisfaction; results were evaluated using the Harris Hip score, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), EQ-5D-5L, EQ-VAS and subjective patient-related outcome measures (PROMs). Preoperatively, HADS (Hospital Anxiety and Depression Scale) was collected.
A correlation analysis of age, American Society of Anesthesiologists (ASA) status, HADS and comorbidities (diabetes mellitus, arterial hypertension, cardiovascular disease) with WOMAC, Harris Hip score (HHS) and EQ-5D was performed. Results: Patients showed a significant improvement in Harris Hip score 4 weeks and 12 months postoperatively (p < 0.001). WOMAC total score, subscale pain, subscale stiffness and subscale function improved significantly from preoperative to 12 months postoperative (p < 0.001). EQ-5D showed a significant improvement from preoperative to postoperative (p < 0.001). The influence of anxiety or depression (HADS-A or HADS-D) on functional outcome could not be determined. Patient satisfaction was high postoperatively, and almost 100% of patients would choose enhanced recovery surgery again. Conclusion: Cementless THA with the concept of enhanced recovery improves early clinical function and quality of life. PROMs showed a continuous improvement over a follow-up of 12 months after surgery. PROMs can help patients and surgeons to modify expectations and improve patient satisfaction. abstract_id: PUBMED:15013502 Dynamic stability during walking following unilateral total hip arthroplasty. The purpose of this study was to examine dynamic stability, defined as the vertical projection of the center of mass (COM) to the base of support (BOS) mediolaterally during walking, in 16 healthy persons and 16 persons with unilateral total hip arthroplasty (THA). There was a significant effect of side on double limb support (DLS) within the healthy group and between groups, but no significant effect for single limb support. The dynamic stability pattern for the THA group was to hold the COM in the midline during a longer DLS phase, demonstrating a different motor control strategy compared to healthy adults. abstract_id: PUBMED:32237927 Prognostic factors for inpatient functional recovery following total hip and knee arthroplasty: a systematic review. Background and purpose - Essential for safe and timely hospital discharge, inpatient functional recovery following lower limb arthroplasty is also variable. A previous systematic review reported moderate and conflicting levels of evidence regarding patient-related predictors of inpatient recovery for primary total hip arthroplasty (THA). A systematic review of surgical prognostic factors for inpatient recovery following THA or total knee arthroplasty (TKA) is yet to be undertaken. We identified patient and surgical prognostic factors for inpatient functional recovery following THA and TKA; determined whether inpatient functional recovery varies between these procedures; and established whether validated outcome measures relevant to the patient's functional requirements for hospital discharge are routinely assessed. Patients and methods - Critical Appraisal Skills Programme checklists assessed methodological quality, and a best-evidence synthesis approach determined the levels of evidence supporting individual prognostic factors. PubMed, CINAHL, Embase, Scopus, and PEDro databases were searched from inception to May 2019. Included studies examined patient or surgical prognostic factors and a validated measure of post-operative function within 2 weeks of primary, unilateral THA or TKA. Results - Comorbidity status and preoperative function are supported by a strong level of evidence for TKA. For THA, no strong level of evidence was found for patient-related prognostic factors, and no surgical factors were independently prognostic for either arthroplasty site.
Limited evidence supports fast-track protocols in the TKA population. Interpretation - Preoperative screening and optimization are recommended. Assessment of Enhanced Recovery Pathways using validated outcome measures appropriate for the early postoperative period is warranted. abstract_id: PUBMED:30186905 Total hip arthroplasty and femoral nail lengthening for hip dysplasia and limb-length discrepancy. The application of distraction osteogenesis through the use of magnet-operated, remote-controlled intramedullary lengthening continues to provide new opportunities for accurate limb equalization. While limb-length discrepancy and deformity can be addressed by total hip arthroplasty alone, the magnitude of correction is limited by the soft-tissue envelope and complications such as sciatic nerve palsy. This 3-patient case series presents the combination of staged ipsilateral total hip arthroplasty and retrograde intramedullary femoral nail lengthening for the correction of both deformity and limb-length discrepancy. Our results report leg-length equalization, independent ambulation without assistive devices, and excellent bone and functional outcomes without complications, demonstrating that this combined technique can be used to achieve targeted lengthening and deformity correction. abstract_id: PUBMED:26797005 Lower limb length and offset in total hip arthroplasty. Restoration of normal hip biomechanics is a key goal of total hip arthroplasty (THA) and favorably affects functional recovery. Furthermore, a major concern for both the surgeon and the patient is preservation or restoration of limb length equality, which must be achieved without compromising the stability of the prosthesis. Here, definitions are given for anatomic and functional limb length discrepancies and for femoral and hip offset, determined taking anteversion into account. Data on the influence of operated-limb length and offset on patient satisfaction, hip function, and prosthesis survival after THA are reviewed. Errors may adversely impact function, quality of life, and prosthetic survival and may also generate conflicts between the surgeon and patient. Surgeons rely on two- or three-dimensional preoperative templating and on intraoperative landmarks to manage offset and length. Accuracy can be improved by using computer-assisted planning or surgery and the more recently introduced EOS imaging system. The prosthetic armamentarium now includes varus-aligned and lateralized implants, as well as implants with modular or custom-made necks, which allow restoration of the normal hip geometry, most notably in patients with coxa vara or coxa valga. Femoral anteversion must also receive careful attention. The most common errors are limb lengthening and a decrease in hip offset. When symptoms are caused by an error in length and/or offset, revision arthroplasty may deserve consideration. abstract_id: PUBMED:29966959 Asymmetrical loading during sit-to-stand movement in patients 1 year after total hip arthroplasty. Background: Asymmetrical loading during functional performance can occur after total hip arthroplasty. We hypothesized that during sit-to-stand movements, the loading of the limbs of patients who undergo total hip arthroplasty is more asymmetrical than that of those who do not. The objective of the present study was to compare asymmetrical loading during the sit-to-stand movements of patients at 1 year after undergoing total hip arthroplasty, and that of healthy adults.
Methods: Twenty-eight patients at 1 year after undergoing total hip arthroplasty and 16 healthy adults were included. We measured the vertical ground reaction force during the sit-to-stand movement for each leg and calculated the ratio of the peak vertical ground reaction force, and the ratio at the time of peak magnitude of the non-operated limb (control, right limb). Findings: The mean peak vertical ground reaction force ratio of the patients was 0.77 (0.2), and it was significantly lower than that of healthy adults. The vertical ground reaction force ratio for these patients at the time of peak magnitude of the non-operated limb was 0.72 (0.2), and was significantly lower than that of healthy adults. Interpretation: Loading on the operated leg during the sit-to-stand movement was lower than that on the non-operated leg in patients at 1 year after undergoing total hip arthroplasty. Furthermore, loading during the sit-to-stand movement of patients 1 year after undergoing total hip arthroplasty was more asymmetrical than that of healthy adults. Even at 1 year after undergoing total hip arthroplasty, these patients performed the sit-to-stand movement asymmetrically. abstract_id: PUBMED:32033550 The maximal gait speed is a simple and useful prognostic indicator for functional recovery after total hip arthroplasty. Purpose: The present study aimed to compare the capabilities of preoperative usual and maximal gait speeds in predicting functional recovery in patients who have undergone total hip arthroplasty (THA). Methods: Primary and unilateral THAs were performed in 317 patients, and the proportion of patients who achieved unassisted walking (functional recovery) 5 days postoperatively was recorded as an outcome measure. Preoperative functional assessment included hip pain, leg muscle strength, range of motion (ROM), and gait speed evaluations. The capabilities of preoperative usual and maximal gait speeds in predicting functional recovery were compared based on the areas under the curves (AUCs) of receiver operating characteristic (ROC) curves. Further, ROC curves were constructed using two models: 1. a model of gait speed only and 2. a clinical model including age, sex, leg muscle strength, and ROM. Results: Comparing AUCs for the prediction of functional recovery, maximal gait speed outperformed usual gait speed (AUCs of 0.70 and 0.66, respectively). The AUC for maximal gait speed was as large as that of the clinical model (0.70 and 0.70, respectively). Conclusion: Our results suggest that maximal gait speed is a simple and useful prognostic indicator of functional recovery in patients who have undergone THA. Answer: Yes, measures of limb loading and dynamic stability during the squat maneuver can provide an index of early functional recovery after unilateral total hip arthroplasty (THA). A study found that patients who had undergone THA exhibited greater unloading of the operated limb and were dynamically more unstable during squatting in the early rehabilitation phase compared to healthy adults. However, with rehabilitation, loading symmetry and mediolateral stability improved to the level of healthy adults, although anteroposterior stability remained impaired. These measures of dynamic stability and load symmetry during squatting can be used to clinically monitor early functional recovery from THA (PUBMED:24953251).
Additionally, other studies have shown that abnormal joint loading can persist even two years after THA, indicating a shift in knee joint load distribution from medial to lateral, which may put patients at higher risk for the development of osteoarthritis in other lower extremity joints (PUBMED:29536559). Furthermore, patients who underwent bilateral THA showed better symmetry in weight distribution during stand-to-sit movements and standing with eyes open and closed compared to those with unilateral THA (PUBMED:31442926). Overall, these findings suggest that while there is improvement in limb loading and dynamic stability following THA, some abnormalities may persist, and therefore, these measures can serve as valuable indices for assessing functional recovery and guiding rehabilitation efforts post-surgery.
Instruction: Could hyaluronic acid (HA) reduce Bacillus Calmette-Guérin (BCG) local side effects? Abstracts: abstract_id: PUBMED:25123116 Could hyaluronic acid (HA) reduce Bacillus Calmette-Guérin (BCG) local side effects? Results of a pilot study. Background: Bacillus Calmette-Guérin (BCG) is considered the most effective treatment to reduce recurrence and progression of non-muscle invasive bladder cancer (NMIBC) but can induce local side effects leading to treatment discontinuation or interruption. The aim of this exploratory study is to investigate whether the sequential administration of hyaluronic acid (HA) may reduce local side effects of BCG. Methods: 30 consecutive subjects undergoing BCG intravesical administration for high-risk NMIBC were randomized to receive BCG only (Group A) or BCG and HA (Group B). A 1 to 10 Visual Analog Scale (VAS) for bladder pain, International Prostate Symptom Score (IPSS) and number of micturitions per day were evaluated in the two groups before and after six weekly BCG instillations. Patients were also evaluated at 3 and 6 months by means of cystoscopy and urine cytology. Results: One out of 30 (3.3%) patients in group A dropped out of the protocol because of local side effects. Mean VAS for pain was significantly lower in group B after BCG treatment (4.2 vs. 5.8, p = 0.04). Post- vs. pre-treatment differences in VAS for pain, IPSS and number of daily micturitions were all significantly lower in group B. Three patients in group A and 4 in group B presented with recurrent pathology at 6-month follow-up. Conclusions: These preliminary data suggest a possible role of HA in reducing BCG local side effects and could be used to design larger randomized controlled trials assessing safety and efficacy of sequential BCG and HA administration. Trial Registration: NCT02207608 (ClinicalTrials.gov) 01/08/2014. Policlinico Tor Vergata Ethics Committee, resolution no. 69-2011. abstract_id: PUBMED:29633792 Intravesical administration of combined hyaluronic acid and chondroitin sulfate can improve symptoms in patients with refractory bacillus Calmette-Guerin-induced chemical cystitis: Preliminary experience with one-year follow-up. Objective: We investigated the efficacy of intravesical instillations of combined hyaluronic acid (HA) and chondroitin sulphate (CS) in patients with bacillus Calmette-Guérin (BCG)-induced chemical cystitis unresponsive to first-line therapies. Patients And Methods: We retrospectively reviewed the clinical records of patients with grade 2 BCG-induced chemical cystitis unresponsive to first-line therapeutic options performed according to the International Bladder Cancer Group guidelines who underwent intravesical instillations of HA/CS. Bladder pain, urinary urgency, voiding volume and number of voids/24 hours recorded prior to treatment, at the end of the treatment, at six months and at one-year follow-up were analyzed. Results: The records of 20 patients were identified. All patients underwent eight weekly instillations of HA/CS. Mean baseline visual analogue scale (VAS) scores ± Standard Deviation (SD) for urinary urgency and bladder pain were 7.8 ± 0.5 and 7.2 ± 1.0, respectively. Mean number of voids/24 hours ± SD was 15.4 ± 2.3 and mean urine volume per void ± SD was 85.8 ± 21.0 mL. At the end of the treatment, mean VAS scores ± SD for urgency and pain significantly decreased to 4.7 ± 1.1 and 4.2 ± 0.9, respectively (p < 0.05 in both cases).
Mean number of voids/24 hours ± SD decreased to 9.6 ± 1.4 (p < 0.05) and mean urine volume per void ± SD significantly increased to 194.1 ± 59.5 mL (p < 0.05). At six months and one-year follow-up, all outcome measures remained stable. Conclusions: Bladder instillations of HA/CS provide significant and durable improvement of bladder pain, urinary urgency, urinary volume per void and urinary frequency in patients with refractory BCG-induced chemical cystitis. abstract_id: PUBMED:30870228 A systematic review of preventive and therapeutic options for symptoms of cystitis in patients with bladder cancer receiving intravesical bacillus Calmette-Guérin immunotherapy. Local adverse effects are the most common clinical issues in patients with bladder cancer receiving intravesical BCG immunotherapy. The aim of this systematic review was to present available options for prevention and treatment of cystitis symptoms related to bacillus Calmette-Guérin (BCG) intravesical instillations. A literature search within the Medline database was conducted in June 2018 with the following search terms: adverse events, Bacillus Calmette-Guerin, BCG, bladder cancer, cystitis, dose, dwell time, dysuria, frequency, intravesical instillations, haematuria, pain, side effects, toxicity and urgency. Eighteen relevant original articles were identified, including 15 randomized controlled trials. Potentially effective options to prevent symptoms of cystitis are BCG dose reduction, intravesical hyaluronic acid instillations and oral prulifloxacin or ofloxacin administration. For the treatment of BCG-related cystitis, available options include oral pentosan polysulphate or a combination of intravesical hyaluronic acid and chondroitin sulphate. The included studies were characterized by high heterogeneity in terms of BCG strains, schedules and endpoints. Studies on treatment of BCG-related cystitis included only a small number of patients. Studies on directed medical interventions did not consider the influence on BCG efficacy. Among the few proposed preventive or therapeutic options for symptoms of cystitis related to BCG, none was proven to be both definitively effective and oncologically safe. abstract_id: PUBMED:21167517 Paclitaxel-hyaluronic acid for intravesical therapy of bacillus Calmette-Guérin refractory carcinoma in situ of the bladder: results of a phase I study. Purpose: Carcinoma in situ represents high-grade anaplasia of the bladder mucosa. Intravesical immunotherapy with bacillus Calmette-Guérin is the gold standard treatment for patients with carcinoma in situ. Patients with carcinoma in situ refractory to bacillus Calmette-Guérin are candidates for major surgery such as radical cystectomy. We identified the maximum tolerated dose and the recommended dose, and evaluated the safety profile of a paclitaxel-hyaluronic acid bioconjugate given by intravesical instillation to patients with carcinoma in situ refractory to bacillus Calmette-Guérin. Materials And Methods: A total of 16 patients with carcinoma in situ refractory to bacillus Calmette-Guérin were enrolled in a phase I, open label, single institution study. A minimum of 3 eligible patients were included per dose level. Paclitaxel-hyaluronic acid solution (ONCOFID-P-B™) was administered for 6 consecutive weeks. The primary objective was to identify the maximum tolerated dose and the recommended dose. As secondary objectives the safety profile of ONCOFID-P-B, the pharmacokinetic profile after each instillation and the tumor response were also evaluated.
Results: No dose-limiting toxicity occurred at any drug level evaluated. The plasma levels of the study drug were always below the lower limit of quantification at all tested doses after each instillation. A total of 11 adverse events were reported by 7 patients and 9 (60%) showed complete treatment response. Conclusions: Intravesical instillation of ONCOFID-P-B for carcinoma in situ refractory to bacillus Calmette-Guérin showed minimal toxicity and no systemic absorption in the first human intravesical clinical trial to our knowledge. Finally, satisfactory response rates were observed. abstract_id: PUBMED:28390901 BCG vaccine powder-laden and dissolvable microneedle arrays for lesion-free vaccination. Live attenuated Bacille Calmette-Guerin (BCG) bacillus is the only licensed vaccine for tuberculosis prevention worldwide to date. It must be delivered intradermally to be effective, which causes severe skin inflammation and sometimes permanent scars. To minimize the side effects, we developed a novel microneedle array (MNA) that could deliver live attenuated freeze-dried BCG powder into the epidermis in a painless, lesion-free, and self-applicable fashion. The MNA was fabricated with biocompatible and dissolvable hyaluronic acid with a deep cave formed in the basal portion of each microneedle, into which BCG powder could be packaged directly. Viability of BCG vaccine packaged in the caves and the mechanical strength of the powder-laden MNA did not change significantly before and after more than two months of storage at room temperature. Following insertion of the MNA into the skin, individual microneedle shafts were dissolved by interstitial fluid from the epidermis and upper dermis, exposing the powder to epidermal tissues. The powder absorbed interstitial fluid, dissolved slowly, and diffused into the epidermis in a day against the interstitial fluid influx. Vaccination with BCG-MNA caused no overt skin irritation, in marked contrast to intradermal vaccination that provoked severe inflammation and bruising. While causing little skin irritation, vaccination efficacy of BCG-MNAs was comparable to that of intradermal immunization whether it was evaluated by humoral or cellular immunity. This powder-laden and dissolvable MNA represents a novel technology to sufficiently deliver live attenuated vaccine powders into the skin. abstract_id: PUBMED:7129597 Increased hyaluronic acid is associated with dermal delayed-type hypersensitivity. Rabbits sensitized subcutaneously with heat-killed bacilli Calmette-Guerin (BCG) and challenged intradermally with heat-killed BCG or purified protein derivative (PPD) demonstrated classical dermal delayed-type hypersensitivity which peaked two days postchallenge. Animals challenged with BCG developed dermal granulomas as measured by induration and gross observation. Challenge with either PPD or BCG resulted in increased levels of dermal hyaluronic acid (HA) by two days postchallenge. Dermal HA returned to normal levels by seven days postchallenge regardless of the challenge antigen. These results indicated that increased HA is associated with dermal delayed-type sensitivity, but increased HA is not associated with dermal granulomatous hypersensitivity. These results are in contrast to previously reported work which indicates that increased HA is associated with both pulmonary delayed hypersensitivity and pulmonary granulomatous hypersensitivity.
abstract_id: PUBMED:34703805 Staphylococcus aureus derived hyaluronic acid and bacillus Calmette-Guérin purified proteins as immune enhancers to rabies vaccine and related immuno-histopathological alterations. Purpose: One of the essential goals regarding the successful control of rabies infection is the development of a safe, effective, and inexpensive vaccine. The current study aimed to evaluate the inactivation potential of β-propiolactone (βPL), binary ethyleneimine (BEI), and hydrogen peroxide (H2O2). Materials And Methods: Estimating the inactivation kinetics of βPL, BEI, and H2O2 revealed that the tested inactivants could completely and irreversibly inactivate rabies virus within 2, 12, and 4 hours, respectively, while maintaining its viral immunogenicity. The potency of βPL, BEI, and H2O2 inactivated vaccines was higher than the World Health Organization acceptance limit, in the order of 3.75, 4.21, and 3.64 IU/mL, respectively. Monitoring of the humoral and cellular immunity elicited post-immunization using Staphylococcus aureus derived hyaluronic acid (HA) and bacillus Calmette-Guérin purified protein derivative (PPD) adjuvanted rabies vaccine candidates was carried out using enzyme-linked immunosorbent assay. Results: Results demonstrated that both adjuvants could progressively enhance the release of anti-rabies total immunoglobulin G as well as the pro-inflammatory mediators (interferon-gamma and interleukin-5) relative to time. However, a higher immune response was developed in the case of the HA adjuvanted rabies vaccine compared to the PPD adjuvanted one. The harmful consequences of the tested adjuvants were considered via investigating the histopathological changes in the tissues of the immunized rats using hematoxylin and eosin stain. Lower adverse effects were observed post-vaccination with HA and PPD adjuvanted vaccines compared with those detected following administration of the currently used alum as standard adjuvant. Conclusion: Our findings suggested that HA and PPD could serve as a promising platform for the development of newly adjuvanted rabies vaccines with elevated immune enhancing potentials and lower risk of health hazards. abstract_id: PUBMED:2037037 Bacillus Calmette-Guérin (BCG) influences cell proliferation and glycosaminoglycans of chondrocyte cultures. There are only a few reports on the correlation between bacterial products and the GAG pattern of cartilage. Mycobacterium bovis (BCG) was applied to chondrocyte monolayer cultures for one week. The following parameters did change: cell proliferation increased, glycosaminoglycan synthesis and secretion decreased, and hyaluronic acid in secreted and cell-associated glycosaminoglycans increased; a correlation between the degree of these changes and the degree of cell differentiation seems to exist. The contact of bacteria like BCG with chondrocytes may change the cellular metabolism. On the tissue level this may injure articular cartilage and thus support the concept of predamaged cartilage that is readily susceptible to further degradation. abstract_id: PUBMED:28032168 The Efficacy and Safety of HA IDF Plus (with Lidocaine) Versus HA IDF (Without Lidocaine) in Nasolabial Folds Injection: A Randomized, Multicenter, Double-Blind, Split-Face Study. Background: Injection-related pain of dermal fillers is a consistent and bothersome problem for patients undergoing soft tissue augmentation. Reducing the pain could improve overall patient satisfaction.
Objective: The purpose of this study was to compare the pain relief, efficacy, and safety of HA IDF plus containing lidocaine with HA IDF without lidocaine during correction of nasolabial folds (NLFs). Methods: Sixty-two subjects were enrolled in a randomized, multicenter, double-blind, split-face study of HA IDF plus and HA IDF for NLF correction. For the split-face design, HA IDF plus was injected into one side of the NLF, and HA IDF was injected into the other side. The first evaluation variable was injection site pain measured using a 100-mm visual analogue scale (VAS). The second evaluation variables included the global aesthetic improvement scale, wrinkle severity rating scale, and adverse events. Results: Immediately after injection, 91.94% of subjects experienced at least a 10 mm decrease in VAS scores on the side injected with HA IDF plus compared with HA IDF, and this rate was statistically significant. The two fillers were not significantly different in safety profile or wrinkle correction during the follow-up visit. Conclusions: HA IDF plus significantly reduced the injection-related pain during NLF correction compared with HA IDF without altering clinical outcomes or safety. Both HA IDF plus and HA IDF were well tolerated and most adverse reactions were mild and transient. Level Of Evidence I: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266. abstract_id: PUBMED:26321524 Effect of Yupingfeng granules on HA and Foxp3(+) Treg expression in patients with nasopharyngeal carcinoma. Objective: To investigate the effect of Yupingfeng on hyaluronic acid (HA) and Foxp3(+) Treg in patients with nasopharyngeal carcinoma. Methods: A total of 58 cases of nasopharyngeal carcinoma were divided into two groups, 30 cases in the treatment group and 28 cases in the control group. Patients in both groups were treated with synchronous radiotherapy and chemotherapy; the treatment group was additionally treated with Yupingfeng granules through oral administration, 10 g/time, tid for 2 courses. The serum Foxp3(+) Treg markers of each group were detected by flow cytometry assay before and after treatment, and the level of HA in serum was detected by radioimmunoassay. Results: After radiotherapy and chemotherapy, the contents of Foxp3(+) Treg and HA were significantly decreased in both groups (P < 0.05), and the decrease in the treatment group was more significant (P < 0.01). Correlation analysis showed a positive correlation between Foxp3(+) Treg and HA (P < 0.05). After treatment, the incidence of side effects in both groups was significantly decreased, and there was a significant difference between the two groups (P < 0.05). Conclusions: Combined chemotherapy and radiotherapy with Yupingfeng treatment can decrease the levels of Foxp3(+) Treg and HA in nasopharyngeal carcinoma patients. Yupingfeng can also effectively reduce the side effects due to radiation and chemotherapy. Answer: Yes, hyaluronic acid (HA) could potentially reduce local side effects caused by Bacillus Calmette-Guérin (BCG) treatment. A pilot study indicated that the sequential administration of HA following BCG treatment resulted in significantly lower Visual Analog Scale (VAS) scores for bladder pain compared to the group that received BCG alone.
Additionally, post-treatment differences in VAS for pain, International Prostate Symptom Score (IPSS), and number of daily micturitions were all significantly lower in the group that received both BCG and HA (PUBMED:25123116). Another study found that intravesical instillations of combined HA and chondroitin sulfate significantly improved symptoms such as bladder pain, urinary urgency, voiding volume, and number of voids per 24 hours in patients with BCG-induced chemical cystitis that was unresponsive to first-line therapies. These improvements were maintained at six months and one-year follow-up (PUBMED:29633792). A systematic review also mentioned intravesical HA instillations as a potentially effective option to prevent symptoms of cystitis related to BCG intravesical instillations (PUBMED:30870228). These findings suggest that HA has a role in mitigating the local side effects of BCG treatment for bladder cancer, although further research is needed to confirm these results and to ensure that the use of HA does not interfere with the efficacy of BCG therapy.
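As a rough illustration of how the group comparisons behind these conclusions can be run: the abstracts do not state which statistical test was used, so the sketch below assumes a non-parametric two-sample test (Mann-Whitney U), a common choice for ordinal VAS data in small samples, and the scores are fabricated to roughly match the reported group means (~5.8 vs ~4.2):

```python
# Minimal sketch: comparing post-treatment VAS pain scores between a
# BCG-only group and a BCG + HA group. All data below are fabricated.
from scipy.stats import mannwhitneyu

vas_bcg_only = [6, 5, 7, 6, 5, 6, 7, 5, 6, 5, 6, 7, 5, 6]  # mean ~5.8
vas_bcg_ha   = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5]  # mean ~4.2

stat, p = mannwhitneyu(vas_bcg_only, vas_bcg_ha, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # a small p suggests lower pain with HA
```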
Instruction: Anti-D in Rh(D)-negative pregnant women: are at-risk pregnancies and deliveries receiving appropriate prophylaxis? Abstracts: abstract_id: PUBMED:22555135 Anti-D in Rh(D)-negative pregnant women: are at-risk pregnancies and deliveries receiving appropriate prophylaxis? Objective: Although anti-D prophylaxis has greatly reduced the rate of Rh-immunization, there remain women who sensitize during or after pregnancy because of inadequate prophylaxis. The purpose of this study was to compare adherence to prophylaxis recommendations for antenatal and postnatal anti-D immunoglobulin administration. Methods: We conducted a retrospective cohort study of all pregnancies recorded at the Royal Victoria Hospital between 2001 and 2006 to determine the rates of antenatal and postnatal prophylaxis in Rh(D)-negative women. We compared adherence to anti-D prophylaxis recommendations between our institution's physician-dependent antenatal approach and the protocol-based postpartum approach. Logistic regression analysis was used to estimate the odds ratio and 95% confidence intervals of determinants of non-adherence to current recommendations for anti-D prophylaxis. Results: Antenatal administration was analyzed in 1868 pregnancies in eligible Rh-negative women. Among these women, 85.7% received appropriate antenatal prophylaxis and 98.5% of eligible women received appropriate postnatal prophylaxis. Factors independently associated with non-adherence to antepartum prophylaxis included first visit in the third trimester (P < 0.001), transfer from an outside hospital (P = 0.03), and physician licensing before 1980 (P = 0.04). Conclusion: Unlike hospital-based protocol-dependent systems, physician-dependent systems for antenatal anti-D prophylaxis remain subject to errors of omission. A more standardized system is needed to ensure effective antenatal prophylaxis. abstract_id: PUBMED:10822755 Rh (D) alloimmunization and pregnancy. Analysis of the causes after prophylaxis introduction Purpose: Prenatal and postnatal prophylaxis of Rh (D) haemolytic disease of the newborn has clearly reduced the number of cases, but alloimmunizations still occur. Patients And Methods: All cases detected in our Hospital in the last 24 years have been reviewed and possible causes analyzed. Results: From a total of 10,332 deliveries in Rh (D) negative women we have detected 114 anti-D in 86 women. In 74 women anti-D was the only antibody and in 12 there were more antibodies. Data were managed in 3-year periods and we see a progressive decrease in the incidence of alloimmunization, with a minimum of 0.03 per 1000 pregnancies in the period 89-91 and a subsequent rise to an incidence of 0.12 in the last 3-year period 95-97. The causes were: pregnancies before 1970 in 31, incorrect prophylaxis in 12, despite a correct prophylaxis in 6, previous pregnancies without complete information about the prophylaxis in 13, previous transfusion in 6, previous pregnancies or transfusion in 8, and undetermined in 10. Conclusion: It is desirable to reduce to a minimum the number of Rh (D) alloimmunizations by strictly following the prophylaxis protocols. abstract_id: PUBMED:21270966 Rh isoimmunization in Sub-Saharan Africa indicates need for universal access to anti-RhD immunoglobulin and effective management of D-negative pregnancies. Transplacental or fetomaternal hemorrhage (FMH) may occur during pregnancy or at delivery and lead to immunization to the D antigen if the mother is Rh-negative and the baby is Rh-positive.
This can result in hemolytic disease of the fetus and newborn (HDFN) in subsequent D-positive pregnancies. The aim of this study is to highlight the challenges associated with the effective management and prevention of Rh alloimmunization among Rh-negative women in Sub-Saharan Africa. In most Sub-Saharan African countries, there is poor and sometimes no alloimmunization prevention following potentially sensitizing events and during medical termination of pregnancy in Rh-negative women. Information about previous pregnancies and terminations is often lacking in patients' medical notes due to poor data management. These issues have made the management of Rh-negative pregnancy a huge challenge. Despite the fact that the prevalence of the Rh-negative phenotype is significantly lower among Africans than Caucasians, Rh alloimmunization remains a major factor responsible for perinatal morbidity in Sub-Saharan Africa and may result in the compromise of the woman's obstetric care due to the unaffordability of anti-D immunoglobulin. There is an urgent need for the implementation of universal access to anti-D immunoglobulin for the Rh-negative pregnant population in Africa. Anti-D immunoglobulin should be available in cases of potentially sensitizing events such as amniocentesis, cordocentesis, antepartum hemorrhage, vaginal bleeding during pregnancy, external cephalic version, abdominal trauma, intrauterine death and stillbirth, in utero therapeutic interventions, miscarriage, and therapeutic termination of pregnancy. There is also the need for the availability of FMH measurements following potentially sensitizing events. The low-cost acid elution method, a modification of the Kleihauer-Betke (KB) test, can become a readily available, affordable, and minimal alternative to flow cytometric measurement of FMH. Knowledge of anti-D prophylaxis among obstetricians, biomedical scientists, midwives, traditional birth attendants, pharmacists, and nurses in Africa needs to be improved. This will facilitate quality antenatal and postnatal care offered to the Rh-negative pregnant population and improve perinatal outcomes. abstract_id: PUBMED:119891 Prevention of Rh sensitization using anti-D immunoglobulin. (An 8 year study) From July 1969 to July 1977 the authors administered 629 doses of anti-D immunoglobulin to 597 women, 26 of whom received the preparation twice and 3 three times. The preparation of anti-D immunoglobulin is most frequently given after delivery to Rh-negative, ABO-compatible primiparae who have given birth to an Rh-positive child and are not immunized. The effectiveness of the prophylaxis was tested by sensitization tests 3-6 months after delivery in 159 women, and in 1977, 137 more women were summoned to a follow-up, so that in all 296 women were hematologically followed up. The prophylaxis proved a failure in 0.6% of women. Biological control was carried out in a group of 118 women who had given birth to Rh-positive children following the use of anti-D immunoglobulin. It showed failure of the prophylaxis applied in 1.6% of cases. abstract_id: PUBMED:29479934 Is the management of Rh-Rh incompatibility with noninvasive fetal Rh genotyping for targeted prophylaxis cost-effective in the Turkish population? Background/aim: The aim of this study was to assess unnecessary immunization rates and compare the cost-effectiveness of targeted prophylaxis with fetal Rh genotyping with that of traditional management of Rh-Rh incompatibility in a virtual economic model.
Materials and methods: This retrospective data analysis was conducted at two tertiary centers between 2011 and 2015. The data of 1135 pregnant women were analyzed. The main outcome measure was to determine the unnecessary immunization rate among the whole Rh-Rh incompatibility group. The second outcome measure was to compare the cost-effectiveness of universal immunization with that of targeted prophylaxis with fetal Rh genotyping in a virtual economic model. Results: The average cost per patient was found to be $259.20 with universal prophylaxis and the total cost was $177,344, whereas if targeted prophylaxis had been applied to these patients the total cost would have been $263,392 and the cost per patient would have been $385. Universal prophylaxis was more cost-effective than targeted prophylaxis in terms of both total cost and cost per patient (P < 0.0001). Conclusion: Unless the cost of noninvasive fetal Rh genotyping is reduced, a universal approach of anti-D immune globulin prophylaxis is more cost-effective than noninvasive determination of fetal Rh genotyping with targeted prophylaxis. abstract_id: PUBMED:418599 External quality control for anti-D demonstration within the scope of Rh immunoprevention In the GDR the success and the dosage of IgG-anti-D in immune prophylaxis after birth of Rh-positive infants from Rh-negative mothers are controlled by the estimation of anti-D antibodies in maternal serum. For this purpose the papain method was introduced in centers for immune prophylaxis in 1970/71, at the time of general introduction of this immune prophylaxis in the GDR. The results of five external quality control experiments in the 16 to 18 such centers in the district of Magdeburg between 1972 and 1977 are recorded. The false positive and false negative reports are discussed, and the need for such regular external quality controls is pointed out. abstract_id: PUBMED:3003466 Advantages of preventing Rh isoimmunization The authors analysed the frequency of Rh immunization from 1972 to 1983. The incidence of Rh immunization in women who after the birth of an Rh (D) positive child were not given anti-D immunoglobulin G and in subsequent pregnancies gave birth to an Rh (D) positive child was found to amount to 11.76%, while in women who were given anti-D immunoglobulin G this incidence was 0.77% (t = 5.98; p less than 0.05). Out of 29 Rh-immunized pregnant women, two developed Rh immunization in the course of the first pregnancy, three after the unsuccessful prevention of Rh immunization, and the rest after delivery or after delivery and abortion. Out of 29 Rh-immunized women, 27 (93.10%) were ABO-compatible and 2 (6.90%) ABO-incompatible with their child (p less than 0.05). In the first pregnancy the incidence of Rh immunization was 1.86 per 1000 deliveries in Rh-negative pregnant women and 21.19 per 1000 deliveries in subsequent pregnancies (p less than 0.05). In the period observed there were 2.24 Rh immunizations per 1000 of all deliveries. From 1972 to 1977 there were 3.19 Rh immunizations per 1000 deliveries and from 1978 to 1983 only 1.43 (t = 2.08; p less than 0.05), a reduction of 55.17%. The perinatal mortality rate of children affected by Rh-hemolytic disease was 20%. In the last six years it has gone down by 60%, while the number of children with Rh-hemolytic disease has been reduced by 50%. abstract_id: PUBMED:35224728 Following targeted routine antenatal anti-D prophylaxis, almost half of the pregnant women had undetectable anti-D prophylaxis at delivery.
Introduction: In September 2016, a nationwide targeted routine antenatal anti-D prophylaxis program was implemented in Norway. The prophylaxis (anti-D immunoglobulin) aims to cover the whole third trimester and is administered in gestational week 28 to RhD-negative women who carry RhD-positive fetuses. However, in many women, antibody screening at delivery does not detect anti-D immunoglobulin. The goal of this study was to investigate the presumable role of dose and timing of antenatal anti-D immunoglobulin administration in non-detectable prophylaxis at the time of delivery. Material And Methods: In this retrospective observational study, RhD-negative pregnant women who gave birth at Oslo University Hospital and Akershus University Hospital between January 2017 and December 2019 were analyzed. Women who received antenatal anti-D immunoglobulin (1500 IU at Oslo University Hospital and 1250 IU at Akershus University Hospital) when fetal RHD genotyping at gestational week 24 predicted an RhD-positive fetus were included if an antibody screen at delivery was available. Data from the blood bank, maternity information systems, and electronic patient records were used. Results: Analysis of the 984 RhD-negative women at the two hospitals revealed that 45.4% had non-detectable anti-D at delivery. A significant difference between the two hospitals was observed: 40.5% at Oslo University Hospital (n = 509) and 50.7% at Akershus University Hospital (n = 475) (p = 0.001). The proportion with non-detectable anti-D increased to 56.0% and 75.3%, respectively (p = 0.008), in the group of women who gave birth 12 weeks or more after routine antenatal anti-D prophylaxis. Significantly fewer women had detectable anti-D at delivery when the lower anti-D immunoglobulin dose (1250 IU) was administered antenatally. Multiple logistic regression indicated that the time interval between routine antenatal anti-D prophylaxis and delivery, in addition to anti-D dose, was significantly associated with detectable anti-D at delivery (p < 0.001). Conclusions: We do not know which RhD-negative pregnant women, despite antenatal anti-D prophylaxis, are at risk of RhD alloimmunization when antibody screening is negative at delivery. Administration of antenatal prophylaxis should probably be moved closer to delivery, since the risk of fetomaternal hemorrhage is higher during the last weeks of the third trimester. abstract_id: PUBMED:6095582 Antenatal prophylaxis of Rh immunization with 250 micrograms anti-D immunoglobulin. Anti-D immunoglobulin (250 micrograms) was given i.m. to 830 Rh-negative primigravidae and multigravidae around the 32nd to 34th week of gestation. The multigravidae had previously been treated with anti-D immunoglobulin post partum or after abortion and had been followed up serologically after 8 months. Five hundred and twenty-nine of the women delivered Rh-positive infants and received another injection of anti-D immunoglobulin (250 micrograms) within 72 hours of delivery. At the serological follow-up 8 months after delivery 2 women (0.4%) had weak anti-D antibodies by the papain technique. No anti-D could be detected in these 2 women 14 and 20 months, respectively, after delivery. In a previously performed postnatal clinical study with 250 micrograms anti-D immunoglobulin the failure rate was 1.6% (10 out of 645 women). Thus, antenatal prophylaxis significantly (p less than 0.05) reduced the incidence of Rh immunization.
The haemoglobin and bilirubin levels in cord blood and capillary blood did not differ in the Rh-positive and Rh-negative infants. Three primiparae (0.2%) had anti-D antibodies at the time of the antenatal injection before delivery. Thus, the antenatal regimen did not give full protection from Rh immunization. It is suggested that an antenatal injection at the 28th week of gestation would have been more effective, as Rh sensitization during pregnancy has been reported to be more frequent from the 29th week of pregnancy. Since the introduction of postnatal anti-D prophylaxis, Rh immunization occurring during pregnancy accounts for most of the observed failures (2, 4, 5, 14, 20). The efficacy and safety of antenatal injection of anti-D immunoglobulin has therefore been investigated (5, 6, 7). At the Växjö Hospital in Sweden, where the incidence of anti-D antibodies was 1.6% (15), 250 micrograms of anti-D immunoglobulin were given at about 32-34 weeks of gestation from March, 1973, to December, 1977. The results are presented here. abstract_id: PUBMED:31121145 The continuing burden of Rh disease 50 years after the introduction of anti-Rh(D) immunoglobin prophylaxis: call to action. Severe morbidity and death because of Rh disease have only been reduced by approximately 50% globally during the last 50 years, despite the advent of anti-Rh(D) immunoglobin prophylaxis; this shortfall has resulted in >160,000 perinatal deaths and 100,000 disabilities annually. This apparent failure to take appropriate preventive measures is of great concern. Thus, there is a great need to do much better. We wish to draw attention to the unnecessary continuing burden of Rh disease, to discuss some of the reasons for this failure, and to provide suggestions for a better way forward. Answer: The studies on anti-D prophylaxis in Rh(D)-negative pregnant women indicate that while there has been a significant reduction in Rh immunization due to the administration of anti-D immunoglobulin, there are still instances of inadequate prophylaxis leading to sensitization during or after pregnancy. A retrospective cohort study at the Royal Victoria Hospital between 2001 and 2006 found that 85.7% of eligible Rh-negative women received appropriate antenatal prophylaxis, and 98.5% received appropriate postnatal prophylaxis. Factors associated with non-adherence to antepartum prophylaxis included late first visits, transfers from outside hospitals, and physician licensing before 1980 (PUBMED:22555135). Another study reviewing cases over 24 years at a hospital found a progressive decrease in the incidence of alloimmunization, with various causes including pregnancies before 1970, incorrect prophylaxis, and previous pregnancies or transfusions without complete information about prophylaxis (PUBMED:10822755). In Sub-Saharan Africa, the management of Rh-negative pregnancy is challenged by poor access to anti-D immunoglobulin and ineffective management of potentially sensitizing events, making Rh alloimmunization a significant contributor to perinatal morbidity (PUBMED:21270966). A study in Turkey compared the cost-effectiveness of targeted prophylaxis with fetal Rh genotyping to traditional management and found that universal prophylaxis was more cost-effective unless the cost of noninvasive fetal Rh genotyping is reduced (PUBMED:29479934).
In Norway, after the implementation of a targeted routine antenatal anti-D prophylaxis program, nearly half of the pregnant women had undetectable anti-D prophylaxis at delivery, suggesting that the timing and dosage of antenatal anti-D immunoglobulin administration may need adjustment (PUBMED:35224728). Overall, these studies suggest that while anti-D prophylaxis has been effective in reducing Rh immunization rates, there are still gaps in adherence to prophylaxis protocols, and improvements are needed to ensure that all at-risk pregnancies and deliveries receive appropriate prophylaxis.
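The cost comparison in the Turkish study (PUBMED:29479934) boils down to simple arithmetic over the cohort. The sketch below reconstructs that kind of virtual economic model; the cohort size, the fraction of RhD-positive fetuses, and all unit prices are hypothetical placeholders, not figures from the study:

```python
# Minimal sketch of a universal-vs-targeted prophylaxis cost model.
# All inputs below are assumptions chosen for illustration only.

def universal_cost(n_rhd_neg: int, anti_d_price: float) -> float:
    # Every RhD-negative woman receives anti-D immunoglobulin.
    return n_rhd_neg * anti_d_price

def targeted_cost(n_rhd_neg: int, frac_rhd_pos_fetus: float,
                  genotyping_price: float, anti_d_price: float) -> float:
    # Every woman is genotyped; only carriers of an RhD-positive fetus
    # receive anti-D immunoglobulin.
    treated = n_rhd_neg * frac_rhd_pos_fetus
    return n_rhd_neg * genotyping_price + treated * anti_d_price

n, anti_d, genotyping, frac_pos = 684, 259.20, 200.0, 0.6  # assumed inputs
print(universal_cost(n, anti_d))                        # ~ $177,293
print(targeted_cost(n, frac_pos, genotyping, anti_d))   # ~ $243,181

# Targeted prophylaxis becomes cheaper only when the genotyping price
# drops below the anti-D spend it avoids:
#   genotyping_price < (1 - frac_rhd_pos_fetus) * anti_d_price
```

This inequality is why the study's conclusion hinges on the price of noninvasive fetal RHD genotyping rather than on the prophylaxis itself.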
Instruction: Can functional status, after rehabilitation, independently predict long-term mortality of hip-fractured elderly patients? Abstracts: abstract_id: PUBMED:15132291 Can functional status, after rehabilitation, independently predict long-term mortality of hip-fractured elderly patients? Background And Aims: Hip fractures are one of the most serious causes of functional impairment and death in the elderly. The aim of this study was to evaluate prospectively the predictive value of functional performance, after rehabilitation, of hip fracture on long-term mortality in community-dwelling patients. Methods: One hundred and seventy-one patients aged 60 years and over, admitted to a geriatric rehabilitation day unit after inpatient rehabilitation, were followed for up to 4 years. Main outcome measures were the Functional Independence Measure (FIM), Timed Get Up and Go test (GUAG), cognitive status using the Mini-Mental State Examination on admission, and mortality during the follow-up period. Kaplan-Meier analysis was carried out on survival curves. Results: All 24 deceased patients performed the GUAG test in > 20 seconds. Although approaching significance, the survival curves were not statistically different between patients performing the test in ≤ 20 seconds and those performing it in > 20 seconds (p = 0.08). Survival curves were significantly higher in patients with a FIM score of ≥ 90 (p = 0.004), no cardio-cerebrovascular (CCV) diseases (p = 0.001) and no diabetes mellitus (p = 0.01). There were no differences in survival according to age, gender, educational level, marital status, surgical vs conservative treatment, and cognition. A multivariate analysis including FIM score, CCV diseases, and diabetes mellitus demonstrated that only CCV disease was an independent variable for survival (p = 0.02). Conclusions: Performance, as evidenced by FIM scores after rehabilitation for hip fracture, may provide additional useful information on long-term survival. However, since functional status after rehabilitation is not an independent risk factor for long-term mortality, its predictive value must be interpreted in view of the comorbidities, mainly CCV diseases, which are more important to the risk of mortality than the event of hip fracture itself. abstract_id: PUBMED:30976999 Long-term functional outcome after a low-energy hip fracture in elderly patients. Background: The incidence of hip fractures is increasing. Elderly patients with a hip fracture frequently present with comorbidities, which are associated with higher mortality rates. Clinical studies regarding long-term functional outcome and mortality in hip fractures are rare. The aim of this study was to analyse the functional outcome and the mortality rate after a follow-up of 5 years in elderly patients with a hip fracture. Materials And Methods: This combined retrospective and cross-sectional study included patients aged 65 years or older with a low-energy hip fracture who underwent surgery in the Maastricht University Medical Center+, the Netherlands. Data such as demographics and mortality rates were retrospectively collected and functional outcome (i.e. mobility, pain, housing conditions and quality of life) was assessed by a questionnaire. Results: Two hundred and sixteen patients were included in this study (mean age 82.2, SD ± 7.5). No significant differences were found in pain before hip fracture and after 1-year and 5-year follow-ups.
Long-term functional outcome deteriorated after a hip fracture, with a significant increase in the use of walking aids (p < 0.001), a significant decrease of patients living in a private home (p < 0.001), and a low physical quality of life (SF-12 PCS = 27.1). The mortality incidences after 30-day, 1-year and 5-year follow-ups were 7.9%, 37.0% and 69.4%, respectively. Conclusion: Long-term functional outcome in elderly patients with hip fractures significantly deteriorated, with an increased dependency for mobility and housing conditions and a decreased physical quality of life. In addition, hip fractures are associated with high mortality rates at the 5-year follow-up. Level Of Evidence: Level III, a retrospective cohort study. abstract_id: PUBMED:30217470 Poor nutritional status but not cognitive or functional impairment per se independently predict 1 year mortality in elderly patients with hip-fracture. Background & Aims: Hip fractures are strongly associated with mortality in the elderly. Studies investigating predisposing factors have suggested a negative impact of poor nutritional, cognitive and functional status on patient survival, however their independent prognostic impact as well as their interactions remain undefined. This study aimed to determine whether poor nutritional status independently predicts 1 year post-fracture mortality after adjusting for cognitive and functional status and for other clinically relevant covariates. Methods: 1211 surgically treated hip fracture elderly (age ≥ 65) patients consecutively admitted to the Orthopaedic Surgery Unit of the "Azienda Sanitaria Universitaria Integrata Trieste" (ASUITs), Cattinara Hospital, Trieste, Italy and managed by a dedicated orthogeriatric team. Pre-admission nutritional status was evaluated by Mini Nutritional Assessment (MNA) questionnaire, cognitive status by Short Portable Mental Status Questionnaire (SPMSQ) and functional status by Activity of Daily Living (ADL) questionnaire. All other clinical data, including comorbidities, type of surgery, post-operative complications (delirium, deep vein thrombosis, cardiovascular complications, infections, need for blood transfusions) were obtained by hospital clinical records and by mortality registry. Results: Poor nutritional status (defined as MNA ≤ 23.5), increased cognitive and functional impairment were all associated with 3-, 6- and 12-month mortality (p < 0.001). Both cognitive and functional impairment were associated with poor nutritional status (p < 0.001). Logistic regression analysis demonstrated that the association between nutritional status and 3-, 6- and 12-month mortality was independent of age, gender, comorbidities, type of surgery and post-operative complications as well as of cognitive and functional impairment (p < 0.001). In contrast, the associations between mortality and cognitive and functional impairment were independent (p < 0.001) of demographic (age, gender) and clinical covariates but not of malnutrition. Kaplan-Meier analysis showed a lower mean survival time (p < 0.001) in patients with poor nutritional status compared with those well-nourished. Conclusions: In hip fracture elderly patients, poor nutritional status strongly predicts 1 year mortality, independently of demographic, functional, cognitive and clinical risk factors. The negative prognostic impact of functional and cognitive impairment on mortality is mediated by their association with poor nutritional status.
abstract_id: PUBMED:29760963 Effectiveness of a Day Rehabilitation Program in Improving Functional Outcome and Reducing Mortality and Readmission of Elderly Patients With Fragility Hip Fractures. Introduction: The incidence of hip fracture is projected to increase in the next 25 years as the world population ages. Hip fracture is often associated with subsequent readmission and mortality. Nevertheless, elderly patients often may not achieve the same level of functional ability as prior to their injury. Several studies have shown that close collaboration between orthopedic surgeons and geriatricians can improve such outcomes, and the Geriatric Day Hospital (GDH) is one example of such collaboration. The aim of this descriptive retrospective study is to review the effectiveness of the day rehabilitation program provided by a GDH on functional outcomes, mortality, and readmission rate among a sample of elderly patients with hip fracture. Methods: The medical records of patients from January 1, 2009, to December 31, 2012, were collected and evaluated. Demographic data of the patients and the Charlson Comorbidity Index were collected. The Barthel Index, Elderly Mobility Scale, and Mini-Mental State Examination were measured on admission and at discharge to evaluate both physical and cognitive functions. Results: The results showed that the majority of patients benefited from rehabilitation in the GDH. The 12-month mortality rate of patients taking full-course rehabilitation in the GDH was improved. Patient age was the most important factor influencing the rehabilitation outcomes. Gender was the only risk factor for 12-month mortality and 6-month readmission. Discussion: Since patients were selected to attend the GDH, there was a bias during the selection of patients. Furthermore, it was difficult to compare patients who attended the GDH with patients who did not, because outcomes of the latter were difficult to record. Conclusions: Our study shows that postoperative geriatric hip fracture patients definitely can benefit from the rehabilitation service offered by a GDH in terms of functional and cognitive outcomes. abstract_id: PUBMED:32339961 Initial functional recovery as a measure for rehabilitation outcome in post-acute hip fractured patients. Background: The assessment of immediate functional change after hip fracture surgery may assist in predicting rehabilitation outcome and improve program efficiency. Objective: To evaluate the potential of the initial functional recovery measure in predicting rehabilitation outcome of hip fractured patients. Design: Retrospective cohort study. Setting: Post-acute geriatric rehabilitation center. Participants: A total of 630 hip fractured patients admitted from 1/2016-6/2019. Methods: The initial functional recovery, a measure of functional change from the time of surgery to rehabilitation, was calculated for each patient. The association between initial functional recovery and clinical and demographic variables was assessed by the Mann-Whitney and chi-square tests. A multiple linear regression model evaluated the association between initial functional recovery and the discharge functional independence measure score. Regression analysis estimated the association between initial functional recovery and the achievement of a favorable outcome measure. Results: Good correlations were found between initial functional recovery and rehabilitation outcomes but not length of stay (p < 0.001).
Multiple linear regression analysis showed that after adjustment for potential confounding variables, the higher the initial functional recovery, the better the discharge functional independence measure score (r2 = 0.783; p < 0.001). Logistic regression revealed that a high initial functional recovery was significantly associated with a higher probability of achieving a favorable functional gain (OR = 1.093; 95% CI = 1.04-1.15, P < 0.001). Conclusion: Initial functional recovery is a good predictor of rehabilitation outcomes in hip fractured patients, and may be utilized in decision-making regarding the rehabilitation program in addition to coordinating expectations with patients and their caregivers. abstract_id: PUBMED:28823174 Hip Fracture in the Elderly Patients: A Sentinel Event. Importance: Hip fracture in elderly patients is associated with increased morbidity and mortality. There is great need for advance care planning should a patient fail to rehabilitate or experience an adverse event during or after recovery. This study was performed to evaluate for palliative care consultation and changes in code status and/or advance directives in elderly patients with hip fracture. Methods: We performed a retrospective review of 186 consecutive patients aged 65 years and older with a hip fracture due to a low-energy fall who underwent surgery at a large academic institution between August 1, 2013, and September 1, 2014. Risk factors assessed were patient demographics, home status, mobility, code status, comorbidities, medications, and hospitalizations prior to injury. Outcomes of interest included palliative care consultation, complications, mortality, and most recent code status, mobility, and home. Results: A total of 186 patients with hip fractures were included. Three patients died, and 12 (6.5%) sustained major complications during admission. Nearly one-third (51 patients) had died by final follow-up, approximately 1.5 years after surgery. Of the patients who died, palliative care had consulted on 6 (11.8%) during the initial admission. Eleven (21.6%) were full code at death. Three patients underwent cardiopulmonary resuscitation (CPR) and 1 underwent massive transfusion and extracorporeal membrane oxygenation prior to changing their code status to do-not-attempt-resuscitation. Conclusion: Hip fracture in elderly patients is an important opportunity to reassess the patient's personal health-care priorities. Advance directives, goals of care, and code status documentation should be updated in all elderly patients with hip fracture, should the patient's health decompensate. abstract_id: PUBMED:32778164 Multidisciplinary inpatient rehabilitation improves the long-term functional status of geriatric hip-fracture patients. Background: As the world population ages, the number of hip-related fractures in the elderly is steadily increasing. These fractures generate a major worldwide healthcare problem and frequently lead to deterioration of quality of life, mobility and independence in activities of daily life of geriatric patients. At present, many studies have investigated and proved benefits of multidisciplinary orthogeriatric care for elderly hip-fracture patients. However, only a few studies have analyzed treatment concepts for those patients directly following discharge from hospital in specialized rehabilitation centers.
The aim of this study was to evaluate the effects of multidisciplinary inpatient rehabilitation on the short- and long-term functional status of geriatric patients who suffered a hip fracture. Methods: A total of 161 hip-fracture patients aged 80 years and above, or 70 years and above with age-typical multimorbidity, were included in this study. Patients who had an initial Barthel Index lower than 30 points were excluded from this study, as most of these patients were not able to attend therapy at the rehabilitation center due to a poor functional status. The patients were separated into two subgroups dependent on the availability of treatment spots at the rehabilitation center. No other item was used to discriminate between the groups. Group A (n = 95) stayed an average of 21 days at an inpatient rehabilitation center that specialized in geriatric patients. Group B (n = 66) underwent the standard postoperative treatment and were sent home with further treatment by their general practitioner, nursing staff and physiotherapists. To evaluate the patients' functional status over the course of time we used the Barthel Index, which was evaluated for every patient on the day of discharge, as well as during checkups after 3, 6 and 12 months. Results: The average Barthel Index on the day of discharge was 57.79 ± 14.92 points for Group A and 56.82 ± 18.76 points for Group B (p = 0.431). After 3 months, the average Barthel Index was 82.43 points for Group A and 73.11 points for Group B (p = 0.005). At the 6-month checkup Group A's average Barthel Index was 83.95 points and Group B's was 74.02 points (p = 0.002). After 12 months, patients from Group A had an average Barthel Index of 81.21 while patients from Group B had an average Barthel Index of 69.85 (p = 0.005). Conclusion: The results of this study reveal a significantly better outcome concerning both short-term and long-term functional status after 3, 6 and 12 months for geriatric hip-fracture patients who underwent inpatient treatment in a rehabilitation center following the initial therapy. abstract_id: PUBMED:35247062 Short- and long-term prognostic factors associated with functional recovery in elderly patients with hip fracture: A systematic review. This systematic review aimed to identify short- and long-term factors associated with functional recovery of elderly hip fracture patients after discharge. We identified 43 studies reporting 74 factors associated with functional recovery; most of them were biological, sociodemographic, or inherent factors of patients' baseline characteristics, including their pre-fracture functional capacity. Purpose: This systematic review aimed to identify short- and long-term factors associated with functional recovery of elderly hip fracture patients after hospital discharge. We assessed the use of the hip fracture core-set and key-performance indicators for secondary fracture reduction. Methods: A search was performed in seven electronic databases. Observational studies reporting predictors after usual care of elderly patients with hip fracture diagnoses receiving surgical or conservative treatment were included. Primary outcomes considered were part of the domains corresponding to functional capacity. Results: Of 3873 references identified, and after the screening and selection process, 43 studies were included. Sixty-one functional measures were identified for ten functional outcomes, including BADLs, IADLs, ambulation, and mobility.
Biological characteristics such as age, sex, comorbidities, cognitive status, nutritional state, and biochemical parameters are significantly associated. Determinants such as contact with and size of the social network and those related to institutional care quality are relevant for functional recovery at six and 12 months. Age, pre-fracture function, cognitive status, and complications continue to be associated five years after discharge. We found 74 factors associated with functional recovery of elderly hip fracture patients. Ten of the studies reported rehabilitation programs as suggested in KPI 9; none used the complete hip fracture core-set. Conclusion: Most of the factors associated with functional recovery of elderly hip fracture patients were biological, sociodemographic, or inherent factors of patients' baseline characteristics, including their pre-fracture functional capacity. For the core-set and KPIs, we found insufficient use and reporting. This study reports 61 different instruments to measure functional capacity. Registration Number: PROSPERO (CRD42020149563). abstract_id: PUBMED:35815486 The Predictive Values of the Functional Status, Comorbidities, and the Types of Treatment on the Treatment Outcomes in Elderly Patients Following the Hip Fracture. PURPOSE OF THE STUDY Hip fracture is the leading cause of disability and death in the elderly. If left untreated, a hip fracture results in inability to walk, and therefore the patient is dependent on other people's help. The aim of this study was to determine the predictive values of the functional status, present comorbidities, and the types of treatment on the long-term functional status and mortality in patients with hip fracture. MATERIAL AND METHODS We analyzed the data of patients with hip fractures treated in the period between 01.07.2006 and 01.07.2016, with a mean follow-up period of 34.3±4.9 months. The data included radiographs, age, comorbidities, functional status on admission, functional status on discharge, and follow-ups. RESULTS Female sex, increasing age, poorer functional status upon admission, conservative treatment, trochanteric fracture, the presence of neurological or lung disease, and diabetes were all significant risk factors leading to poorer functional outcomes on the latest follow-up (p < 0.05). On the other hand, the patients who were mobile upon admission had a longer survival (38.1±3.51 months) as compared with the other patients (12.3±5.1 months; χ2=25.202, p = 0.001). In addition, the results of this study revealed that a direct, statistically significant relationship exists between untreated internal and neurological diseases and increased mortality and poorer functional outcomes on the latest follow-up. CONCLUSIONS These results are useful for the orthopedic and trauma surgeons who are treating these patients, the physical medicine and rehabilitation specialists who are performing the rehabilitation of these patients, but also the health policy makers, who may promote programmes of healthy aging (treating diseases, maintaining fitness, etc.). Key words: hip fracture; comorbidity; mortality; functional status. abstract_id: PUBMED:24613271 Comprehensive care improves physical recovery of hip-fractured elderly Taiwanese patients with poor nutritional status. Objectives: The effects of nutritional management among other intervention components have not been examined for hip-fractured elderly persons with poor nutritional status.
Accordingly, this study explored the intervention effects of an in-home program using a comprehensive care model that included a nutrition-management component on the recovery of hip-fractured older persons with poor nutritional status at hospital discharge. Design: A secondary analysis of data from a randomized controlled trial with 24-month follow-up. Setting: A 3000-bed medical center in northern Taiwan. Participants: Subjects were included only if they had "poor nutritional status" at hospital discharge, including those at risk of malnutrition and those who were malnourished. The subsample included 80 subjects with poor nutritional status in the comprehensive care group, 87 in the interdisciplinary care group, and 85 in the usual care group. Interventions: The 3 care models were usual care, interdisciplinary care, and comprehensive care. Usual care provided no in-home care, interdisciplinary care provided 4 months of in-home rehabilitation, and comprehensive care included management of depressive symptoms, falls, and nutrition as well as 1 year of in-home rehabilitation. Measurements: Data were collected on nutritional status and physical functions, including range of motion, muscle power, proprioception, balance and functional independence, and analyzed using a generalized estimating equation approach. We also compared patients' baseline characteristics: demographic characteristics, type of surgery, comorbidities, length of hospital stay, cognitive function, and depression. Results: Patients with poor nutritional status who received comprehensive care were 1.67 times (95% confidence interval 1.06-2.61) more likely to recover their nutritional status than those who received interdisciplinary or usual care. Furthermore, the comprehensive care model improved the functional independence and balance of patients who recovered their nutritional status over the first year following discharge, but not of those who had not yet recovered. Conclusions: An in-home program using the comprehensive care model with a nutritional component effectively improved the nutritional status of hip-fractured patients with poor nutrition. This comprehensive care intervention more effectively improved recovery of functional independence and balance for patients with recovered nutritional status. Answer: The functional status after rehabilitation may provide useful information on long-term survival for elderly patients who have suffered a hip fracture, but it is not an independent risk factor for long-term mortality. According to the study by PUBMED:15132291, functional performance after rehabilitation, as measured by Functional Independence Measure (FIM) scores, was associated with long-term survival. However, the study concluded that functional status after rehabilitation is not an independent risk factor for long-term mortality when considering comorbidities, particularly cardio-cerebrovascular diseases, which are more significant predictors of mortality risk than the hip fracture event itself. Other studies have shown that while functional outcomes such as mobility and living conditions can deteriorate significantly after a hip fracture (PUBMED:30976999), and poor nutritional status can independently predict 1-year post-fracture mortality (PUBMED:30217470), functional status alone is not a standalone predictor of mortality. Instead, it is influenced by a range of factors including nutritional status, cognitive impairment, and other clinical covariates.
Moreover, multidisciplinary inpatient rehabilitation has been shown to improve the long-term functional status of geriatric hip-fracture patients (PUBMED:32778164), and initial functional recovery can be a good predictor of rehabilitation outcomes (PUBMED:32339961). However, these studies do not directly address the independent predictive value of functional status for long-term mortality. In summary, while functional status after rehabilitation is associated with long-term survival and is an important measure of rehabilitation outcomes, it does not independently predict long-term mortality in elderly patients with hip fractures. Other factors, particularly comorbidities such as cardio-cerebrovascular diseases and nutritional status, play a more significant role in determining mortality risk (PUBMED:15132291; PUBMED:30217470).
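The between-group comparisons quoted above can be checked from the published summary statistics alone. Below is a minimal sketch in Python, assuming a Welch two-sample t-test on the discharge Barthel Index figures cited for the rehabilitation study (Group A: 57.79 ± 14.92, n = 95; Group B: 56.82 ± 18.76, n = 66); the original paper does not state which test it used, so the resulting p-value need not match the published 0.431.

from scipy.stats import ttest_ind_from_stats

# Discharge Barthel Index summary statistics reported in PUBMED:32778164.
t_stat, p_value = ttest_ind_from_stats(
    mean1=57.79, std1=14.92, nobs1=95,  # Group A: inpatient rehabilitation
    mean2=56.82, std2=18.76, nobs2=66,  # Group B: standard home-based care
    equal_var=False,                    # Welch's test; no equal-variance assumption
)
print(f"Welch t = {t_stat:.3f}, two-sided p = {p_value:.3f}")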
Instruction: Is drip infusion cholecystocholangiography (DIC) an acceptable modality at cholecystectomy for cholecystolithiasis, considering the frequency of bile duct maljunction and intraoperative bile duct injury? Abstracts: abstract_id: PUBMED:15127278 Is drip infusion cholecystocholangiography (DIC) an acceptable modality at cholecystectomy for cholecystolithiasis, considering the frequency of bile duct maljunction and intraoperative bile duct injury? Background/purpose: Knowledge of the configuration of the extrahepatic bile duct is indispensable to avoid bile duct injury during cholecystectomy. Various methods of examining the biliary tract have been developed; however, the most appropriate preoperative diagnostic modality at cholecystectomy for cholecystolithiasis has not yet been reported. Considering the frequency of bile duct maljunction (BDM) and operative bile duct injury, in addition to the cost and invasiveness of the various examination methods, we evaluated the usefulness of drip infusion cholecystocholangiography (DIC) as the optimal method of examination at cholecystectomy for cholecystolithiasis. Methods: Preoperative diagnostic accuracy of BDM was analyzed in relation to operative bile duct injury in 469 patients with benign biliary diseases who were diagnosed with DIC and endoscopic retrograde cholangiography and underwent cholecystectomy. BDM was classified according to Hisatsugu criteria. Results: Out of 469 consecutive patients who underwent a cholecystectomy for biliary stones between January 1, 1995, and September 30, 1998, at Ohta Nishinouchi General Hospital, 21 (4.48%) had a cystic duct maljunction (CDM) and 12 (2.56%) had an aberrant bile duct (ABD). The most common variants were types C and D for CDM, and types II and III for ABD, according to Hisatsugu's classification. Fourteen of these 33 patients (42.4%) were diagnosed before surgery; 13 of them received preoperative endoscopic retrograde cholangiography (ERC), and the remaining patient underwent preoperative drip infusion cholecystocholangiography (DIC). Nineteen patients could not be correctly diagnosed based on their preoperative examinations, but were diagnosed during surgery. Operative bile duct injury occurred in 1 patient (0.2%) whose maljunction could not be diagnosed before the operation. Conclusions: Taking into account the medical cost and invasiveness, and the frequency of BDM and related bile duct injuries, we conclude that DIC is an acceptable preoperative diagnostic modality to employ at cholecystectomy for cholecystolithiasis. abstract_id: PUBMED:21033214 Intraoperative ultrasonography for reducing bile duct injury during laparoscopic cholecystectomy. Background/aims: Laparoscopic cholecystectomy has become a standard procedure for gallstone disease. Bile duct injury during laparoscopic cholecystectomy is one of the most serious complications.
The cause of such injury is usually the erroneous interpretation of bile duct anatomy, especially mistaking the common bile duct for the cystic duct. A method for the prevention of this injury has not been established. The objective of this study was to test the usefulness of intraoperative ultrasonography for preventing bile duct injury. Methodology: We performed intraoperative ultrasonography to visualize the junction of the cystic duct and common bile duct before cannulation for intraoperative cholangiography. A total of 200 patients were enrolled in this study from May 2004 to November 2005. Results: In 188 out of 200 cases (94%), the junction was identified by intraoperative ultrasonography before the cholangiography. Conclusion: Identification of the junction of the cystic duct and common bile duct before intraoperative cholangiography may reduce bile duct injury in laparoscopic cholecystectomy. abstract_id: PUBMED:30130667 Laparoscopic cholecystectomy with aberrant bile duct detected by intraoperative fluorescent cholangiography concomitant with angiography: A case report. Introduction: Laparoscopic cholecystectomy is the standard surgical treatment for patients with benign gallbladder disease. However, bile duct injury continues to be reported as a surgical complication. Intraoperative cholangiography is recommended to reduce the risk of bile duct injury during laparoscopic cholecystectomy. Intraoperative cholangiography using indocyanine green, which is excreted into bile and shows fluorescence under infrared light, has recently been reported as useful in preventing bile duct injury during laparoscopic cholecystectomy. We report here a case of laparoscopic cholecystectomy with an aberrant bile duct detected by intraoperative fluorescent cholangiography concomitant with angiography. Presentation Of Case: An 82-year-old woman was diagnosed with cholecystolithiasis and underwent laparoscopic cholecystectomy. An aberrant bile duct branching from the right side of the common hepatic duct was detected by intraoperative indocyanine green fluorescent cholangiography. Furthermore, we were able to confirm the cystic artery by reinjecting indocyanine green during the procedure. Laparoscopic cholecystectomy was performed safely without injuring the aberrant bile duct, despite no recognition of the abnormality on preoperative computed tomography or magnetic resonance imaging. Discussion And Conclusions: Aberrant bile ducts are a rare anatomical variation and are clinically important because of their susceptibility to injury during cholecystectomy. Our case is the first to report that fluorescence cholangiography concomitant with angiography was useful for identifying an aberrant bile duct and the cystic artery during laparoscopic cholecystectomy. abstract_id: PUBMED:21459631 Lower rate of major bile duct injury and increased intraoperative management of common bile duct stones after implementation of routine intraoperative cholangiography. Background: Our university medical center is the only center in The Netherlands that has adopted a policy of routine intraoperative cholangiography (IOC) during cholecystectomy. This study aimed to describe the rate of bile duct injury (BDI) and management of common bile duct (CBD) stones before and after implementation of a routine IOC policy. Study Design: Medical records were reviewed of all patients undergoing cholecystectomy in the 3 years before implementation of routine IOC and 3 years after.
Dissection with the goal of achieving the critical view of safety was the standard operative technique during the entire study period. Results: Four hundred and twenty-one patients underwent cholecystectomy with selective IOC and 435 patients with routine IOC. Groups were similar in age, sex, comorbidity, indication for surgery, and surgical approach. IOC was attempted in 5.9% in the selective IOC group and 59.8% in the routine IOC group (p < 0.001). The rate of major BDI was 1.9% in the selective IOC group and 0% in the routine IOC group (p = 0.004). The injuries consisted of 5 type B and 3 type D injuries, according to the Amsterdam classification. The rate of minor BDI did not differ substantially among the groups. More CBD stones were detected in the routine IOC group (4.8% versus 1.0%; p = 0.001) and they were managed intraoperatively more frequently (2.8% versus 0.7%; p = 0.023). There was a trend toward fewer preoperative and postoperative ERCPs and other interventions for CBD stones (19.1% versus 24.2%; p = 0.067). Conclusions: Implementation of routine IOC policy was followed by fewer major BDIs and higher rates of intraoperative CBD stone management. abstract_id: PUBMED:15494281 Endoscopic and surgical management of bile duct injury after laparoscopic cholecystectomy. Laparoscopic cholecystectomy has become the first choice of management for symptomatic cholecystolithiasis. While it is associated with decreased postoperative morbidity and mortality, bile duct injuries are reported to be more severe and more common (0-2.7%) when compared to open cholecystectomy (0.2-0.5%) [New Engl. J. Med. 234 (1991) 1073; Am. J. Surg. 165 (1993) 9; Surg. Clin. N Am. 80 (2000) 1127]. These bile duct injuries include leaks, strictures, transection and removal of (part of) the duct, with or without vascular damage. Bile duct injury might be due to misidentification of the biliary tract anatomy owing to acute cholecystitis, large impacted stones, a short cystic duct, or anatomical variations, but also to technical errors leading to bleeding with subsequent clipping and coagulation trauma [Ann. Surg. 237 (2003) 460]. Early recognition and an adequate multidisciplinary approach are the cornerstone for the optimal final outcome. Suboptimal management of injuries often leads to more extensive damage to the biliary tree and its vasculature, with biliary peritonitis, sepsis, abscesses, multiple organ failure, a more difficult (proximal) reconstruction and, in the long run, secondary biliary cirrhosis and liver failure as consequences. Despite increasing experience in performing laparoscopic cholecystectomy, the frequency of bile duct injuries has not decreased [Ann. Surg. 234 (2001) 549]. Therapy encompasses endoscopic stenting, percutaneous transhepatic dilatation (PTCD) and surgical reconstruction. abstract_id: PUBMED:15963227 Can bile duct injuries be prevented? "A new technique in laparoscopic cholecystectomy". Background: Over the last decade, laparoscopic cholecystectomy has gained worldwide acceptance and is considered the "gold standard" in the surgical management of symptomatic cholecystolithiasis. However, the incidence of bile duct injury in laparoscopic cholecystectomy is still twice that of classic open surgery. The development of bile duct injury may result in biliary cirrhosis and an increase in mortality rates. The most commonly blamed causative factor is misidentification of the anatomy, especially by a surgeon who is at the beginning of his learning curve.
Biliary tree injuries may be decreased by direct coloration of the cystic duct, ductus choledochus and even the gall bladder. Methods: The gall bladder fundus was punctured with a Veress needle and all the bile was aspirated. The same amount of fifty percent methylene blue diluted with saline solution was injected into the gall bladder for coloration of the biliary tree. The dissection of the Calot triangle was performed much more safely after coloration of the gall bladder, cystic duct and choledochus was obtained. Results: Between October 2003 and December 2004, a total of 46 patients (9 males) with a mean age of 47 (range 24 to 74) underwent laparoscopic cholecystectomy with the methylene blue injection technique. The diagnosis of chronic cholecystitis (with normal gall bladder wall thickness) was confirmed by pre-operative abdominal ultrasonography in all patients. The diameters of the stones were greater than 1 centimeter in 32 patients, and calculi of various sizes smaller than 1 cm were documented in 13 cases. One patient was operated on for a gall bladder polyp (our first case). Successful coloration of the gall bladder, cystic duct and ductus choledochus was possible in 43 patients, whereas only the gall bladder and proximal cystic duct were visualised in 3 cases. In these cases, the ductus choledochus could not be visualized. None of the patients developed bile duct injury. Conclusion: The number of bile duct injuries related to anatomic misidentification can be decreased and even eliminated by using the intraoperative methylene blue injection technique into the gall bladder fundus. abstract_id: PUBMED:15954837 Retained common bile duct stone as a consequence of a fundus-first laparoscopic cholecystectomy. The fundus-first technique for laparoscopic cholecystectomy provides an alternative to the conventional dissection technique in patients at high risk for conversion to open cholecystectomy or at risk for bile duct injury. We report the complication of a retained common bile duct (CBD) stone after utilizing this technique. Intraoperative cholangiography (IOC) was not performed due to the concern for causing CBD injury in a patient with significant periductal inflammation and no risk factors for CBD stones. Following discharge, the patient developed scleral icterus 3 days later and returned for evaluation. He required endoscopic retrograde cholangiopancreatography for removal of a CBD stone. None of the four series reporting on this technique have described this complication. It should now be recognized that there is a risk of displacing a gallstone into the CBD when utilizing this technique. This report highlights the importance of intraoperative imaging of the CBD when using this technique, even in patients considered to be at low risk for having CBD stones. If IOC is considered hazardous, then intraoperative ultrasound should be the modality of choice. abstract_id: PUBMED:17907984 A case of cystic duct drainage into the left intrahepatic duct and the importance of laparoscopic fundus-first cholecystectomy for prevention of bile duct injury. Laparoscopic cholecystectomy is the gold standard for the treatment of cholecystolithiasis, although it has been reported that the incidence of bile duct injury is higher for this method than for conventional open surgery. These injuries are mainly attributable to a misunderstanding of the biliary tract anatomy owing to severe cholecystitis, large impacted stones, and anatomic variations.
In this paper, we report the successful treatment of a 59-year-old male patient with cholecystolithiasis accompanied by an extremely unusual biliary junction, in which the cystic duct drained into the left hepatic duct, using the laparoscopic fundus-first cholecystectomy approach (i.e., the dome-down technique) without any serious intraoperative complications. Had the dome-down technique not been used in this particular case, the authors believe that a careless division of the cystic duct might have resulted in operative morbidity. abstract_id: PUBMED:36123939 Analysis of risk factors for bile duct injury in laparoscopic cholecystectomy in China: A systematic review and meta-analysis. Background: To explore the risk factors of bile duct injury in laparoscopic cholecystectomy (LC) in China through meta-analysis. Methods: The study commenced with a search and selection of case-control studies on the risk factors for bile duct injury in LC in China using the following databases: PubMed, EMBASE, ScienceNet.cn, CNKI, Wanfang Data, and VIP. Data were extracted from the collected documents independently by 2 researchers, following which a meta-analysis of these data was performed using Revman 5.3. Results: The compilation of all data from a total of 19 case-control studies revealed that among 41,044 patients, 458 patients experienced bile duct injury in LC, corresponding to an incidence rate of 1.12% for bile duct injury. The revealed risk factors for bile duct injury were age (≥40 years) (odds ratio [OR] = 6.23, 95% confidence interval [CI]: 3.42-11.33, P < .001), abnormal preoperative liver function (OR = 2.01, 95% CI: 1.50-2.70, P < .001), acute and subacute inflammation of the gallbladder (OR = 8.35, 95% CI: 5.32-13.10, P < .001; OR = 4.26, 95% CI: 2.73-6.65, P < .001), thickening of the gallbladder wall (≥4 mm) (OR = 3.18, 95% CI: 2.34-4.34, P < .001), cholecystolithiasis complicated with effusion (OR = 3.05, 95% CI: 1.39-6.71, P = .006), and the anatomic variations of the gallbladder triangle (OR = 11.82, 95% CI: 6.32-22.09, P < .001). However, the factors of gender and overweight (body mass index ≥ 25 kg/m2) were not significantly correlated with bile duct injury in LC. Conclusions: In the present study, age (≥40 years), abnormal preoperative liver function, gallbladder wall thickening, acute and subacute inflammation of the gallbladder, cholecystolithiasis complicated with effusion, and anatomic variations of the gallbladder triangle were found to be closely associated with bile duct injury in LC. abstract_id: PUBMED:27497939 Effectiveness of a barbed suture in the repair of bile duct injury during laparoscopic cholecystectomy: Report of two cases. Introduction: Bile duct injury during laparoscopic cholecystectomy occurs in rare cases. We report two cases in which a barbed suture was used to repair bile duct injury during laparoscopic cholecystectomy. Presentation Of Cases: The first patient was a 73-year-old woman who underwent elective laparoscopic cholecystectomy for cholecystolithiasis. When the gallbladder was dissected from the bed, bile spillage was observed and an injured small bile duct was detected. The bile duct could not be managed using a titanium clip. The second patient was an 83-year-old woman who underwent emergent laparoscopic cholecystectomy for gallbladder torsion. After the gallbladder was dissected from the bed, bile spillage was observed.
In both cases, a running suture of absorbable monofilament 3-0 barbed suture was used to laparoscopically repair the injuries. Discussion: Laparoscopic repair of a bile duct injury is technically challenging, especially in the gallbladder bed, where suturing is very difficult because of the tangential approach and the risk of additional liver laceration. Barbed sutures have the benefit of being knotless; thus, performing a running suture is not difficult, even in laparoscopic procedures. Further, absorbable and monofilament threads are generally better suited for biliary surgery, compared with non-absorbable and braided sutures, because of the potential association of the other types of materials with bile duct stone and stricture formation. Conclusion: We believe that the V-Loc™ device is an effective and appropriate option for bile duct injuries that occur during laparoscopic cholecystectomies, particularly around the gallbladder bed, and it is especially useful for surgeons unfamiliar with intracorporeal knot tying. Answer: Yes, drip infusion cholecystocholangiography (DIC) is considered an acceptable preoperative diagnostic modality at cholecystectomy for cholecystolithiasis. This conclusion is based on an evaluation that took into account the medical cost and invasiveness of the procedure, as well as the frequency of bile duct maljunction (BDM) and related bile duct injuries. In a study involving 469 patients with benign biliary diseases who were examined with DIC and endoscopic retrograde cholangiography and underwent cholecystectomy, 21 (4.48%) had a cystic duct maljunction (CDM) and 12 (2.56%) had an aberrant bile duct (ABD); preoperative imaging identified the maljunction in 14 of these 33 patients (42.4%). Operative bile duct injury occurred in only 1 patient (0.2%), whose maljunction could not be diagnosed before the operation. Therefore, considering these factors, DIC is deemed an acceptable modality for preoperative diagnosis in the context of cholecystectomy for cholecystolithiasis (PUBMED:15127278).
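All of the rates in this answer are simple proportions reported in PUBMED:15127278. A minimal sketch of the underlying arithmetic, with Wilson 95% confidence intervals added purely for illustration (the abstract itself reports point estimates only):

from statsmodels.stats.proportion import proportion_confint

# Counts taken from PUBMED:15127278; the Wilson intervals are our own addition.
counts = {
    "cystic duct maljunction (CDM)": (21, 469),
    "aberrant bile duct (ABD)": (12, 469),
    "BDM diagnosed preoperatively": (14, 33),
    "operative bile duct injury": (1, 469),
}
for label, (k, n) in counts.items():
    low, high = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label}: {k}/{n} = {k / n:.2%} (95% CI {low:.2%}-{high:.2%})")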
Instruction: Can a Neutrophil-Lymphocyte Ratio Derived from Preoperative Blood Tests Predict Arteriovenous Fistula Maturation? Abstracts: abstract_id: PUBMED:27263823 Can a Neutrophil-Lymphocyte Ratio Derived from Preoperative Blood Tests Predict Arteriovenous Fistula Maturation? Background: To evaluate the association between inflammatory markers expressed as the neutrophil-lymphocyte ratio (NLR) and the process of arteriovenous fistula (AVF) maturation. Methods: A retrospective review of patients with end-stage renal disease referred for formation of a new AVF in one center. Patients were referred to the vascular service in the University Hospital of Limerick for creation of vascular access between 2009 and 2013. The association between NLR, calculated from preoperative blood tests, and functional fistula maturation, as determined by successful use of the AVF for 6 consecutive hemodialysis sessions, was investigated using univariate and logistic regression analyses. A logistic regression analysis was carried out to assess potential influence from other factors related to AVF maturation. A stepwise regression test was performed, including the NLR. Results: The overall AVF functional maturation rate in our study was 53.7% (66/123). Patients with matured AVFs had an NLR of 4.850 (1.8-15.7) compared with 3.554 (1.7-15.0); this difference was found to be significant (P = 0.024). Female gender (P = 0.008) and a history of kidney transplant (P = 0.004) were the only independent predictors of fistula maturation outcome in a logistic regression model. Conclusion: An increased level of NLR was found to be associated with fistula maturation; however, more studies are needed to validate this finding. abstract_id: PUBMED:30741073 Role of neutrophil-to-lymphocyte ratio as a prognostic indicator for hemodialysis arteriovenous fistula failure. Objective: To evaluate the role of preoperative peripheral blood neutrophil-to-lymphocyte ratio in predicting early arteriovenous fistula failure. The impact of postoperative neutrophil-to-lymphocyte ratio on arteriovenous fistula failure was also investigated. Methods: Medical records from 470 patients who underwent first-time arteriovenous fistula creation were studied. Demographic, clinical, and laboratory data were obtained. Receiver operating characteristic curves were constructed to determine the optimal cutoff values of preoperative and postoperative neutrophil-to-lymphocyte ratios for predicting early arteriovenous fistula failure. Univariate and multivariate analyses were performed to evaluate whether the neutrophil-to-lymphocyte ratio parameters were prognostic indicators for arteriovenous fistula failure. Results: Complete data from 396 patients were studied. The prevalence of early arteriovenous fistula failure was 30.6%. The optimal cutoff values of preoperative and postoperative neutrophil-to-lymphocyte ratios for predicting early arteriovenous fistula failure were 2.7 (82.6% sensitivity and 52.0% specificity) and 2.9 (78.5% sensitivity and 73.1% specificity), respectively. By univariate and multivariate analyses, high preoperative and postoperative neutrophil-to-lymphocyte ratios were significantly associated with early arteriovenous fistula failure. The adjusted odds ratios were 5.46 (95% confidence interval: 3.15-9.48) and 7.19 (95% confidence interval: 4.12-12.55), respectively. Conclusions: High preoperative and postoperative neutrophil-to-lymphocyte ratios significantly predict early arteriovenous fistula failure.
abstract_id: PUBMED:29147162 High Preoperative Neutrophil-Lymphocyte Ratio (NLR) and Red Blood Cell Distribution Width (RDW) as Independent Predictors of Native Arteriovenous Fistula Failure. Objective: To study the association of a high preoperative neutrophil-lymphocyte ratio (NLR) and red cell distribution width (RDW) with arteriovenous fistula (AVF) failure, as well as to determine the cut-off values in a South Asian population. Materials And Methods: A total of 150 consecutive patients with a failed fistula who presented to the Department of Vascular Surgery between January 2014 and January 2016 and fulfilled the inclusion criteria were defined as cases. They were compared with 150 patients who had matured fistulae (controls). Results: A significant difference was found between the case and control groups in mean preoperative NLR (3.3±0.5 versus 2.2±0.9, P = 0.011) and RDW (15.9±2.9 versus 13.6±1.1%, P = 0.02). Multivariate analysis revealed that NLR (odds ratio [OR] 1.39; 95% confidence interval [CI] 1.02 to 2.08; P < 0.001) and RDW (OR 1.39; 95% CI 1.11 to 1.69; P < 0.001) were strong independent predictors of AVF failure. A receiver operating characteristic curve analysis showed a cut-off value of 2.65 (specificity 80%, sensitivity 98%) and 15.1 (specificity 79%, sensitivity 98%) for NLR and RDW, respectively. Conclusion: Increased preoperative NLR and RDW were associated with a high rate of AVF failure in a South Asian population. abstract_id: PUBMED:34412532 Predictive value of neutrophil-to-lymphocyte ratio and platelet-to-lymphocyte ratio for primary patency of percutaneous transluminal angioplasty in hemodialysis arteriovenous fistula stenosis. Objective: The aim of this study is to investigate the potential association of the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) with the primary patency of percutaneous transluminal angioplasty (PTA) in hemodialysis arteriovenous fistula (AVF) stenosis. Methods: This study conducted a retrospective review of patients with end-stage renal disease referred for hemodialysis AVF stenosis in one center. The study consisted of 114 patients with significant stenosis (defined as a reduction in the caliber of the fistula vein of > 50% with respect to the non-aneurysmal venous segment). AVF stenosis patients were treated with PTA using conventional balloon angioplasty. The NLR and PLR were calculated from the pre-interventional blood samples. The patients were classified into two groups: group A, primary patency < 12 months (n = 35), and group B, primary patency ≥ 12 months (n = 79). Comparisons between the groups were performed using the Mann-Whitney U test. Kaplan-Meier analysis was performed to compare the factors, NLR and PLR, for association with AVF primary patency. A receiver-operating characteristic curve analysis was performed to identify the sensitivity and specificity of the NLR and PLR cut-off values in the prediction of primary patency time. Results: There was no difference in gender; age; side of AVF; AVF type; comorbid diseases such as diabetes mellitus and hypertension; or blood parameters such as white cell count, erythrocytes, hemoglobin, neutrophils, lymphocytes, monocytes, eosinophils, basophils, C-reactive protein, NLR, or PLR between the two groups (p > 0.05).
There was also no significant difference in the patency rate between the NLR < 4.13 and NLR ≥ 4.13 groups at 12 months (NLR cut-off point = 4.13, p = 0.273). There were statistically significant differences between the primary patency rates of the PLR < 187.86 and PLR ≥ 187.86 groups at 12 months (PLR cut-off point = 187.86, p = 0.023). The cut-off value for PLR for the determination of primary patency was 187.86, with a sensitivity of 57.0% and specificity of 34.4%. Conclusion: An increased level of PLR may be a risk factor for the development of early AVF restenosis after successful PTA. However, more studies are needed to validate this finding. abstract_id: PUBMED:31693714 Maturation of arteriovenous fistulas in patients with and without preexisting hemodialysis catheters. Background: Central venous catheter (CVC) placement has been linked to systemic inflammation. This study was conducted to compare the successful maturation of arteriovenous fistulas (AVFs) and the preoperative white blood cell (WBC) profiles between patients with and without preexisting CVCs. Materials And Methods: A retrospective cohort study was conducted with 550 patients who underwent first-time AVF creation. Patients were divided into three groups according to preexisting CVCs and CVC types as follows: tunneled CVC (n = 23), nontunneled CVC (n = 223), and no CVC (n = 304). These three groups were compared in terms of AVF maturation and preoperative WBC profiles. Results: The mean age of the patients was 61.1 ± 14.3 years. The AVF maturation rates of the tunneled CVC, nontunneled CVC and no CVC groups were 65.2%, 54.7% and 74.7%, respectively (p < 0.001). According to the uni- and multivariate analyses, only the nontunneled CVC group had lower odds of AVF maturation compared to the no CVC group (adjusted odds ratio 0.43; 95% confidence interval 0.29-0.62). Patients with preexisting nontunneled CVC but not tunneled CVC also had significantly higher mean WBC and neutrophil counts but a lower percentage of lymphocytes than patients with no preexisting CVC. Conclusion: Preexisting nontunneled CVC had a negative impact on the successful maturation of the newly created AVF. Patients in the nontunneled CVC group had the highest preoperative WBC and neutrophil counts but the lowest lymphocyte percentage compared to patients in the other two groups. abstract_id: PUBMED:31642376 Preoperative noninvasive artery flow volume and maturation of arteriovenous fistulae: A single surgeon's experience from 403 cases. Background: Preoperative Doppler ultrasound evaluation of the arteriovenous fistula inflow artery includes measurements of arterial diameter and flow volume. The purpose of this study was to evaluate the significance of flow volume to arteriovenous fistula maturation rate. Study Design: Review of consecutive patients who underwent arteriovenous fistula creation by a single surgeon. Cases with available preoperative arterial diameter and flow volume were analyzed. The primary end point was arteriovenous fistula failure to mature. Information collected included demographics, Doppler ultrasound reports, level of inflow artery, operative reports, and outcomes to the time of arteriovenous fistula maturation or failure. Risk factors were identified by logistic regression analysis. Outcomes were compared by odds ratio. Results: Four hundred and three cases were identified. Arterial diameter and flow volume were both independent significant risk factors affecting arteriovenous fistula maturation rate (p = 0.001).
Arterial diameter of <2.5 mm and flow volume of <20 mL/min predicted failure to mature with 95% specificity. Further comparison of cases with optimal arterial diameter but flow volume of <20 mL/min showed an increased failure-to-mature rate compared to the combination of optimal arterial diameter with optimal flow volume (p = 0.01). Conclusion: Preoperative arterial diameter and flow volume values were both significant independent variables affecting arteriovenous fistula maturation rate. However, flow volume of <20 mL/min remained a significant risk factor for failure to mature, despite optimal arterial diameter. abstract_id: PUBMED:34749361 Association between Preoperative Monocyte-to-Lymphocyte Ratio and Late Arteriovenous Fistula Dysfunction in Hemodialysis Patients: A Cohort Study. Background: Inflammation appears to be at the biological core of arteriovenous fistula (AVF) dysfunction, and the occurrence of AVF dysfunction is associated with high rates of death and disability in hemodialysis (HD) patients. Despite several studies on the correlations between AVF dysfunction and inflammatory indicators, how AVF dysfunction is related to the monocyte-to-lymphocyte ratio (MLR) remains unclear. We hypothesize that preoperative MLR is associated with AVF dysfunction in Chinese HD patients. Methods: In this single-center retrospective cohort study, a total of 769 adult HD patients with a new AVF created between 2011 and 2019 were included. Association of preoperative MLR with AVF dysfunction (thrombosis or decrease of normal vessel diameter by >50%, requiring either surgical revision or percutaneous transluminal angioplasty) was assessed by multivariable Cox proportional hazard regression. Results: The patients were aged 55.8 ± 12.2 years and were mostly males (55%). During the average 32-month follow-up (maximum 119 months), 223 (29.0%) patients had permanent vascular access dysfunction. In adjusted multivariable Cox proportional hazard regression analyses, the risk of AVF dysfunction was 4.32 times higher with a 1 unit increase in MLR (hazard ratio [HR]: 5.32; 95% confidence interval [CI]: 3.1-9.11). Compared with patients with MLR <0.28, the HRs associated with an MLR of 0.28-0.41 and ≥0.41 were 1.54 (95% CI: 1.02-2.32) and 3.17 (2.18-4.62), respectively. Conclusions: A higher preoperative MLR is independently associated with a higher risk of AVF dysfunction in HD patients. Its clinical value should be determined in the future. abstract_id: PUBMED:31884875 Relationship of arteriovenous fistula stenosis and thrombosis with the platelet-lymphocyte ratio in hemodialysis patients. Background: The platelet-lymphocyte ratio, which was reported to have a strong relationship with chronic inflammation and thrombosis, is a useful biomarker. The purpose of this study was to evaluate the relationship between the platelet-lymphocyte ratio, arteriovenous stenosis, and thrombosis in patients with chronic renal failure. Methods: Patients who were referred to our interventional radiology department due to arteriovenous fistula dysfunction from dialysis units between August 2015 and December 2018 were retrospectively reviewed. In the study, 95 patients with arteriovenous fistula access problems were included. Patients were divided into two groups: stenosis (n = 52) and thrombosis (n = 43). Thirty-six subjects with a patent left radiocephalic arteriovenous fistula, proven both by color Doppler ultrasonography and clinically, were added to the control group.
Blood samples were obtained on the same day before the fistulography. Results: Platelet counts, lymphocyte counts, and platelet-lymphocyte ratio were found to be significantly different between the three groups. After the Bonferroni post hoc analysis, there was a significant difference between the stenosis and control groups (p = 0.017), and between the thrombosis and control groups (p < 0.001), in terms of the platelet-lymphocyte ratio. No significant difference for any parameter was found between the stenosis and thrombosis groups. Conclusion: High levels of the platelet-lymphocyte ratio may be a supportive finding of arteriovenous fistula stenosis and thrombosis and can be taken into consideration during the follow-up of hemodialysis-dependent patients. abstract_id: PUBMED:23535043 A systematic review of preoperative duplex ultrasonography and arteriovenous fistula formation. Background: Arteriovenous fistula (AVF) formation for dialysis access is a common procedure. Fistula maturation is unpredictable. Preoperative duplex mapping may increase procedural success. We undertook a systematic review to assess the effect of preoperative duplex mapping on subsequent AVF patency. Methods: The published literature was searched on PubMed and the Cochrane Library using the following keywords: 'arteriovenous fistula,' 'venous mapping,' 'ultrasound,' 'hemodialysis,' 'vascular access,' and 'perioperative vessel mapping.' Conference proceedings were hand searched for otherwise unpublished trials. Only randomized controlled trials in which preoperative duplex mapping was compared with clinical evaluation were eligible. Results: Three trials (402 patients) were identified. More patients who underwent ultrasound successfully started using their fistula for dialysis access, although the difference did not reach statistical significance (174/214 vs 130/188; pooled odds ratio, 1.96; P = .11). Conclusions: Preoperative duplex mapping may improve fistula maturation rates. However, the results do not reach statistical significance and there are no cost-effectiveness data. Further work is required. abstract_id: PUBMED:33091513 Reliability of preoperative venous mapping ultrasonography in predicting for autogenous arteriovenous fistula maturation. Background: Autogenous arteriovenous fistula creation is the preferred route for vascular access for hemodialysis. Although preoperative venous mapping ultrasonography has been advocated as an operative planning adjunct and recently incorporated into the Society for Vascular Surgery clinical guidelines, controversy remains regarding its usefulness for predicting access success. The purpose of the present retrospective clinical study was to test the hypothesis that vein size measured on routine preoperative venous mapping is a poor predictor of primary fistula maturation. Methods: Consecutive upper extremity autogenous arteriovenous fistulas created by three dedicated vascular surgeons were retrospectively reviewed. The demographic characteristics, preoperative venous mapping, functional maturation, and patency were analyzed. The clinically relevant variables were tested for predictive significance using a logistic regression model. Results: A total of 199 upper extremity autogenous arteriovenous fistulas had been created during a 5-year period. Patients were aged 70 ± 16 years (range, 20-96 years), and 62% were men. Most had already been undergoing dialysis before fistula creation (83%), usually via a tunneled central venous catheter (62%).
Radial-cephalic, brachial-cephalic, and brachial-basilic arteriovenous fistulas had been created in 82 patients (41%), 76 patients (38%), and 10 patients (5%), respectively. Fistula maturation, defined as a palpable thrill and/or successful cannulation of the fistula with the ability to deliver a flow rate of 400 mL/min, was achieved in 67% of the patients. A higher body mass index was associated with nonmaturation on both univariate and multivariate analyses (success, 28.6 ± 7.7 kg/m2; vs failed, 31.6 ± 9.4 kg/m2; P = .029; odds ratio [OR], 1.06; 95% confidence interval [CI], 1.02-1.10; P < .01). On univariate analysis, the maximum target vein diameter assessed by preoperative venous mapping was slightly larger in the group achieving successful maturation (2.9 ± 1.1 mm vs 2.6 ± 0.9 mm; P = .014). However, neither the maximum target vein diameter nor a target vein size >3 mm was significantly predictive of maturation on multivariate analysis (maximum vein diameter: OR, 0.65; 95% CI, 0.35-1.22; P = .176; vein size >3 mm: OR, 0.91; 95% CI, 0.32-2.60; P = .857). After a median follow-up of 15 months (interquartile range, 26 months), the primary functional patency, primary-assisted patency, and secondary patency rates were 39.1% ± 0.6%, 94.5% ± 0.6%, and 97.9% ± 0.5%. No association of vein diameter with long-term patency was found. Conclusions: Despite the national fistula-first initiatives, most patients still undergo access via catheter at the initiation of hemodialysis. The use of routine preoperative venous mapping does not predict successful primary maturation. Also, no clinically useful predictor of fistula maturation was identified in the present study. Answer: The neutrophil-lymphocyte ratio (NLR) derived from preoperative blood tests has been investigated as a potential predictor of arteriovenous fistula (AVF) maturation in several studies. The findings suggest that there may be an association between NLR and AVF maturation outcomes. One study found that patients with matured AVFs had a significantly higher NLR compared to those with non-matured AVFs, indicating that an increased level of NLR was associated with fistula maturation. However, the study also identified female gender and a history of kidney transplant as the only independent factors of fistula maturation outcome in their logistic regression model, suggesting that more studies are needed to validate the finding of NLR as a predictor (PUBMED:27263823). Another study reported that high preoperative and postoperative NLRs were significantly associated with early arteriovenous fistula failure. They determined optimal cutoff values for preoperative and postoperative NLRs for predicting early AVF failure and found that high NLRs were significant prognostic indicators for AVF failure (PUBMED:30741073). Similarly, a study in a South Asian population found that a high preoperative NLR and red blood cell distribution width (RDW) were strong independent predictors of AVF failure. They established cutoff values for NLR and RDW that were associated with a high rate of AVF failure (PUBMED:29147162). However, another study found no significant difference in the patency rate between groups with different NLR levels, suggesting that NLR may not be a reliable predictor for primary patency of percutaneous transluminal angioplasty in hemodialysis AVF stenosis (PUBMED:34412532). In conclusion, while some studies have found a significant association between NLR and AVF maturation or failure, the evidence is not entirely consistent.
More research is needed to validate the predictive value of NLR for AVF maturation and to establish its clinical utility.
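The ratios discussed throughout these abstracts are plain quotients of absolute cell counts from a routine complete blood count: NLR = neutrophils/lymphocytes, PLR = platelets/lymphocytes, and MLR = monocytes/lymphocytes. A minimal sketch with hypothetical counts; the cutoff shown is the study-specific value from PUBMED:30741073, not a validated universal threshold:

def blood_ratios(neutrophils, lymphocytes, platelets, monocytes):
    """Compute NLR, PLR and MLR from absolute counts in the same units."""
    return {
        "NLR": neutrophils / lymphocytes,
        "PLR": platelets / lymphocytes,
        "MLR": monocytes / lymphocytes,
    }

# Hypothetical preoperative counts in 10^3 cells/uL.
r = blood_ratios(neutrophils=5.6, lymphocytes=1.4, platelets=310, monocytes=0.6)
print(r)  # {'NLR': 4.0, 'PLR': 221.43..., 'MLR': 0.43...}

PREOP_NLR_CUTOFF = 2.7  # reported with 82.6% sensitivity and 52.0% specificity
print("above preoperative NLR cutoff" if r["NLR"] >= PREOP_NLR_CUTOFF
      else "below preoperative NLR cutoff")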
Instruction: Does raloxifene therapy affect mammographic breast cancer screening in postmenopausal patients? Abstracts: abstract_id: PUBMED:16620065 Does raloxifene therapy affect mammographic breast cancer screening in postmenopausal patients? Objective: The aim of the study was to determine mammographic breast density changes during raloxifene therapy in postmenopausal patients. Materials And Methods: Fifty-five cases who were using raloxifene therapy were included in this retrospective analysis. Raloxifene was given for osteopenia and osteoporosis on the basis of low bone mineral density measured by dual-energy X-ray absorptiometry (DEXA). None of the patients were using hormone replacement therapy 12 months before the initiation of raloxifene treatment or during the study. Mammographic breast density was determined by mammography before the initiation of raloxifene treatment (baseline) and after 12 to 16 months of therapy. The Breast Imaging Reporting and Data System (BI-RADS) breast density score was used for the evaluation of mammographic density. Results: There was no change in mammographic breast density when the baseline and the first mammography taken after the initiation of therapy were compared (p = 0.32). There was no significant correlation between the duration of raloxifene treatment and mammographic density measured after raloxifene treatment (r = -0.158, p = 0.25). Only in one patient did the BI-RADS classification change from 2 to 3 after 12 months of therapy. Conclusions: In conclusion, raloxifene therapy for 12 to 16 months does not increase mammographic breast density in postmenopausal women with low bone mass. abstract_id: PUBMED:11875329 Mammographic changes associated with raloxifene and tibolone therapy in postmenopausal women: a prospective study. Objective: The prolonged use of estrogen therapy is associated with a slightly increased risk of breast cancer. Alternative therapies that are effective in the prevention of menopause-associated morbidities but have no unwanted effects are of primary interest in pharmacologic research. The aim of this study was to evaluate the effect of two alternatives to estrogen, the selective estrogen receptor modulator raloxifene and the tissue-specific tibolone, on the mammographic appearance of the breast. Design: The study group comprised 131 postmenopausal women aged 41 to 67 years. The women were at least 2 years postmenopausal, free of climacteric symptoms, and at the time of entry to the study had not had therapy for at least 9 months. Women with risk factors for osteoporosis or cardiovascular disease were allocated either to tibolone (n = 56) or raloxifene (n = 48) therapy. Women with no risk factors and women who either did not qualify for or declined treatment (n = 27) served as controls. The study duration was 12 months. Women received a baseline mammogram before commencing therapy and a repeat mammogram at the end of the study period. Mammogram findings were classified according to the modified Wolfe criteria by two expert radiologists. Results: No difference was identified between groups with respect to baseline characteristics associated with breast cancer risk. Similarly, no difference was detected between groups concerning the modified Wolfe classification of baseline mammographic findings. In the tibolone group, 10.7% of the women showed an increase in breast density in the 12-month reevaluation.
The respective figure in the raloxifene group was 6.3%, whereas no woman in the control group showed an increase in breast density. Differences in the increase in breast density between groups did not, however, reach statistical significance. Accordingly, 10.7% of women in the tibolone group and 18.8% of women in the raloxifene group exhibited involutionary changes in the repeat mammogram, whereas 25.9% of women in the control group revealed a decrease in breast density in the 12-month examination. The percentages were not significantly different between groups. Conclusions: Breast density as shown by mammography was stable in a majority of patients and changed in a minority of cases for both tibolone and raloxifene. In most patients, these drugs are not likely to interfere with mammogram interpretation. Larger long-term studies are needed to confirm the impact of prolonged tibolone or raloxifene administration on mammography. abstract_id: PUBMED:18355366 Differential impact of conventional and low-dose oral hormone therapy, tibolone and raloxifene on mammographic breast density, assessed by an automated quantitative method. Objective: To evaluate the impact of different postmenopausal hormone therapy (HT) regimens and raloxifene on mammographic breast density. Design: Open, randomised, comparative clinical trial. Setting: Women were recruited through local newspapers and posters. They were examined at the Departments of Haematology, Gynaecology, and Radiology in a University Hospital. Population: A total of 202 healthy postmenopausal women between the ages of 45 and 65 years. Methods: Women were randomly assigned to receive daily treatment for 12 weeks with tablets of low-dose HT containing 1 mg 17 beta-estradiol + 0.5 mg norethisterone acetate (NETA) (n = 50), conventional-dose HT containing 2 mg 17 beta-estradiol and 1 mg NETA (n = 50), 2.5 mg tibolone (n = 51), or 60 mg raloxifene (n = 51). Mammographic density was determined at baseline and after 12 weeks by an automated technique in full-field digital mammograms. Main Outcome Measures: Mammographic density was expressed as volumetric breast density estimations. Results: Mammographic breast density increased significantly and to a similar degree in both the conventional- and low-dose HT groups. A small reduction in mammographic breast density was seen in the raloxifene group, whereas those allocated to tibolone treatment only showed minor changes. Conclusions: Our findings demonstrated a significant difference in impact on mammographic breast density between the regimens. Although these results indicate a differential effect of these regimens on breast tissue, the relation to breast cancer risk remains unresolved. abstract_id: PUBMED:22493366 A pilot study of letrozole for one year in women at enhanced risk of developing breast cancer: effects on mammographic density. Background: Tamoxifen or raloxifene for 5 years reduces the risk of developing invasive breast cancer by 40%. To address safety concerns and seek enhanced efficacy, studies of new chemopreventive agents using mammographic density as a surrogate end point are attractive. Patients And Methods: Postmenopausal women with risk factors for developing breast cancer were given letrozole 2.5 mg daily for one year, and mammographic density was the biomarker of breast cancer risk modification. It was assessed (with the reader blinded) at baseline, 6, and 12 months in 16 evaluable women among 20 enrolled.
Results: Eight patients exhibited decreased mammographic density at six months, and eleven at 12 months. Toxicities included joint aches not precluding continued treatment. Conclusion: This pilot study supports the use of letrozole for reducing breast cancer risk. In addition, it encourages prospective studies of serial changes in mammographic density as a biomarker of risk modification within a selected high-risk population. abstract_id: PUBMED:20843165 Patient characteristics and utilization of breast cancer screening or diagnostic procedures prior to initiation of raloxifene, bisphosphonates and calcitonin. Objective: This study evaluated the characteristics of postmenopausal women who initiated on raloxifene, bisphosphonates, and calcitonin, specifically evaluating the use of breast cancer screening or diagnostic procedures prior to initiation of therapy. Research Design And Methods: Women 50 years and older with at least one claim for raloxifene (RLX), bisphosphonates (BIS), or calcitonin (CT) in 2005 or 2006 and continuous enrollment (with consecutive gaps in enrollment of no more than 1 month) from January 2004 to December 2007 were identified in a large national commercial and Medicare claims database. Treatment-naïve postmenopausal women initiating on raloxifene, bisphosphonates, and calcitonin were compared in terms of breast cancer screening or diagnostic procedures (i.e., mammogram, breast MRI, ultrasound, and biopsy) as well as age, provider specialty, fractures, bone mineral density screening, Chronic Disease Scores, and comorbidities. Results: Treatment-naïve patients initiated on raloxifene were younger than those initiated on bisphosphonates and calcitonin (mean age 63 years [RLX], 66 years [BIS], 72 years [CT]; p < 0.05). Treatment-naïve patients initiated on raloxifene were more likely to have had breast cancer screening or diagnostic procedures in the 12 months prior to therapy initiation than treatment-naïve bisphosphonate or calcitonin patients (RLX 61%, BIS 57%, CT 41%; p < 0.05), and were more likely to have an increased frequency of mammograms in the 12 months after therapy initiation (RLX 18%, BIS 16%, CT 15%; p < 0.05). Calcitonin patients were the most likely to have had a fracture in the pre-period, followed by bisphosphonate and then raloxifene patients. Conclusion: These data suggest that there are differences in the clinical characteristics of postmenopausal women who initiate osteoporosis medications, specifically in regard to age, pre-period fractures, and breast cancer screening or diagnostic procedure use prior to initiation. Key limitations include general claims database limitations, lack of ability to assess behavior change, and lack of information on therapy initiation rationale. abstract_id: PUBMED:20032799 The effects of raloxifene on mammographic breast density: a review of clinical trials. Objective: Breast density is an independent risk factor for the development of invasive breast cancer (BC). It has been hypothesized that because raloxifene (RLX) has been shown to reduce BC risk, its use will result in reduced breast density. Methods: This article provides a review of seven clinical studies that examined the effects of RLX on breast density. Results: Overall, RLX did not increase or decrease mammographic breast density. This article provides a review of the various methods used to determine breast density in these RLX studies and offers a potential explanation as to why the studies failed to show an effect on mammographic density.
Conclusions: Presently, no clinical recommendations can be made with regard to RLX and its effects on breast density. To determine the effect of RLX on breast density, larger studies need to be conducted in postmenopausal women with high breast density at baseline who are at high risk for BC, with a standardized method of breast density measurement. abstract_id: PUBMED:23237718 Positive and negative affect, depression, and cognitive processes in the Cognition in the Study of Tamoxifen and Raloxifene (Co-STAR) Trial. Objectives: This study examined the relationship between positive and negative affect, depressive symptoms, and cognitive performance. Methods: The sample consisted of 1479 non-demented, postmenopausal women (mean age = 67 years) at increased risk of breast cancer enrolled in the National Surgical Adjuvant Breast and Bowel Project's Study of Tamoxifen and Raloxifene. At each annual visit, women completed a standardized neuropsychological battery and self-report measures of affect and depression. Data from three visits were used in linear mixed models for repeated measures using likelihood ratio tests. Separate analyses were performed to relate positive/negative affect and depression to each cognitive measure. Results: Higher positive affect was associated with better letter fluency (p = .006) and category fluency (p < .0001). Higher negative affect was associated with worse global cognitive function (p < .0001), verbal memory (CVLT List B; p = .002), and spatial ability (p < .0001). Depressive symptoms were negatively associated with verbal knowledge (p = .004), figural memory (p < .0001), and verbal memory (p's ≤ .0001). Discussion: Findings are consistent with some prior research demonstrating a link between positive affect and increased verbal fluency and between depressive symptoms and decreased memory. The most novel finding shows that negative affect is related to decreased global cognition and visuospatial ability. Overall, this research in a large, longitudinal sample supports the notion that positive affect is related to increases and negative affect to decreases in performance on distinct cognitive measures. abstract_id: PUBMED:23060039 Baseline mammographic breast density and the risk of invasive breast cancer in postmenopausal women participating in the NSABP study of tamoxifen and raloxifene (STAR). Mammographic breast density is an established risk factor for breast cancer. However, results are inconclusive regarding its use in risk prediction models. The current study evaluated 13,409 postmenopausal participants in the NSABP Study of Tamoxifen and Raloxifene. A measure of breast density as reported on the entry mammogram report was extracted and categorized according to the American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) classifications. An increased risk of invasive breast cancer was associated with higher mammographic breast density (P < 0.001). The association remained significant after adjusting for age, treatment, and smoking history [HR 1.35, 95% confidence interval (CI): 1.16-1.58], as well as when added to a model including the Gail score (HR 1.33, 95% CI: 1.14-1.55). At five years after random assignment, time-dependent area under the curve (AUC) improved from 0.63 for a model with Gail score alone to 0.64 when considering breast density and Gail score. Breast density was also significant when added to an abbreviated model tailored for estrogen receptor-positive breast cancers (P = 0.02).
In this study, high BI-RADS breast density was significantly associated with increased breast cancer risk when considered in conjunction with Gail score but provided only slight improvement to the Gail score for predicting the incidence of invasive breast cancer. The BI-RADS breast composition classification system is a quick and readily available method for assessing breast density for risk prediction evaluations; however, its addition to the Gail model does not seem to provide substantial predictability improvements in this population of postmenopausal healthy women at increased risk for breast cancer. abstract_id: PUBMED:26289836 Selective estrogen receptor modulators and the combination therapy conjugated estrogens/bazedoxifene: A review of effects on the breast. Traditional menopausal hormone therapy containing estrogens/progestin has been associated with an increased risk of breast cancer, and estrogen exposure is known to promote growth and proliferation of a majority of breast cancers. Therefore, it is important for clinicians to consider the breast safety profile of any hormone-based therapy used in postmenopausal women. This review provides an overview of the breast safety and tolerability profiles of currently marketed selective estrogen receptor modulators, antiestrogens, and the first tissue selective estrogen complex combining conjugated estrogens with the selective estrogen receptor modulator bazedoxifene in postmenopausal women. Selective estrogen receptor modulators and antiestrogens act as estrogen receptor antagonists in the breast. Tamoxifen, toremifene, and the selective estrogen receptor degrader fulvestrant are used to treat breast cancer, and tamoxifen and raloxifene protect against breast cancer in high-risk women. Postmenopausal women using selective estrogen receptor modulators for prevention or treatment of osteoporosis (raloxifene, bazedoxifene) can be reassured that these hormonal treatments do not adversely affect their risk of breast cancer and may, in the case of raloxifene, even be protective. There are limited data on breast cancer in women who use ospemifene for dyspareunia. Conjugated estrogens/bazedoxifene use for up to two years did not increase mammographic breast density or breast pain/tenderness, and there was no evidence of an increased risk of breast cancer, suggesting that conjugated estrogens/bazedoxifene has an improved breast safety profile compared with traditional menopausal hormone therapies. Future research will continue to focus on development of selective estrogen receptor modulators and selective estrogen receptor modulator combinations capable of achieving the ideal balance of estrogen receptor agonist and antagonist effects. abstract_id: PUBMED:25253900 Breast Cancer Risk - Genes, Environment and Clinics. The information available about breast cancer risk factors has increased dramatically during the last 10 years. In particular, studies of low-penetrance genes and mammographic density have improved our understanding of breast cancer risk. In addition, initial steps have been taken in investigating interactions between genes and environmental factors. This review deals with current data on this topic. Several genome-wide association studies (GWASs) with a case-control design, as well as large-scale validation studies, have identified and validated more than a dozen single nucleotide polymorphisms (SNPs) associated with breast cancer risk.
They are located not only in or close to genes known to be involved in cancer pathogenesis, but also in genes not previously associated with breast cancer pathogenesis, or may not be related to any gene at all. SNPs have also been identified that alter the lifetime risk in BRCA mutation carriers. With regard to nongenetic risk factors, studies of postmenopausal hormone replacement therapy (HRT) have revealed important information on how to weigh up the risks and benefits of HRT. Mammographic density (MD) has become an accepted and important breast cancer risk factor. Lifestyle and nutritional considerations have become an integral part of most studies of breast cancer risk, and some improvements have been made in this field as well. More than 10 years after the publication of the first breast cancer prevention studies with tamoxifen, other substances such as raloxifene and aromatase inhibitors have been investigated and have also been shown to have preventive potential. Finally, mammographic screening systems have been implemented in most Western countries during the last decade. These may be developed further by including more individualized methods of predicting the patient's breast cancer risk. Answer: Raloxifene therapy does not appear to significantly affect mammographic breast density in postmenopausal women, which is an important factor in breast cancer screening. A retrospective analysis of 55 postmenopausal patients using raloxifene therapy for osteopenia and osteoporosis showed no change in mammographic breast density after 12 to 16 months of treatment, as measured by the Breast Imaging Reporting and Data System (BI-RADS) breast density score (PUBMED:16620065). Similarly, a prospective study involving 131 postmenopausal women found that only 6.3% of the women in the raloxifene group showed an increase in breast density after 12 months, which was not statistically significant compared to the control group (PUBMED:11875329). Another study evaluating the impact of different postmenopausal hormone therapy regimens on mammographic breast density found a small reduction in mammographic breast density in the raloxifene group (PUBMED:18355366). Furthermore, a review of clinical trials concluded that raloxifene did not increase or decrease mammographic breast density, suggesting that it does not interfere with mammogram interpretation (PUBMED:20032799). This is supported by the fact that postmenopausal women using raloxifene for prevention or treatment of osteoporosis can be reassured that these hormonal treatments do not adversely affect their risk of breast cancer and may even be protective (PUBMED:26289836). Overall, the evidence suggests that raloxifene therapy for postmenopausal women does not have a significant impact on mammographic breast density and, therefore, is unlikely to affect mammographic breast cancer screening outcomes.
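A brief note on the modeling approach in PUBMED:23060039 above: the incremental value of BI-RADS density over the Gail score is checked by fitting a survival model with and without the density term and comparing discrimination. The sketch below is a minimal illustration of that workflow on simulated data, not the STAR analysis itself; it assumes the lifelines package, uses the concordance index as a stand-in for the time-dependent AUC reported in the abstract, and all variable names, effect sizes, and data are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "gail": rng.normal(2.0, 0.8, n),     # simulated 5-year Gail risk score
    "birads": rng.integers(1, 5, n),     # simulated BI-RADS density category (1-4)
})
# Simulate follow-up so that a higher Gail score and denser breasts mean earlier events.
hazard = 0.01 * np.exp(0.3 * (df["gail"] - 2.0) + 0.3 * (df["birads"] - 2.5))
df["time"] = rng.exponential(1.0 / hazard)
df["event"] = (df["time"] < 5.0).astype(int)   # administrative censoring at 5 years
df["time"] = df["time"].clip(upper=5.0)

# Cox models: Gail score alone vs. Gail score plus breast density.
base = CoxPHFitter().fit(df[["time", "event", "gail"]], "time", "event")
full = CoxPHFitter().fit(df[["time", "event", "gail", "birads"]], "time", "event")
print(f"C-index, Gail alone:      {base.concordance_index_:.3f}")
print(f"C-index, Gail + density:  {full.concordance_index_:.3f}")
print(full.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```

On data simulated this way, the density term earns a hazard ratio above 1 but only a small bump in discrimination, mirroring the modest AUC gain (0.63 to 0.64) reported in the abstract.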
Instruction: Does age matter? Abstracts: abstract_id: PUBMED:34118094 A history of previous childbirths is linked to women's white matter brain age in midlife and older age. Maternal brain adaptations occur in response to pregnancy, but little is known about how parity impacts white matter and white matter ageing trajectories later in life. Utilising global and regional brain age prediction based on multi-shell diffusion-weighted imaging data, we investigated the association between previous childbirths and white matter brain age in 8,895 women in the UK Biobank cohort (age range = 54-81 years). The results showed that number of previous childbirths was negatively associated with white matter brain age, potentially indicating a protective effect of parity on white matter later in life. Both global white matter and grey matter brain age estimates showed unique contributions to the association with previous childbirths, suggesting partly independent processes. Corpus callosum contributed uniquely to the global white matter association with previous childbirths, and showed a stronger relationship relative to several other tracts. While our findings demonstrate a link between reproductive history and brain white matter characteristics later in life, longitudinal studies are required to establish causality and determine how parity may influence women's white matter trajectories across the lifespan. abstract_id: PUBMED:27919183 Accelerated Gray and White Matter Deterioration With Age in Schizophrenia. Objective: Although brain changes in schizophrenia have been proposed to mirror those found with advancing age, the trajectory of gray matter and white matter changes during the disease course remains unclear. The authors sought to measure whether these changes in individuals with schizophrenia remain stable, are accelerated, or are diminished with age. Method: Gray matter volume and fractional anisotropy were mapped in 326 individuals diagnosed with schizophrenia or schizoaffective disorder and in 197 healthy comparison subjects aged 20-65 years. Polynomial regression was used to model the influence of age on gray matter volume and fractional anisotropy at a whole-brain and voxel level. Between-group differences in gray matter volume and fractional anisotropy were regionally localized across the lifespan using permutation testing and cluster-based inference. Results: Significant loss of gray matter volume was evident in schizophrenia, progressively worsening with age to a maximal loss of 8% in the seventh decade of life. The inferred rate of gray matter volume loss was significantly accelerated in schizophrenia up to middle age and plateaued thereafter. In contrast, significant reductions in fractional anisotropy emerged in schizophrenia only after age 35, and the rate of fractional anisotropy deterioration with age was constant and best modeled with a straight line. The slope of this line was 60% steeper in schizophrenia relative to comparison subjects, indicating a significantly faster rate of white matter deterioration with age. The rates of reduction of gray matter volume and fractional anisotropy were significantly faster in males than in females, but an interaction between sex and diagnosis was not evident. Conclusions: The findings suggest that schizophrenia is characterized by an initial, rapid rate of gray matter loss that slows in middle life, followed by the emergence of a deficit in white matter that progressively worsens with age at a constant rate. 
abstract_id: PUBMED:34048307 Effects of Age on White Matter Microstructure in Children With Neurofibromatosis Type 1. Children with neurofibromatosis type 1 (NF1) often report cognitive challenges, though the etiology of such remains an area of active investigation. With the advent of treatments that may affect white matter microstructure, understanding the effects of age on white matter aberrancies in NF1 becomes crucial in determining the timing of such therapeutic interventions. A cross-sectional study was performed with diffusion tensor imaging from 18 NF1 children and 26 age-matched controls. Fractional anisotropy was determined by region of interest analyses for both groups over the corpus callosum, cingulate, and bilateral frontal and temporal white matter regions. Two-way analyses of variance were done with both ages combined and age-stratified into early childhood, middle childhood, and adolescence. Significant differences in fractional anisotropy between NF1 and controls were seen in the corpus callosum and frontal white matter regions when ages were combined. When stratified by age, we found that this difference was largely driven by the early childhood (1-5.9 years) and middle childhood (6-11.9 years) age groups, whereas no significant differences were appreciable in the adolescence age group (12-18 years). This study demonstrates age-related effects on white matter microstructure disorganization in NF1, suggesting that the appropriate timing of therapeutic intervention may be in early childhood. abstract_id: PUBMED:31271249 Paternal age contribution to brain white matter aberrations in autism spectrum disorder. Aim: Although advanced parental age carries an increased risk for autism spectrum disorder (ASD), its role as a potential risk factor for atypical white matter development underlying the pathophysiology of ASD has not yet been investigated. The current study aimed to detect white matter disparities in ASD, and further investigate the relationship of paternal and maternal age at birth with such disparities. Methods: Thirty-nine adult males with high-functioning ASD and 37 typically developing (TD) males were analyzed in the study. The FMRIB Software Library and tract-based spatial statistics were utilized to process and analyze the diffusion tensor imaging data. Results: Subjects with ASD exhibited significantly higher mean diffusivity (MD) and radial diffusivity (RD) in white matter fibers, including the association (inferior fronto-occipital fasciculus, right inferior longitudinal fasciculus, superior longitudinal fasciculi, uncinate fasciculus, and cingulum), commissural (forceps minor), and projection tracts (anterior thalamic radiation and right corticospinal tract) compared to TD subjects (adjusted P < 0.05). No differences were seen in either fractional anisotropy or axial diffusivity. Linear regression analyses assessing the relationship between parental ages and the white matter aberrations revealed a positive correlation between paternal age (PA), but not maternal age, and both MD and RD in the affected fibers (adjusted P < 0.05). Multiple regression showed that only PA was a predictor of both MD and RD. Conclusion: Our findings suggest that PA contributes to the white matter disparities seen in individuals with ASD compared to TD subjects. abstract_id: PUBMED:24361462 Differential vulnerability of gray matter and white matter to intrauterine growth restriction in preterm infants at 12 months corrected age.
Intrauterine growth restriction (IUGR) is associated with a high risk of abnormal neurodevelopment. Underlying neuroanatomical substrates are partially documented. We hypothesized that at 12 months preterm infants would evidence specific white-matter microstructure alterations and gray-matter differences induced by severe IUGR. Twenty preterm infants with IUGR (26-34 weeks of gestation) were compared with 20 term-born infants and 20 appropriate for gestational age preterm infants of similar gestational age. Preterm groups showed no evidence of brain abnormalities. At 12 months, infants were scanned sleeping naturally. Gray-matter volumes were studied with voxel-based morphometry. White-matter microstructure was examined using tract-based spatial statistics. The relationship between diffusivity indices in white matter, gray matter volumes, and perinatal data was also investigated. Gray-matter decrements attributable to IUGR comprised amygdala, basal ganglia, thalamus and insula bilaterally, left occipital and parietal lobes, and right perirolandic area. Gray-matter volumes positively correlated with birth weight exclusively. Preterm infants had reduced FA in the corpus callosum, and increased FA in the anterior corona radiata. Additionally, IUGR infants had increased FA in the forceps minor, internal and external capsules, uncinate and fronto-occipital white matter tracts. Increased axial diffusivity was observed in several white matter tracts. Fractional anisotropy positively correlated with birth weight and gestational age at birth. These data suggest that IUGR differentially affects gray and white matter development, preferentially affecting gray matter. At 12 months IUGR is associated with a specific set of structural gray-matter decrements. White matter follows an unusual developmental pattern, and is apparently affected by IUGR and prematurity combined. abstract_id: PUBMED:36358443 Interpretation for Individual Brain Age Prediction Based on Gray Matter Volume. The relationship between age and the central nervous system (CNS) in humans has been a classical issue that has aroused extensive attention. Especially for individuals, it is of far greater importance to clarify the mechanisms linking the CNS and age. The primary goal of existing methods is to use MR images to derive high-accuracy predictions for age or degenerative diseases. However, the associated mechanisms between the images and age have rarely been investigated. In this paper, we address the correlation between gray matter volume (GMV) and age, both in terms of the gray matter regions themselves and their interaction network, using interpretable machine learning models for individuals. Our goal is not only to predict age accurately but more importantly, to explore the relationship between GMV and age. In addition to targeting each individual, we also investigate the dynamic properties of gray matter and their interaction network with individual age. The results show that the mean absolute error (MAE) of age prediction is 7.95 years. More notably, specific locations of gray matter and their interactions play different roles in age, and these roles change dynamically with age. The proposed method is a data-driven approach, which provides a new way to study aging mechanisms and even to diagnose degenerative brain diseases. abstract_id: PUBMED:30323144 Prevalence of white matter hyperintensities increases with age.
White matter hyperintensities (WMHs) that arise with age and/or atherosclerosis constitute a heterogeneous disorder in the white matter of the brain. However, the relationship between age-related risk factors and the prevalence of WMHs is still obscure. More clinical data are needed to confirm the relationship between age and the prevalence of WMHs. We collected data on 836 patients treated in the Renmin Hospital, Hubei University of Medicine, China, from January 2015 to February 2016, for a case-control retrospective analysis. According to T2-weighted magnetic resonance imaging results, all patients were divided into a WMHs group (n = 333) and a non-WMHs group (n = 503). The WMHs group contained 159 males and 174 females. The prevalence of WMHs increased with age and was associated with age-related risk factors, such as cardiovascular diseases, smoking, drinking, diabetes, hypertension and history of cerebral infarction. There was no significant difference in sex, education level, hyperlipidemia and hyperhomocysteinemia among the different age ranges. These findings confirm that age is an independent risk factor for the prevalence and severity of WMHs. The age-related risk factors enhance the occurrence of WMHs. abstract_id: PUBMED:26446690 Age exacerbates HIV-associated white matter abnormalities. Both HIV disease and advanced age have been associated with alterations to cerebral white matter, as measured with white matter hyperintensities (WMH) on fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI), and more recently with diffusion tensor imaging (DTI). This study investigates the combined effects of age and HIV serostatus on WMH and DTI measures, as well as the relationships between these white matter measures, in 88 HIV seropositive (HIV+) and 49 seronegative (HIV-) individuals aged 23-79 years. A whole-brain volumetric measure of WMH was quantified from FLAIR images using a semi-automated process, while fractional anisotropy (FA) was calculated for 15 regions of a whole-brain white matter skeleton generated using tract-based spatial statistics (TBSS). An age by HIV interaction was found indicating a significant association between WMH and older age in HIV+ participants only. Similarly, significant age by HIV interactions were found indicating stronger associations between older age and decreased FA in the posterior limbs of the internal capsules, cerebral peduncles, and anterior corona radiata in HIV+ vs. HIV- participants. The interactive effects of HIV and age were stronger with respect to whole-brain WMH than for any of the FA measures. Among HIV+ participants, greater WMH and lower anterior corona radiata FA were associated with active hepatitis C virus infection, a history of AIDS, and higher current CD4 cell count. Results indicate that age exacerbates HIV-associated abnormalities of whole-brain WMH and fronto-subcortical white matter integrity. abstract_id: PUBMED:24983715 Age-related effects in the neocortical organization of chimpanzees: gray and white matter volume, cortical thickness, and gyrification. Among primates, humans exhibit the most profound degree of age-related brain volumetric decline in particular regions, such as the hippocampus and the frontal lobe. Recent studies have shown that our closest living relatives, the chimpanzees, experience little to no volumetric decline in gray and white matter over the adult lifespan. However, these previous studies were limited by a small sample of chimpanzees of the most advanced ages.
In the present study, we sought to further test for potential age-related decline in cortical organization in chimpanzees by expanding the sample size of aged chimpanzees. We used the BrainVisa software to measure total brain volume, gray and white matter volumes, gray matter thickness, and gyrification index in a cross-sectional sample of 219 captive chimpanzees (8-53 years old), with 38 subjects being 40 or more years of age. Mean depth and cortical fold opening of 11 major sulci of the chimpanzee brains were also measured. We found that chimpanzees showed increased gyrification with age and a cubic relationship between age and white matter volume. For the association between age and sulcus depth and width, the results were mostly non-significant with the exception of one negative correlation between age and the fronto-orbital sulcus. In short, results showed that chimpanzees exhibit few age-related changes in global cortical organization, sulcus folding and sulcus width. These findings support previous studies and the theory that the age-related changes in the human brain are due to an extended lifespan. abstract_id: PUBMED:38424358 White matter brain age as a biomarker of cerebrovascular burden in the ageing brain. As the brain ages, it almost invariably accumulates vascular pathology, which differentially affects the cerebral white matter. A rich body of research has investigated the link between vascular risk factors and the brain. One of the less studied questions is: among the various modifiable vascular risk factors, which is the most debilitating for white matter health? A white matter specific brain age was developed to evaluate the overall white matter health from diffusion weighted imaging, using a three-dimensional convolutional neural network deep learning model in both cross-sectional UK Biobank participants (n = 37,327) and a longitudinal subset (n = 1409). White matter brain age gap (WMBAG) was the difference between the white matter age and the chronological age. Participants with one, two, and three or more vascular risk factors, compared to those without any, showed an elevated WMBAG of 0.54, 1.23, and 1.94 years, respectively. Diabetes was most strongly associated with an increased WMBAG (1.39 years, p < 0.001) among all risk factors followed by hypertension (0.87 years, p < 0.001) and smoking (0.69 years, p < 0.001). Baseline WMBAG was associated significantly with processing speed, executive and global cognition. Significant associations of diabetes and hypertension with poor processing speed and executive function were found to be mediated through the WMBAG. White matter specific brain age can be successfully targeted for the examination of the most relevant risk factors and cognition, and for tracking an individual's cerebrovascular ageing process. It also provides a clinical basis for the better management of specific risk factors. Answer: Yes, age matters in the context of brain health and white matter integrity. Research has shown that age is associated with various changes in the brain's white matter and gray matter. For instance, the prevalence of white matter hyperintensities (WMHs), which are markers of cerebral small vessel disease, increases with age and is linked to age-related risk factors such as cardiovascular diseases, smoking, and hypertension (PUBMED:30323144).
Age exacerbates HIV-associated white matter abnormalities, with older HIV-positive individuals showing a significant association between WMH and decreased fractional anisotropy in certain brain regions (PUBMED:26446690). In schizophrenia, there is an accelerated deterioration of gray matter with age, and white matter deterioration also worsens with age at a constant rate (PUBMED:27919183). In children with neurofibromatosis type 1 (NF1), age-related effects on white matter microstructure suggest that early childhood may be the most appropriate timing for therapeutic intervention (PUBMED:34048307). Additionally, paternal age has been shown to contribute to white matter disparities in autism spectrum disorder (PUBMED:31271249). Intrauterine growth restriction (IUGR) in preterm infants affects gray and white matter development differently, with specific white matter tracts being affected by IUGR and prematurity combined (PUBMED:24361462). Moreover, chimpanzees exhibit few age-related changes in global cortical organization, which contrasts with the more profound age-related brain volumetric decline observed in humans (PUBMED:24983715). A study on the UK Biobank cohort found that the number of previous childbirths was negatively associated with white matter brain age in women, suggesting a potential protective effect of parity on white matter in later life (PUBMED:34118094). Another study developed a white matter-specific brain age as a biomarker of cerebrovascular burden, finding that diabetes, hypertension, and smoking are associated with increased white matter brain age gap (WMBAG), which in turn is linked to cognitive performance (PUBMED:38424358). In summary, age plays a significant role in the structural and functional changes of the brain's white matter and gray matter, and these changes can have implications for cognitive health and disease progression.
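Several of the abstracts above (PUBMED:34118094, PUBMED:38424358) rest on the same "brain age" construct: a model is trained to predict chronological age from imaging features, and the gap between predicted and actual age is then related to an exposure such as parity or vascular risk. The sketch below shows that logic end to end on synthetic data; it assumes scikit-learn, substitutes ridge regression for the 3D convolutional network used in PUBMED:38424358, and the feature construction and effect sizes are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n, p = 2000, 50
age = rng.uniform(54, 81, n)                        # chronological age (years)
risk = rng.poisson(1.0, n)                          # e.g. count of vascular risk factors
X = np.outer(age, rng.normal(0.0, 1.0, p)) / 80.0   # features that track age...
X += 0.5 * np.outer(risk, rng.normal(0.0, 1.0, p))  # ...shifted by the exposure
X += rng.normal(0.0, 1.0, (n, p))                   # measurement noise

# Out-of-sample predicted age, so the gap is not an artifact of overfitting.
pred_age = cross_val_predict(Ridge(alpha=1.0), X, age, cv=5)
gap = pred_age - age                                # brain age gap (years)

# Relate the gap to the exposure while adjusting for chronological age,
# since the raw gap is systematically biased toward the sample mean age.
A = np.column_stack([np.ones(n), risk, age])
beta, *_ = np.linalg.lstsq(A, gap, rcond=None)
print(f"extra predicted ageing per risk factor: {beta[1]:.2f} years")
```

Out-of-sample prediction (here via cross-validation) and adjustment for chronological age are the two details that most often trip up brain-age analyses; without them the gap largely reflects regression toward the mean rather than the exposure of interest.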
Instruction: Can keratinocytes cause failure of osseointegration? Abstracts: abstract_id: PUBMED:19063771 Can keratinocytes cause failure of osseointegration? Aim: Bone-anchored hearing aids are well-established implanted devices. We present two patients who suffered mixed hearing loss and who underwent titanium implant placement in the temporal bone to enable attachment of bone-anchored hearing aids. Osseointegration is necessary for such implants to function. We report these two cases to highlight how such osseointegration may be disrupted. Method: Attached tissue from the explanted or removed titanium implants was examined by transmission electron microscopy and histopathological analysis. Results: Attached tissue from both implants showed the presence of keratinocytes at the titanium implant and living bone interface. This was confirmed by histopathological analysis. In one case, there was frank keratinocyte proliferation, which had led to osseointegration failure; in the other case, such proliferation was present but not so advanced. Conclusion: These findings suggest that, in the cases reported, keratinocytes implanted between the titanium and the living bone, leading to disruption of osseointegration. abstract_id: PUBMED:36079421 Autophagy Plays Multiple Roles in the Soft-Tissue Healing and Osseointegration in Dental Implant Surgery-A Narrative Review. Dental endo-osseous implants have become a widely used treatment for replacing missing teeth. Dental implants are placed into a surgically created osteotomy in alveolar bone, the healing of the soft tissue lesion and the osseointegration of the implant being key elements to long-term success. Autophagy is considered the major intracellular degradation system, playing important roles in various cellular processes involved in dental implant integration. The aim of this review is an exploration of autophagy roles in the main cell types involved in the healing and remodeling of soft tissue lesions and implant osseointegration, post-implant surgery. We have focused on the autophagy pathway in macrophages, endothelial cells; osteoclasts, osteoblasts; fibroblasts, myofibroblasts and keratinocytes. In macrophages, autophagy modulates innate and adaptive immune responses, playing a key role in osteo-immunity. Autophagy induction in endothelial cells promotes apoptosis resistance, cell survival, and protection against oxidative stress damage. The autophagic machinery is also involved in transporting stromal vesicles containing mineralization-related factors to the extracellular matrix and regulating osteoblasts' functions. Alveolar bone remodeling is achieved by immune cell differentiation into osteoclasts; autophagy plays an important and active role in this process. Autophagy downregulation in fibroblasts induces apoptosis, leading to better wound healing by improving excessive deposition of extracellular matrix and inhibiting fibrosis progression. Autophagy seems to be a dual actor on the scene of dental implant surgery, warranting further research in order to completely reveal its positive features, which may be essential for clinical efficacy. abstract_id: PUBMED:26986184 Degenerative and regenerative morphological aspects in peri-implantitis. The purpose of this study was to describe the tissues surrounding two early failed dental implants morphologically and ultrastructurally.
In the first case, the implant's surface was analyzed by scanning electron microscopy (SEM) using an environmental scanning electron microscope and the surrounding tissues were analyzed by light microscopy (LM) and transmission electron microscopy. In the second case, we used only LM. By SEM analysis, very thin connective tissue was seen at the implant's apex; in the middle part, highly vascularized connective tissue and poorly mineralized areas were found, and there were fewer mineralized areas at the implant's base. Histologically and ultrastructurally, we identified a heterogeneous cellular population including fibroblast-like cells and osteoblasts with abundant amorphous extracellular matrix, as well as a number of inflammatory cells, predominantly neutrophils. In a small area around the implant, microorganisms classified as Actinomyces were identified. In this case, degenerative aspects were predominantly caused by infection. In the second case, our observations focused on recruitment of mesenchymal stem cells, their proliferation and differentiation into epithelial cells with keratinocyte phenotype, and formation of de novo capillaries from mesenchymal progenitors of endothelial cells. These regenerative events were accompanied by multiple areas of mineralization, culminating in bone formation around the implant. Correlative microscopic observations will need to be conducted in future studies, in various clinical conditions and post-operative stages, to better define the cellular events involved in healing and osseointegration; our observations clearly depict significant regenerative aspects despite the degenerative ones. abstract_id: PUBMED:33844114 Photobiomodulation using LLLT and LED of cells involved in osseointegration and peri-implant soft tissue healing. This study evaluated the influence of photobiomodulation (PBM) using low-level laser therapy (PBM/LLLT) or light-emitting diode (PBM/LED) therapy on peri-implant tissue healing. A laboratory model was used to assess the adhesion and metabolism of osteoblasts (SaOs-2), human gingival fibroblasts (HGF), and normal oral keratinocytes (NOK) seeded on a titanium (Ti) surface. After seeding the cells on disks of Ti placed in wells of 24-well plates, three irradiations were performed every 24 h at an energy density of 3 J/cm2. For PBM/LLLT, a LaserTABLE device was used with a wavelength of 780 nm and 25 mW, while for PBM/LED irradiation, a LEDTABLE device was used at 810 nm, 20 mW, at a density of 3 J/cm2. After irradiations, the number of cells (NC) attached and spread on the Ti surface, cell viability (CV), total protein (TP), and collagen (Col) synthesis were assessed. Alkaline phosphatase activity (ALP) was evaluated only for SaOs-2. Data were submitted to ANOVA complemented by Tukey statistical tests at a 5% significance level. PBM significantly increased adherence of NOK to the Ti surface, while no significant effect was observed for SaOs-2 and HGF. PBM positively affected CV, as well as Col and TP synthesis, in distinct patterns according to the cell line. Increased ALP activity was observed only in those cells exposed to PBM/LLLT. Considering cell specificity, this investigation reports that photobiomodulation with low-power laser and LED at determined parameters enhances cellular functions related to peri-implant tissue healing in a laboratory model. abstract_id: PUBMED:34159966 Biological sealing and integration of a fibrinogen-modified titanium alloy with soft and hard tissues in a rat model.
Percutaneous or transcutaneous devices are important and unique, and the corresponding biological sealing at the skin-implant interface is the key to their long-term success. Herein, we investigated surface modification to enhance biological sealing, using a metal sheet and screw bonded by biomacromolecule fibrinogen mediated via pre-deposited synthetic macromolecule polydopamine (PDA) as a demonstration. We examined the effects of a Ti-6Al-4V titanium alloy modified with fibrinogen (Ti-Fg), PDA (Ti-PDA) or their combination (Ti-PDA-Fg) on the biological sealing and integration with skin and bone tissues. Human epidermal keratinocytes (HaCaT), human foreskin fibroblasts (HFF) and preosteoblasts (MC3T3-E1), which are closely related to percutaneous implants, exhibited better adhesion and spreading on all three modified sheets compared with the unmodified alloy. After three-week subcutaneous implantation in Sprague-Dawley (SD) rats, the Ti-PDA-Fg sheets could significantly attenuate the soft tissue response and promote angiogenesis compared with other groups. Furthermore, in the model of percutaneous tibial implantation in SD rats, the Ti-PDA-Fg screws dramatically inhibited epithelial downgrowth and promoted new bone formation. Hence, the covalent immobilization of fibrinogen through the precoating of PDA is promising for enhanced biological sealing and osseointegration of metal implants with soft and hard tissues, which is critical for an orthopedic percutaneous medical device. abstract_id: PUBMED:29146125 All-cause mortality in patients with basal and squamous cell carcinoma: A systematic review and meta-analysis. Background: There are varying reports of the association of basal cell carcinoma (BCC) and cutaneous squamous cell carcinoma (SCC) with mortality. Objective: To synthesize the available information on all-cause mortality after a diagnosis of BCC or SCC in the general population. Methods: We searched PubMed (1966-present), Web of Science (1898-present), and Embase (1947-present) and hand-searched to identify additional records. All English articles that reported all-cause mortality in patients with BCC or SCC were eligible. We excluded case reports, case series, and studies in subpopulations of patients. Random effects model meta-analyses were performed separately for BCC and SCC. Results: The searches yielded 6538 articles, and 156 were assessed in a full-text review. Twelve studies met the inclusion criteria, and 4 were included in the meta-analysis (encompassing 464,230 patients with BCC and 175,849 with SCC), yielding summary relative mortalities of 0.92 (95% confidence interval, 0.83-1.02) in BCC and 1.25 (95% confidence interval, 1.17-1.32) in SCC. Limitations: Only a minority of studies controlled for comorbidities. There was significant heterogeneity in meta-analysis (χ² P < .001, I² > 98%), but studies of SCC were qualitatively concordant: all showed statistically significant increased relative mortality. Conclusions: We found that patients with SCC are at higher risk for death from any cause compared with the general population. abstract_id: PUBMED:19153339 Topical tretinoin therapy and all-cause mortality. Objective: To evaluate the relation of topical tretinoin, a commonly used retinoid cream, with all-cause mortality in the Veterans Affairs Topical Tretinoin Chemoprevention Trial (VATTC).
The planned outcome of this trial was risk of keratinocyte carcinoma, and systemic administration of certain retinoid compounds has been shown to reduce risk of this cancer but has also been associated with increased mortality risk among smokers. Design: The VATTC Trial was a blinded randomized chemoprevention trial, with 2- to 6-year follow-up. Oversight was provided by multiple independent committees. Setting: US Department of Veterans Affairs medical centers. Patients: A total of 1131 veterans were randomized. Their mean age was 71 years. Patients with a very high estimated short-term risk of death were excluded. Interventions: Application of tretinoin, 0.1%, or vehicle control cream twice daily to the face and ears. Main Outcome Measures: Death, which was not contemplated as an end point in the original study design. Results: The intervention was terminated 6 months early because of an excessive number of deaths in the tretinoin-treated group. Post hoc analysis of this difference revealed minor imbalances in age, comorbidity, and smoking status, all of which were important predictors of death. After adjusting for these imbalances, the difference in mortality between the randomized groups remained statistically significant. Conclusions: We observed an association of topical tretinoin therapy with death, but we do not infer a causal association, which current evidence suggests is unlikely. abstract_id: PUBMED:26064889 SERCA2 Haploinsufficiency in a Mouse Model of Darier Disease Causes a Selective Predisposition to Heart Failure. Null mutations in one copy of ATP2A2, the gene encoding sarco/endoplasmic reticulum Ca(2+)-ATPase isoform 2 (SERCA2), cause Darier disease in humans, a skin condition involving keratinocytes. Cardiac function appears to be unimpaired in Darier disease patients, with no evidence that SERCA2 haploinsufficiency itself causes heart disease. However, SERCA2 deficiency is widely considered a contributing factor in heart failure. We therefore analyzed Atp2a2 heterozygous mice to determine whether SERCA2 haploinsufficiency can exacerbate specific heart disease conditions. Despite reduced SERCA2a levels in the heart, Atp2a2 heterozygous mice resembled humans in exhibiting normal cardiac physiology. When subjected to hypothyroidism or crossed with a transgenic model of reduced myofibrillar Ca(2+)-sensitivity, SERCA2 deficiency caused no enhancement of the disease state. However, when combined with a transgenic model of increased myofibrillar Ca(2+)-sensitivity, SERCA2 haploinsufficiency caused rapid onset of hypertrophy, decompensation, and death. These effects were associated with reduced expression of the antiapoptotic Hax1, increased levels of the proapoptotic genes Chop and Casp12, and evidence of perturbations in energy metabolism. These data reveal myofibrillar Ca(2+)-sensitivity to be an important determinant of the cardiac effects of SERCA2 haploinsufficiency and raise the possibility that Darier disease patients are more susceptible to heart failure under certain conditions. abstract_id: PUBMED:30772381 Examining the feasibility of a "top-down" approach to enhancing the keratinocyte-implant adhesion. The adhesion of human epidermal keratinocytes to the implant surface is one of the most critical steps during the patient's recovery from implantation of a transcutaneous prosthesis.
To improve the success rate of transcutaneous prosthetic implants, we explored a new "top-down" approach to promoting this dynamic adhering process through modulation of upstream cell signaling pathways. To examine the feasibility of this novel approach, we first established an in vitro platform that is capable of providing a non-invasive, real-time, quantitative characterization of the keratinocyte-implant interaction. This platform is based on the dissipation monitoring function of the quartz crystal microbalance with dissipation monitoring (QCM-D) in conjunction with the open-module setup of the QCM-D. We then employed this platform to assess the effects of various pathway-specific modulators on the adhering process of keratinocytes. We demonstrated that this "top-down" approach is as effective in enhancing the adhesion of keratinocytes as the conventional "bottom-up" approach that relies on modifying the substrate surface with an adhesion protein such as fibronectin. We envision that this new "top-down" approach combined with the QCM-D-based in vitro platform will help facilitate the future development of new therapies for enhancing osseointegration and promoting wound healing. abstract_id: PUBMED:36097106 Therapeutic effects of mesenchymal stem cells-conditioned medium derived from suspension cultivation or silymarin on liver failure mice. Background: Common treatments of liver disease have failed to meet all the needs in this important medical field, resulting in an urgent need for new adjuvant therapies. Mesenchymal stem cells (MSCs) and their derivatives are promising tools in this regard. We aimed to compare Silymarin, a traditional treatment, with mesenchymal stem cell-conditioned medium (MSC-CM), a novel strategy, both of which have therapeutic potential for liver failure (LF) treatment. Methods And Results: Mouse models of liver failure were induced with CCl4 and were treated in the groups as follows: normal mice receiving DMEM-LG medium as control, LF-mice receiving DMEM-LG medium as sham, LF-mice receiving Silymarin as LF-SM, and LF-mice receiving MSC sphere CM as LF-MSC-CM. Biochemical, histopathological, molecular and protein level parameters were evaluated using blood and liver samples. Liver enzymes, MicroRNA-122 values as well as necrotic score were significantly lower in the LF-SM and LF-MSC-CM groups compared to sham. LF-SM showed significantly higher levels of total antioxidant capacity and malondialdehyde than that of LF-MSC-CM groups. Sph-MSC-CM not only induced more down-regulated expression of fibrinogen-like protein 1 and receptor-interacting protein kinase 1 but also led to a higher expression level of keratinocyte growth factor. LF-MSC-CM showed a lower mortality rate compared to the other groups. Conclusions: Hepato-protective potentialities of Sph-MSC-CM are comparable to those of Silymarin. More inhibition of necroptosis/necrosis and inflammation might result in rapid liver repair in the case of MSC-CM administration. Answer: Yes, keratinocytes can cause failure of osseointegration. In a study examining bone-anchored hearing aids, which require osseointegration of titanium implants, it was found that the presence of keratinocytes at the titanium implant and living bone interface disrupted osseointegration. In one of the cases presented, there was significant keratinocyte proliferation that led to the failure of osseointegration, while in another case, such proliferation was present but not so advanced (PUBMED:19063771).
This suggests that keratinocytes can indeed interfere with the process of osseointegration, leading to implant failure.
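As context for the pooled estimates in PUBMED:29146125 above (relative mortality 0.92 for BCC, 1.25 for SCC, I² > 98%), the sketch below works through a DerSimonian-Laird random-effects meta-analysis, a standard way such summary relative mortalities and heterogeneity statistics are computed. The four study inputs are illustrative placeholders, not the studies actually pooled in that review.

```python
import numpy as np

# (relative mortality, 95% CI lower, 95% CI upper) -- illustrative placeholder studies
studies = [(1.30, 1.15, 1.47), (1.18, 1.05, 1.33), (1.22, 1.10, 1.35), (1.32, 1.20, 1.45)]

y = np.log([s[0] for s in studies])      # log relative mortality per study
se = (np.log([s[2] for s in studies]) - np.log([s[1] for s in studies])) / (2 * 1.96)
w = 1.0 / se**2                          # inverse-variance (fixed-effect) weights

# DerSimonian-Laird moment estimator of the between-study variance tau^2.
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
dof = len(y) - 1
tau2 = max(0.0, (Q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - dof) / Q) * 100.0     # I^2 heterogeneity (percent)

w_re = 1.0 / (se**2 + tau2)              # random-effects weights
mu = np.sum(w_re * y) / np.sum(w_re)
se_mu = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)
print(f"pooled relative mortality: {np.exp(mu):.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {I2:.0f}%")
```

When I² is high, as in the review, τ² dominates the weights, each study counts nearly equally regardless of size, and the pooled confidence interval widens; this is why the heterogeneity caveat matters even though all the SCC studies pointed in the same direction.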